January 25, 2024
AI-t's raining brands: The top generative-AI brands in 2024


This article was co-written by David Barnett and Rebecca Newman.

As the online buzz around artificial intelligence technologies[1] carries on into the new year, we take a look at which generative-AI brands have the greatest online presence at the start of 2024. To do so, we use our new methodology for measuring online prominence, as outlined in our recent analysis of the top 100 global brands[2] and subsequently applied in studies of the top fashion brands[3] and cryptocurrencies[4]. The analysis considers a set of 175 of the most popular generative-AI brand names (including tools, models, etc.), drawn from various sources[5]. Further details of the methodology are given in Appendix A.

From this analysis, the top thirty brands and their respective prominence scores (measuring their relative degrees of online prominence) are shown in Figure 1.


Figure 1: Prominence scores for the top thirty most prominent generative-AI brands


It is perhaps unsurprising that the very popular GPT / ChatGPT brand, and its parent company OpenAI, are the most prominent brands online by a significant margin. However, other well-known names, including Copilot, Bard, Gemini, DeepMind and Midjourney, also have significant presences, all featuring within the top 12. It is also noteworthy that the high-profile brand names Bing Chat / Bing AI (Microsoft) and Grok[6] (xAI) are not currently well represented in the dataset of web content, with neither achieving a top-30 placing (gaining scores of 0.165 (position 32) and 0.135 (position 35), respectively).


In terms of more in-depth trends, we also note the following:

  • Enterprise use – Jasper and Copilot achieve third and fourth place (respectively) in the overall ratings, with both BLOOM and Cohere in the top 15. This is consistent with the increasing focus on the application of AI within the enterprise, where AI tools can provide templates, insights and increased workflow efficiency. As AI moves toward a phase of consistent productivity (rather than the initial spikes of hype and disillusionment), users are demanding tools with a practical application that go beyond the immediate novelty factor of the first phase. Similar comments also apply to Copy AI (position 16) and ClickUp (position 21).

  • Relevance of open source – TensorFlow, HuggingFace and PyTorch (all in the top 20) are open-source repositories and communities for machine learning and AI. The prominence of these platforms reflects a growing recognition of open-source collaboration in AI development. LLaMA (position 24) has also been part of this conversation. The weights and starting code for LLaMA 2 were released amid much hype in Summer 2023, prompting accusations that Meta had leveraged the publicity around promising an open-source LLM while offering a model that is not truly open source.

  • New complex content types – Synthesia, a platform for AI-generated video content, also scores highly (position 8), which is consistent with focus on the increasing capability of these complex media generation models. The same is also true of Murf (position 25), an AI voice generator.

  • The prominence of Google – Google models feature highly in the list, which is to be expected after the company's launch event in December last year, where it released the new Gemini LLM, which now powers Bard, Gemini Nano and Gemini Pro. Google held back its most anticipated model, Gemini Ultra (expected to outperform ChatGPT across a number of benchmarks), which is expected to be released in late January 2024. DeepMind, which has not been particularly prominent in the hype cycles over the last year, preferring to stay below the radar and continue its research, also features at position 9. This may in part be due to the publication of research in November revealing that the DeepMind AI tool GNoME has discovered 2.2 million new crystal structures, of which nearly 400,000 are predicted to be stable materials[7] that could power future technology. For context, only around 28,000 stable materials were discovered in the last decade of human research.

  • Small LLMs (Large Language Models) – One trend we may expect to see going forward is a growth in prominence of ‘small’ or ‘lightweight’ LLMs, which have smaller neural networks and fewer parameters, and can be used offline and on mobile devices. Examples include Microsoft’s Orca 2 (7B), TII’s Falcon 7B (Falcon currently appears only at position 97 in the list, and Orca at 156) and Gemini Nano (which, as an explicit brand-name phrase – i.e. a subset of the results for Gemini generally – appears at position 154).

The data shows that, as we enter 2024, public interest in AI is focusing on real-world outputs. These are diverse – in this article alone we have touched on various fields, including content generation, enterprise productivity and materials science. There is increasingly a voice for the open-source proponents, not least because the models being developed are equalling (or even outperforming) the big players across a number of key benchmarks. The fight to secure adequate compute power is likely to continue, but we may see a counter-camp advocating for small, targeted and cheaper (for providers, users and arguably also the planet) technology, with the rise of the small language model.

The reproducibility of our methodology means that the same queries can be run in future studies, to provide a means by which changes in prominence over time can be quantified on a like-for-like basis. This will allow us to track the relative fortunes of the key generative-AI players over the coming months, monitor new brands as they emerge, and determine how the forthcoming data reflects our predictions!


Appendix A: Methodology

  • We use a series of generic search queries[8] relating to generative AI, to bring back a sample of pages for analysis, and then measure the number and prominence of mentions of each AI brand on each page, using the ‘content scoring’ approach.
  • The overall prominence score for each brand is calculated as the mean of the content scores on each page (calculated across the whole dataset)[9].
  • In general, the matching is carried out on a ‘wildcard’ basis (i.e. allowing the reference to be counted even if the brand-term appears as only a sub-string within a longer word), since most of the brands are relatively distinctive, and it is desirable to be able to capture brand variations and adaptations, and consider them to be ‘references’ to the brand (e.g. for GPT, we wish to include references to GPT(-)4, GPT(-)5, ChatGPT, AutoGPT, etc.).
  • The risk of ‘false positives’ (i.e. references to the same brand names in unrelated contexts) is also reduced through the use of the subject-area-specific search queries.
  • However, for the less distinctive brand names, we make use of explicit filtering where necessary (e.g. for ‘Descript’, we require the string not to be suffixed with any additional alphabetical characters, so as to avoid counting uses of words such as ‘description’).
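The matching and scoring steps above can be sketched in a few lines of Python. This is a minimal illustration only: the brand lists and page texts below are hypothetical, and the per-page ‘content score’ is stubbed as a raw mention count, whereas the real methodology also weights the prominence of each mention on the page.

```python
import re
from statistics import mean

# Most brands match on a 'wildcard' basis: any occurrence of the term,
# even inside a longer token (so "GPT" also counts "ChatGPT" and "GPT-4").
# Less distinctive names get an explicit filter: "Descript" must not be
# followed by a further letter, so words like "description" are excluded.
FILTERED_BRANDS = {"Descript"}

def count_references(brand: str, text: str) -> int:
    """Count references to a brand in a page's text."""
    pattern = re.escape(brand)
    if brand in FILTERED_BRANDS:
        pattern += r"(?![A-Za-z])"  # no trailing alphabetical character
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

def prominence_score(brand: str, pages: list[str]) -> float:
    """Mean per-page content score across the whole dataset
    (content score stubbed here as a raw mention count)."""
    return mean(count_references(brand, page) for page in pages)

pages = [
    "ChatGPT and GPT-4 dominate; a description of Descript follows.",
    "Gemini Nano and Gemini Pro were announced by Google.",
]
print(count_references("GPT", pages[0]))       # ChatGPT + GPT-4 -> 2
print(count_references("Descript", pages[0]))  # 'description' excluded -> 1
print(prominence_score("Gemini", pages))       # (0 + 2) / 2 -> 1.0
```

The negative lookahead on filtered brands is what prevents ‘Descript’ from matching inside ‘description’, while the plain substring match still lets ‘GPT’ capture variations such as ChatGPT and AutoGPT.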



[1] https://www.iamstobbs.com/opinion/trends-in-web3-part-1-a-look-at-blockchain-domains

[2] https://www.iamstobbs.com/online-brand-prominence-and-sentiment-ebook

[3] https://www.iamstobbs.com/measuring-brand-prominence-of-fashion-brands-ebook

[4] ‘Further studies in brand prominence: a hidden trend in crypto’, forthcoming Stobbs blog post (link TBC)

[5] https://en.wikipedia.org/wiki/Generative_artificial_intelligence

[6] https://www.iamstobbs.com/opinion/cant-stop-the-grok-domain-infringements-following-xs-ai-brand-launch

[7] https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/

[8] The queries consist of the terms ‘AI’, ‘artificial intelligence’ and ‘generative AI’, both in isolation and in combination with each of the terms ‘brands’, ‘tools’, ‘products’, ‘models’, and ‘applications’, and with each combination also submitted in conjunction with the terms ‘top’ and ‘popular’. URLs are taken from the first page of results from google.com.
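For illustration, a query set of this shape can be generated programmatically. The sketch below assumes the terms are combined as simple space-separated phrases; the exact phrasing submitted in the study may differ.

```python
from itertools import product

base_terms = ["AI", "artificial intelligence", "generative AI"]
categories = ["brands", "tools", "products", "models", "applications"]
qualifiers = ["top", "popular"]

# Each base term in isolation...
queries = list(base_terms)
# ...each base term combined with each category term...
for base, cat in product(base_terms, categories):
    queries.append(f"{base} {cat}")
    # ...and each combination also prefixed with a qualifier.
    for q in qualifiers:
        queries.append(f"{q} {base} {cat}")

print(len(queries))  # 3 + 15 + 30 = 48
```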

[9] Findings are based on searches and analysis carried out on 05-Jan-2024 and 07–08-Jan-2024, respectively. The dataset consisted of 1,889 unique webpages, and the means are calculated across the subset of pages which were accessible via the automated analysis script.
