
Generative AI Innovation: Capturing Value Throughout the Tech Stack

Generative AI technologies, perhaps most prominently ChatGPT, have captured the world's attention in ways that cannot be dismissed as hype or narrow use cases. This a16z article describes the emerging tech stack for these technologies, highlighting that a significant portion of generative AI revenue currently goes to infrastructure vendors rather than to the companies providing end-user-facing solutions, where value may be most evident.

However, it may be difficult to isolate where the greatest value lies in AI technologies. Stepping back a bit, the AI transformation of the last ten years has been driven by advances in three areas coming together: (1) increased computing power and broader access to that power; (2) access to large amounts of relevant data; and (3) refinements to models and other key AI techniques.

Today, computing power may still be the rate-limiting factor for making useful generative AI accessible to a broad user base, particularly given the size of models like GPT-3 and the amount of data they are trained on. Even so, this leaves opportunities for new market entrants to refine generative AI models so that they depend less on computing power; in addition, innovators may be able to deliver vertical-specific solutions built on fine-tuned models that provide greater value to customers without significantly increasing computing resource costs. In other words, even if there are no "systematic moats in generative AI" today, there is still plenty of time for the balance of value capture to shift throughout the tech stack as businesses find new ways to deliver impactful solutions.

As the a16z article puts it, the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven't captured most of it.

Tags

innovative technology