Generative AI Eight Months After the Launch of ChatGPT—Current Landscape, the Trajectory Forward, and Implications for the Enterprise

From credit card processing to powering early fleets of autonomous machines—AI has permeated our lives steadily over the last two decades. Against this backdrop of gradual progress, in November 2022 the world witnessed a sharp and transformational shift with OpenAI’s launch of ChatGPT. The progression from early machine learning and neural networks in the 1990s, to complex pattern interpretation via deep learning in the 2010s for applications such as computer vision, to AI applied to restricted but high-intelligence tasks like AlphaGo in 2016, has laid the groundwork for large language models (LLMs) that are creative, communicative, context-aware, and self-learning. The transformer architecture behind LLMs has been around since 2017. However, it was not until the bold product introduction of ChatGPT that the Generative AI chapter was officially ushered in.

While there is admittedly considerable hype around Gen AI, it has captured our collective imagination with its potential. Gen AI feels magical. It reveals glints of the elusive ‘artificial general intelligence’ (AGI), appearing at times to pass the Turing test of emulating human-like behavior. The ability for humans to now interact deeply with devices using natural language opens up unprecedented capabilities for productivity and creative expression. Unlike prior generations, Gen AI feels transcendent—impacting how individuals will conduct their daily lives, and applicable not just to consumers but also to every business segment. AI has, at last, come across as universally accessible.

In this post, we will share our perspectives on:

  1. Categorization of the Gen AI landscape with assessments of key segments
  2. How foundation models will continue to progress
  3. Implications for the enterprise
  4. The importance of partnering with founders to unlock value

Unpacking the Current Generative AI Landscape

The first step in understanding the generative AI space is to map it out visually. We have categorized the technology stack into four progressive layers, starting from the bottom: core infrastructure (compute and vector databases), foundation models, developer and business tools, and horizontal business applications—with notable companies in each.

Within these groupings, we at Next47 aim to identify the greatest potential for 1) business value and customer ROI generation, 2) enduring differentiation driven by technical, data, and execution advantages, and 3) the ability to draw world-class founding teams. These criteria define the attractiveness of each category for new company creation.


Infrastructure

The amount of compute and storage required to support a Gen AI-powered future is staggering—and these technology enablers headline the Infrastructure layer. While a number of exciting chip players have emerged in the past few years and we expect new architectures in the future, the NVIDIA GPU remains the dominant compute platform. Its scarcity is well-documented—with wait times for A100 and H100 clusters of several quarters or more, leading to creative pooling of resources at scale. With these dynamics, software that fuels compute efficiency will be prized (as evidenced by Databricks’ recent acquisition of MosaicML for $1.3B). Vector databases are instrumental for storing embeddings derived from custom datasets—and while proprietary approaches like Pinecone are doing well, we also see immense scope for more developer-friendly open-source options.
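To make the role of a vector database concrete, here is a minimal, purely illustrative sketch of what such a system does at its core: store embedding vectors and retrieve the most similar entries by cosine similarity. The document names and embedding values are hypothetical toy data; production systems add approximate-nearest-neighbor indexing, persistence, and metadata filtering.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class ToyVectorStore:
    """Bare-bones in-memory stand-in for a vector database."""

    def __init__(self):
        self.items = []  # list of (doc_id, embedding) pairs

    def add(self, doc_id, embedding):
        self.items.append((doc_id, embedding))

    def query(self, embedding, top_k=1):
        # Exact (brute-force) search; real systems use ANN indexes.
        ranked = sorted(self.items,
                        key=lambda item: cosine_similarity(item[1], embedding),
                        reverse=True)
        return [doc_id for doc_id, _ in ranked[:top_k]]

store = ToyVectorStore()
store.add("refund-policy", [0.9, 0.1, 0.0])   # hypothetical embeddings
store.add("shipping-faq",  [0.1, 0.8, 0.2])
print(store.query([0.85, 0.2, 0.05]))  # nearest neighbor: refund-policy
```

In a real deployment, the embeddings would come from a model applied to the enterprise's own documents, which is precisely why this layer sits so close to proprietary data.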

Foundation Models

The Foundation Model layer has drawn some of the most exceptional technical talent to date. While building new generalized LLMs demands deep AI expertise, training and operating them can be tough on the wallet. As a result, the first wave of foundation model companies—such as Cohere, Anthropic, Adept, and Inflection—that have raised mega-rounds to support astronomical R&D costs are quickly building not just technical, but also capital moats. The best-funded, highest-valuation startups are competing against not just OpenAI, but also technology heavyweights including Google and Meta, and open-source titans such as HuggingFace. We see this segment as challenging for new entrants.

Developer & Business Tools

Managing a Gen AI program—which involves data curation and pipelining, models, training, optimization, tie-in with existing applications, etc.—is complex, especially at scale. Embarking on such an effort requires major new commitment on top of existing digitalization and cloud-first initiatives. This creates strong potential in the Developer and Business Tools layer, targeting not just ML/AI teams but also broader developer and IT organizations. Included here are chaining frameworks that leverage LLMs in sequence for application building, and also tools that supercharge software development—such as code generation, where we see high potential given programming’s deterministic nature, the lower likelihood of models going off the rails on subjective inputs, and the early success of tools like GitHub Copilot. Two of the most attractive aspects of this category are that 1) budgets are more likely to be pre-established and 2) enterprises are prioritizing privacy/security considerations. Startups here directly tap into the enterprise movement to “roll your own,” which carries hefty momentum—more on this below.
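The chaining idea mentioned above can be sketched in a few lines: each step wraps a model call, and the output of one step feeds the prompt of the next. The `fake_llm` function below is a stand-in for a real LLM API (no network call is made), and the step names and prompt templates are illustrative assumptions, not any particular framework's API.

```python
def fake_llm(prompt):
    # Stand-in for a real LLM API call; returns a canned "completion"
    # so the sketch runs without credentials or network access.
    return f"[model output for: {prompt}]"

def make_step(template):
    """Wrap a prompt template into a callable chain step."""
    def step(previous_output):
        return fake_llm(template.format(input=previous_output))
    return step

def run_chain(steps, initial_input):
    output = initial_input
    for step in steps:
        output = step(output)  # each step consumes the prior step's output
    return output

# Two-step chain: summarize a document, then draft an email from the summary.
summarize = make_step("Summarize the following text: {input}")
draft_email = make_step("Write a short customer email based on: {input}")

result = run_chain([summarize, draft_email], "Q2 support ticket backlog report")
print(result)
```

Frameworks in this layer add the pieces the sketch omits—retries, output parsing, tool use, and observability—which is where much of the enterprise value lies.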

Horizontal Business Applications

Within the realm of Horizontal Applications, in the less specialized use cases—e.g. brand and marketing, enterprise search and summarization, knowledge agents—our view is that current category leaders have a meaningful execution and head-start advantage. We see greater potential in the topmost layer of the stack: segments that require specialized knowledge or proprietary data sets, or that benefit from network effects, and thus will yield superior models. Further, the future of vertical applications is largely yet to be defined—use of Gen AI in industrials, finance, healthcare, logistics, transportation, and more remains nascent. Across all application segments, we see scope for re-invention of the user interface and believe that a common trait among winners will be creative and customer-delighting UIs.


Model improvement will continue at high velocity

We expect foundation model development to follow a steep slope. Enhancements in accuracy, context window, multimodality, and more will be relatively quick; the progression from GPT-3 to GPT-4 is illustrative of this speed. Other advances—including deeper problem-solving capabilities—will be driven by:

  1. Enhanced curation/qualification of data sets
  2. Greater incorporation of human feedback in learning loops
  3. Improved structural guardrails to reduce hallucination
  4. Rich innovation around new architectures, including sparse mixture-of-experts

Overall, we expect an explosion of model sizes and types, with certain classes of models beginning to commoditize. Google exemplifies the diversity trend with its recently launched PaLM-2 family covering a range of use cases, and upcoming mega-model set, Gemini. Proprietary and open-source flavors will both flourish. Considerations around model selection will encompass not just accuracy and latency, but also total cost of ownership, consisting of training, inference, and other IT components.

While enormous, monolithic, general-purpose models will continue to proliferate, some of the greatest business value will be driven by smaller, domain-specific variants that excel at specialized tasks and have lower training and inference costs. Among the PaLM models from Google noted above are Med- and Sec-PaLM, specially trained for healthcare and cybersecurity, respectively. Bloomberg’s eponymous finance-oriented GPT model, as an overlay to its Terminal product, exemplifies domain-specific functionality trained with proprietary data and pulled together internally. The story is similar for one of our portfolio companies, which just launched its 30-billion-parameter contact center LLM trained on its vast dataset of global contact center calls. We expect more enterprises to follow suit.

Models have become increasingly multimodal and the trend will continue. Text, voice, and visuals will be treated with near-interchangeability going forward. Burgeoning volumes of new LLM-spawned text are already leading to the emergence of text-to-speech and text-to-image/video as valuable categories.


The far-reaching implications for the enterprise

With OpenAI striking first, consumer-facing applications have helped popularize the Gen AI wave—many as plug-ins to or wrappers around ChatGPT. The power of the technology is now evident and enterprises are at various stages of deciphering how to leverage it, with B2C moving with more urgency than B2B. We find ourselves in a special time window where the technology is available, but most enterprises have yet to launch broadly. They are operating in fast-learning mode and asking questions about how much to develop and control internally vs. buying off the shelf.

Our conversations at Next47 with enterprises reveal that many are aiming for deployment in two phases, increasing the risk level as they learn about the technology’s potential and pitfalls.

Phase One: Running experiments and using Gen AI for internal use and productivity gains, with applications such as document summarization, internal knowledge management/search and note-taking. 

Phase Two: Incorporating the technology into external-facing products for the benefit of customers. 

Some are also pursuing both Phases in parallel, with the breadth and availability of new tools making it ever-easier to spin up limited prototypes and collect quick feedback. It is remarkable to witness the sheer breadth of verticals leaning in—from traditionally fast-adopting enterprise software companies to industrial players to those focused on deeptech/robotics.

As enterprises explore what is possible, they will follow a spectrum of options. Some of the more technically sophisticated organizations will deploy developer teams to roll their own solutions. This movement has been inspired in part by the release of Meta’s LLaMA model family earlier this year, which has led to a flurry of enhanced training and fine-tuning—with techniques such as low-rank adaptation (LoRA) and reinforcement learning from human feedback (RLHF) entering the picture, along with an ensuing series of South American camelid-monikered models. Another strategy is a partially-open stance—for example, using proprietary data sets to create native embeddings in internal vector databases, while calling external models via API to power applications. A third group of enterprises will elect to buy full-stack solutions off the shelf, but often deploy them in private instances with safeguards around data sharing. While internal development and more granular control carry inherent advantages, many enterprises report that it is more difficult than expected to spin up open-source-based solutions.
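To illustrate why LoRA makes fine-tuning so much cheaper, here is a small numeric sketch. Instead of updating a full d×d weight matrix, LoRA learns two low-rank factors B (d×r) and A (r×d), and the effective weight becomes W + B·A. The matrices and dimensions below are toy values chosen for illustration; real models use hidden dimensions in the thousands and ranks on the order of 4–64.

```python
def matmul(X, Y):
    # Naive matrix multiply, sufficient for small illustrative matrices.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_effective_weight(W, B, A, alpha=1.0):
    # LoRA: W' = W + alpha * (B @ A), where B is d x r and A is r x d.
    BA = matmul(B, A)
    return [[W[i][j] + alpha * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example with d=2, r=1 (values are illustrative only).
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[0.5],
     [0.5]]
A = [[0.2, 0.2]]
W_adapted = lora_effective_weight(W, B, A)
print(W_adapted)

# Trainable-parameter comparison at a more realistic scale:
d, r = 1024, 8
full_params = d * d              # full fine-tuning updates the entire matrix
lora_params = d * r + r * d      # LoRA trains only the B and A factors
print(full_params // lora_params)  # 64x fewer trainable parameters
```

The base weights W stay frozen; only the small B and A factors are trained, which is what drives down both fine-tuning cost and checkpoint size.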

Developing a Gen AI strategy has become table-stakes. Most enterprises are beginning to view it not just as a matter of competitive advantage, but survival. AI and digital natives will have a natural head start while others scramble to maintain pace. While we have highlighted select technology categories that we believe to be attractive, our overarching thesis is that this “race to keep up” creates tremendous opportunity for startup founders to innovate across the entire stack.


All-in together on Gen AI

As outlined above, the Gen AI space is complex and fast-moving, and we see a unique opportunity for startups to define its future. As such, rolling up our sleeves and working together with founders is an essential foundation for our work in Gen AI.

Looking forward, we strive to build community amongst builders and customers in the space. For example, in May we hosted a Generative AI Unlocked event for over 130 founders and thought leaders covering topics from the future of models and training/inference compute, to practical applications of LLMs in cybersecurity, privacy/compliance, and contact centers. Speakers included Saurabh Baji (Cohere), Andy Hock (Cerebras), Hila Segal (Observe.AI), DJ Sampath (Armorblox), and Cathy Polinsky (DataGrail) who shared candid and hard-earned lessons from working at the frontiers of Gen AI.

We have much more planned for the future. At Next47, our aim is to continue partnering with the most promising enterprise companies in Gen AI, globally. The free exchange of ideas will be key to pushing the frontiers of the space forward—and one of the key insights that Next47 can bring to founders is our understanding of enterprise customers. If you are building a compelling product in Generative AI, we would love to hear from you: debjit.mukerji(at). If you would like to be added to our mailing list for future AI events, please drop us a note.

Footnote: Note that ChatGPT was not invoked in the composition of this piece; all of its shortcomings should therefore be attributed squarely to the authors. We appreciate the founders and enterprise customers, including those in our portfolio, who have generously informed this market landscape; and a special thanks to the following thought leaders who provided critical review: Eli Collins, Swapnil Jain, Justin Ho, Oliver Cameron, and Diego Oppenheimer.