What OpenAI’s Dev Day Announcements mean for the Agent / Copilot market

Marcel Marais & David Wood

Nov 9, 2023

This week OpenAI hosted their first ever ‘Dev Day’ and the announcements have sent the AI world into somewhat of a frenzy. Many developers are trying to figure out which pieces of the AI stack are still required and which are now taken care of underneath OpenAI’s ever-opaque API.

This may spell the death of some “ChatGPT for X” companies, while for others it will accelerate progress. In this article we comment specifically on the new ‘Assistants API’ and custom ‘GPTs’ and how we think they’ll change the agent and copilot market. 

What’s the difference between conversational LLMs and LLM-based agents?

First let’s make sure we’re on the same page: Conversational LLMs like the original ChatGPT released in 2022 are text in, text out. There’s no reasoning outside of the predictive model. They're great for quick Q&A and single-step tasks but don't "think ahead" or plan. 

LLM-based agents are more advanced systems that are composed of various ‘modules’ to perform complex tasks. At a high level these modules include (see the sketch after this list):

  • Profile: This defines the agent’s role or personality. For instance, an agent could take on the persona of an expert Python programmer, shaping its interactions according to that expertise.

  • Memory: This module enables the agent to store and recall information; the storage can range from traditional databases to more advanced vector databases.

  • Planning: Here, the agent can break down tasks into smaller, manageable steps.

  • Actions: This is the agent’s ability to interact with the world around it, using tools such as APIs, or performing activities like querying databases or executing code, resulting in real-world outcomes.
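
To make these modules concrete, here is a minimal, hypothetical sketch of how they fit together in code. It is not any particular framework’s API; every class and method name below is made up for illustration, and the planning and action steps would be LLM calls in a real agent.

```python
# Illustrative agent skeleton wiring together the four modules above.
# All names are hypothetical and not tied to any framework or to OpenAI's API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Agent:
    profile: str                                        # Profile: role / persona prompt
    memory: List[str] = field(default_factory=list)     # Memory: naive list; could be a vector DB
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)  # Actions: callable tools

    def plan(self, task: str) -> List[str]:
        # Planning: in a real agent this would be an LLM call that decomposes
        # the task; here we return a single step to keep the sketch runnable.
        return [task]

    def act(self, step: str) -> str:
        # Actions: use a tool if one is mentioned in the step, else answer directly.
        for name, tool in self.tools.items():
            if name in step:
                return tool(step)
        return f"[{self.profile}] answer to: {step}"

    def run(self, task: str) -> List[str]:
        results = []
        for step in self.plan(task):
            result = self.act(step)
            self.memory.append(result)  # Memory: store outcomes for later recall
            results.append(result)
        return results


agent = Agent(profile="expert Python programmer",
              tools={"search": lambda q: f"search results for {q!r}"})
print(agent.run("search for the latest pandas release"))
```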

What is OpenAI’s ‘Assistants API’?

The Assistants API is OpenAI’s way of simplifying the agent-building process for developers. In their words, ‘an Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries.’ The two big value-adds here are ‘knowledge’ and ‘tools’.

Knowledge is OpenAI’s streamlined version of retrieval, usually done by giving an LLM access to traditional or vector databases. You simply upload files via the API (many data types are supported, including images) and almost magically GPT has access to those files and can cite them! This is a fairly significant simplification over the current process, which often involves writing custom pipelines, chunking text into smaller pieces and other configuration steps.
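
As a rough sketch of what that flow looks like with the openai Python SDK, based on the beta Assistants endpoints as documented at the time of writing (the file name and question below are placeholders, and the beta API may change):

```python
# Sketch: file-backed retrieval via the beta Assistants API (openai>=1.2).
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a file for the assistant to retrieve from ("knowledge").
file = client.files.create(file=open("product_catalogue.pdf", "rb"),
                           purpose="assistants")

assistant = client.beta.assistants.create(
    name="Catalogue helper",
    instructions="Answer questions using the attached catalogue and cite it.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],   # OpenAI-managed chunking and search
    file_ids=[file.id],
)

# Conversations happen on a thread; a run executes the assistant against it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user",
                                    content="Which items ship internationally?")
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, then read back the assistant's messages.
while run.status not in ("completed", "failed", "expired"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content)
```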

Tools (i.e. what’s used to perform the action steps mentioned earlier) are external services that the LLM can access and that have effects other than text generation. This could be anything from a platform that sends emails (Zapier) to a service that makes reservations at a restaurant. A major addition for sure, but this has already been possible for a while (even within OpenAI’s ecosystem).
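
For instance, a tool can be declared on an assistant as a function definition. A minimal sketch, assuming the same beta endpoints as above; the send_email function and its schema are entirely hypothetical:

```python
# Sketch: declaring a hypothetical "send_email" tool on an assistant.
# The schema follows OpenAI's function-calling convention; the function
# itself and its parameters are made up for illustration.
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Outreach assistant",
    instructions="Draft and send short follow-up emails when asked.",
    model="gpt-4-1106-preview",
    tools=[{
        "type": "function",
        "function": {
            "name": "send_email",
            "description": "Send an email to a single recipient.",
            "parameters": {
                "type": "object",
                "properties": {
                    "to": {"type": "string", "description": "Recipient address"},
                    "subject": {"type": "string"},
                    "body": {"type": "string"},
                },
                "required": ["to", "subject", "body"],
            },
        },
    }],
)

# When a run pauses with status "requires_action", your own code executes
# send_email and submits the result back via
# client.beta.threads.runs.submit_tool_outputs(...); the model never sends
# the email itself.
```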

Who’s affected? 

  • ‘Data source connectors’: companies that simply give ChatGPT access to your own data on a small scale (PDFs, Excel sheets) are going to struggle to compete with natively supported implementations.

  • Tooling layers (LangChain, LlamaIndex): most developers will likely find that the simplicity of the OpenAI API will meet most of their needs, and heavy abstraction layers will only be used for more specialised use cases.

  • Vector databases: the retrieval abilities of GPT will be sufficient for a lot of businesses. The overhead of managing an entire database might not be worth it for common, lightweight use cases (think: customer support assistants).

Who’s fine?

  • Businesses with ‘data moats’ / large proprietary datasets that cannot easily be replicated. 

  • Companies with custom search needs. The new retrieval capability is great for simple RAG (Retrieval Augmented Generation) use cases, but gives you little control over how search occurs: for example, over how the data in your files is split into smaller pieces that fit within the context length (chunking; see the sketch after this list).

  • Companies who need to operate at scale. OpenAI currently states, “You can attach a maximum of 20 files per Assistant, and they can be at most 512 MB”.

  • Businesses that have strict privacy requirements. Passing sensitive user data off to a third party might not be desirable or even legal.
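
To make the chunking point above concrete, here is a minimal sketch of the kind of splitting step that the managed retrieval hides from you. The sizes are arbitrary, and real pipelines typically split on tokens, sentences or headings rather than raw characters.

```python
# Naive fixed-size chunking with overlap -- one of the knobs you give up
# with managed retrieval. Chunk size and overlap values here are arbitrary.
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


document = "lorem ipsum " * 1000   # stand-in for the contents of an uploaded file
print(len(chunk_text(document)), "chunks of up to 1000 characters each")
```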

What are OpenAI’s custom ‘GPTs’?

This service is aimed at empowering end users to “create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills”. GPTs are akin to “the specialist you talk to about X”, while plugins are analogous to your personal assistant looking things up that they don’t know on Google or via a specific app. It seems like this is essentially a simplified, no-code version of the Assistants API mentioned above. These GPTs will be distributed in a new GPT Store, making it very easy to share your creations.

You will be able to customise the style of conversation someone has with your GPT and give it instructions to behave in specific ways. You’ll also be able to provide it with additional, specific knowledge to use in its responses. Conversations with GPTs are accessed via the ChatGPT interface and are marked as being with a specific GPT. This distinguishes custom GPTs from ChatGPT’s Plugins, which plug into the standard ChatGPT chat interface and allow ChatGPT to call the tool.

It’s not clear yet if OpenAI plans to allow creators of custom GPTs to charge users whilst, presumably, taking a cut and following in the footsteps of Apple’s very successful App Store model. This hasn’t been the case with plugins so far, though.

Who’s affected? 

  • ‘Thin layer wrapper startups’ that attempt to help you in a specific niche. Think of an ‘AI sales advisor’ which is really just a slightly different UI, a few custom prompts and access to high quality data on best practices.

  • Also-ran foundational LLM companies should take note that this is OpenAI staking its clearest claim yet to being the one place consumers keep all their chatbots.

Who’s fine? 

  • Companies that depend on fine-tuned / highly specialised models.

  • Businesses that require custom UI/UX experiences (not all generative AI experiences will be chat-based!).

  • Companies who need their GPT to have access to a large amount of data. As above, scaling this will be hard and will depend on the types of data required. 

Our final thoughts

We see these announcements as very positive for the agent and copilot market overall. By lowering the barrier to entry, individuals and businesses can begin to create conversational AI experiences with almost no technical expertise. Moreover, with the development of common chat use cases simplified, we believe there will be a greater focus on building truly unique generative AI experiences. In our opinion this will involve:

  • Carefully designed UXs and UIs, beyond chatbots, especially in dealing with images. 

  • Experiences that are deeply personalised through interactions over time.

  • Providing concrete value to users in a specialised domain, e.g. “Moonsift helps me discover products I love and make purchase decisions in an unbiased way”.

  • Huge proprietary datasets that are constantly evolving.

Learn more about how Moonsift is building the world's first shopping Copilot for the entire internet