- Forward Future AI
OpenAI Is Just a "Thin Layer"
Everything OpenAI builds on top of its core models is just a thin layer
I’m 95% convinced that OpenAI’s models are the only actual moat OpenAI has. I’ve been thinking a lot about this, and it seems that every product they launch on top of their core models (GPT-4, DALL-E, etc.) is just a “thin layer.”
What is a “Thin Layer”?
Thousands of startups have been born in the last 12 months, built on OpenAI’s API. They’ve raised millions of VC dollars to attack the market as quickly as possible, but there has been a lot of talk that these companies lack differentiation. The core premise is that the vast majority of these AI companies’ value lies in the foundational model they are built upon, not in the business logic built on top. And for the most part, “thin layer” companies aren’t building their own models; it’s too costly and time-intensive, especially when your competitors can go to market in days by building on OpenAI’s API.
Many people argue that these companies have very little moat: building business logic on top of ChatGPT is so easy that copying them is trivial. There’s a counter-argument, though: even if all of that is true, much of the value lies in domain-specific knowledge, i.e., building the interfaces and workflows that serve a specific niche incredibly well. That may be true; time will tell.
Sounds Like OpenAI…
Now that we know what a “thin layer” AI company is, the label sure sounds like it fits every product OpenAI has shipped on top of its core models, GPT-4 and DALL-E. Let’s take a look at a few:
Code Interpreter - When Code Interpreter first launched, it was mind-blowing. With natural language, you could prompt ChatGPT to complete tasks by writing code. For example, you could say, “Show me a chart of $TSLA’s performance over the last year,” and ChatGPT would write Python code that created the chart. The real innovation was giving ChatGPT a fully containerized environment to execute code in, and having it automatically iterate on the code when errors or bugs occurred.
All the functionality I just listed can be built by a third party on top of OpenAI’s API, making Code Interpreter a “thin layer” AI product. In fact, an arguably more capable version, Open Interpreter, already exists; it lets you run everything locally and use open-source models.
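To make the “thin layer” claim concrete, here is a minimal sketch of the generate-execute-retry loop described above. The `generate` callable is a hypothetical stand-in for a request to a model API, and a local subprocess stands in for Code Interpreter’s sandboxed container; the real product’s internals are not public, so this only illustrates the pattern.

```python
import subprocess
import sys
import tempfile


def run_generated_code(code: str) -> tuple[bool, str]:
    """Execute model-generated Python in a separate process (a stand-in
    for a sandboxed container) and capture stdout or the traceback."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=30)
    ok = proc.returncode == 0
    return ok, proc.stdout if ok else proc.stderr


def execute_with_retries(generate, task: str, max_attempts: int = 3) -> str:
    """The core loop: ask the model for code, run it, and feed any
    error output back to the model for another attempt."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate(task, feedback)  # `generate` wraps an LLM call
        ok, output = run_generated_code(code)
        if ok:
            return output
        feedback = output  # the traceback becomes context for the retry
    raise RuntimeError(f"no working code after {max_attempts} attempts")
```

Everything interesting here lives in a dozen lines of orchestration around the model call, which is the point: the hard part is the model, not the loop.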
Custom GPTs - GPTs are OpenAI’s version of agents, and they are quite impressive and easy to build. But, once again, this product was almost certainly built directly on top of the public ChatGPT API. Using prompt engineering, custom system messages, RAG, a code interpreter, web browsing, and DALL-E, anyone can re-create custom GPTs on top of OpenAI’s API. Again, this is a “thin layer” product.
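As a sketch of how a third party might re-create the RAG half of a custom GPT, here is a minimal example that assembles a chat-API `messages` payload from a persona (the system message) and retrieved documents. The keyword-overlap retrieval is a deliberate simplification for illustration; a real build would use embeddings, and the function and variable names are my own, not OpenAI’s.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval: rank documents by how many
    query words they share. A real system would use embeddings."""
    query_words = set(query.lower().split())

    def score(doc: str) -> int:
        return len(query_words & set(doc.lower().split()))

    return sorted(docs, key=score, reverse=True)[:k]


def build_custom_gpt_messages(persona: str, docs: list[str],
                              user_msg: str) -> list[dict]:
    """Assemble the messages payload a custom-GPT clone would send to a
    chat completions API: persona as the system message, with retrieved
    documents injected as grounding context."""
    context = "\n\n".join(retrieve(user_msg, docs))
    system = f"{persona}\n\nUse only this reference material:\n{context}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_msg}]
```

The resulting list is exactly what you would pass as the `messages` argument of a chat completion request; the “custom GPT” is entirely contained in how the system message is assembled.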
Web Browsing - Giving ChatGPT the ability to crawl the web was an incredibly important piece of functionality: it gave ChatGPT access to up-to-date information, leaping over the knowledge cut-off restriction of earlier models. But how does web browsing actually work? Most likely, they use a Python web-scraping library to fetch a page and then insert its contents into the context alongside your prompt. Again, this is another example of a “thin layer” product.
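A minimal sketch of that scrape-and-insert pattern, using only Python’s standard library: strip the visible text out of a page’s HTML and prepend it to the user’s question before sending the combined prompt to the model. The exact prompt format OpenAI uses is not public; this only illustrates the technique.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from HTML, skipping script/style blocks."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def page_to_prompt(html: str, question: str, limit: int = 4000) -> str:
    """Turn scraped HTML into an augmented prompt: page text first
    (truncated to fit the context window), the user's question after."""
    parser = TextExtractor()
    parser.feed(html)
    page_text = " ".join(parser.chunks)[:limit]
    return f"Web page content:\n{page_text}\n\nQuestion: {question}"
```

Fetching the HTML itself is a one-liner with `urllib.request` or the `requests` library; the entire “browsing” feature reduces to this kind of text extraction plus prompt assembly.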
Sponsorship
Learn to Build AI Applications - No Code
Unlock the power of AI without any coding. Join a free webinar on 12/14 at 12 PM PT / 3 PM ET and learn how to build custom AI apps for your team.
The MindStudio team will walk through how to build complete apps with AI-generated prompts, custom automations, GPT-, Claude-, and Llama-based workflows (plus workflows leveraging multiple AI models), and even the ability to query your own data sources. Whether you're a seasoned AI enthusiast or a novice, this webinar will give you the skills to launch your own AI applications quickly.
Reserve your spot now!
Is That Good Or Bad?
Let’s assume for a minute that OpenAI’s moat is strictly their models, not any product or feature built on top. Even in that case, they have the best, most capable model on the market, and for now the competition isn’t too close (I’m talking about GPT-4). While I’m a huge believer in open-source AI, and open-source models are rapidly gaining on GPT-4, there are still a few hurdles to clear before we can call them equals.
At the same time, they have fierce competition from Meta, Apple, Google, Amazon, Anthropic, and more to come, who are well-funded and generally have access to the same sets of underlying data.
What OpenAI does have is an incredible team with some of the top minds in the industry. They ship incredibly quickly, and after the drama of the last few weeks, they are likely more aligned and firing on all cylinders better than ever. They also have a vast user base feeding them training data daily. However, that flow of training data may shrink as more people, especially companies, realize how essential privacy is.
Conclusion
OpenAI is in a great position right now, both from a technology perspective and a team perspective. However, all of the features and products they are launching on top of ChatGPT and DALL-E are “thin layers” that any third party can clone on top of OpenAI’s API. OpenAI must capture as much of the market as possible, as quickly as possible, but unfortunately, it seems they are leaning on regulatory capture to do so.
So what can OpenAI do? They need to be the best developer-focused platform out there. They need the best closed-source models out there. They can also contribute to the open-source community, bolstering their standing with developers. And since the price of their APIs will continue to be driven down, they also need the best and easiest-to-use API out there: the “Stripe strategy.”
I think it’s only a matter of time until Google, Amazon, Meta, and Apple have truly competitive closed-source and open-source models. I don’t know how OpenAI thrives in the long run in this situation.