- OpenAI’s growth this year has been unstoppable.
- Its next act with ChatGPT is about to become much tougher.
- The threat from rivals is rising and demand for more advanced AI could rub regulators the wrong way.
Next month marks the one-year anniversary of ChatGPT. It's safe to say the chatbot has been an unstoppable growth engine for OpenAI ever since.
The AI chatbot has been such a hit this year that OpenAI CEO Sam Altman reckons his company is on course to generate around $1.3 billion in annual revenue, he told staff last week, as first reported by The Information.
It's worth emphasizing just how staggering this is: OpenAI's total revenue in 2022 was $28 million. An almost fifty-fold increase in revenue is the stuff of dreams for just about any Silicon Valley startup.
Given that ChatGPT has been presented as a kind of superpower for anyone wielding it — whether it’s a newbie coder trying to blitz through lines of JavaScript or a marketer brainstorming a creative campaign — it’s little wonder OpenAI has enjoyed the growth it’s had.
But now comes the hard part.
Despite having a strong ally in a resurgent Microsoft, OpenAI faces threats on several fronts, which it will need to take seriously to keep up its blistering pace of growth.
The first challenge comes from outside: OpenAI needs to maintain a “moat” that threatens to shrink.
The concept of a moat is a popular one for tech companies seeking to keep competitors at bay. In practice, the companies establish big moats by developing products that are difficult to replicate.
Though OpenAI had a first-mover advantage by getting a buzzy, consumer-facing app like ChatGPT out ahead of the competition, rivals have been busy pouring resources into their own versions, ones with the potential to eat into OpenAI's moat.
Google presents one of the biggest threats to OpenAI. Gemini, the search giant’s widely anticipated multimodal AI model, designed to rival ChatGPT’s underlying model GPT-4, is expected to be launched this year.
Meanwhile, the open-source community has been busy developing alternative AI models, aiming to match the performance of commercial tools like ChatGPT and its underlying model GPT-4 while offering them for free.
If Gemini or an open-source equivalent matches or surpasses GPT-4's performance, OpenAI could face an uphill battle to retain customers paying for a premium service.
The jury is still out on whether new features like ChatGPT's ability to "see, hear, and speak" are game changers for users or mere gimmicks.
The second challenge for OpenAI, crucially, seems to be itself.
As my colleague Kai Xiang Teo reported, OpenAI has made a quiet change to the “core values” section of its careers page, choosing to remove the term “thoughtful,” while emphasizing its “AGI focus” and the need to be “intense and scrappy.”
Though the changes appear to be minor semantic ones (startups are, after all, intense and scrappy), the deliberate shift in language is a worrying signal about what OpenAI wants to prioritize.
For regulators, it could well signal that the ethics of AI development have slipped a rung or two down the priority ladder. That won't sit well with lawmakers around the world, who have increasingly called this year for a cautious approach to AI's advancement.
This is likely to come to a head next month as world leaders prepare to meet at Bletchley Park in the UK for a first-of-its-kind AI safety summit. Putting safety on the back burner could well turn out to be foolish.