ChatGPT creator OpenAI and Microsoft announced on Monday a reworking of their joint partnership agreements. The repositioning by both companies may pave the way for a new, more splintered coalition.
“Today, we are announcing an amended agreement to simplify our partnership and the way we work together, grounded in flexibility, certainty, and a focus on delivering the benefits of AI broadly,” stated OpenAI.
OpenAI will ultimately be able to deliver its services across cloud platforms other than Azure. Notably, Microsoft will no longer pay a revenue share to OpenAI.
But OpenAI confirmed that revenue-share payments from OpenAI to Microsoft “continue through 2030, independent of OpenAI’s technology progress,” at the same percentage but subject to a total cap.
Who wins the endgame?
The question now – not necessarily a like-for-like comparison, given the difference in scale between the two organizations – is who wins out most in the endgame.
OpenAI will ultimately look to work with other cloud hyperscalers to broaden its enterprise reach. In return, Redmond wants to develop its own AI models and drive its Microsoft 365 Copilot Business offering.
Microsoft put $1 billion into OpenAI (and thus became its primary cloud partner) in 2019, stating at the time that the two companies would “focus on building a computational platform in Azure of unprecedented scale, which will train and run increasingly advanced AI models, include hardware technologies that build on Microsoft’s supercomputing technology, and adhere to the two companies’ shared principles on ethics and trust.”
But that was then, and this is now. Microsoft will be looking to free up its ability to work with other AI platform specialists, most obviously including Anthropic and Google.
The company will also be looking to alleviate some of the ties to what has been reported to be an “increasingly strained” relationship. That tension largely resulted from OpenAI’s steadily increasing datacenter requirements relative to the total Microsoft Azure universe.
“Our Microsoft partnership has been foundational to our success. But it has also limited our ability to meet enterprises where they are – for many that’s Bedrock.” – OpenAI’s Denise Dresser
A memo from OpenAI’s chief revenue officer, Denise Holland Dresser, was reported by CNBC this month, with the revenue chief saying, “Our Microsoft partnership has been foundational to our success. But it has also limited our ability to meet enterprises where they are – for many, that’s Bedrock.”
OpenAI, on any cloud?
For now, Microsoft remains OpenAI’s primary cloud partner, meaning that OpenAI products will ship first on Azure. OpenAI’s caveat here states that this holds “unless Microsoft cannot or chooses not to support the necessary capabilities. OpenAI can now serve all its products to customers across any cloud provider.”
Breaking down what has played out here, postdoctoral researcher Teodora Groza writes on the Stanford Law School blog, aptly calling the partnership what it really is: not an isolated island of collaboration between a big tech player and an up-and-coming AI firm, but the epitome of a trend that characterizes the entire AI landscape.
“From Anthropic to Inflection AI, there is hardly any AI firm known to the large public that has not received investments from Alphabet, Amazon, or Microsoft,” writes Groza. “In 2023, two-thirds of the funds raised by AI start-ups came from big tech players, suggesting that the latter may be taking over the role that had been traditionally fulfilled by venture capitalists.”
AWS: What’s next for agentic AI?
With OpenAI opening up to other cloud hyperscalers, a media alert from AWS today for an April 28th presentation reads, “Watch Matt Garman, CEO of AWS, Colleen Aubrey, SVP of Amazon applied AI solutions, and leaders from OpenAI in a candid conversation about what’s next with agentic AI.”
According to AWS, attendees can hear directly from AWS and OpenAI leaders in an “authentic conversation” about what agent capabilities mean for business and technology leaders, SaaS applications, and the need to adapt to fundamental shifts in constraints.
“This is a healthy move for Microsoft… The same logic runs in reverse for OpenAI. When it is battling for an enterprise account with Anthropic, that customer often has a strong preference for consuming models through Bedrock or Vertex…” – Zencoder CEO Andrew Filev
All said and done, while OpenAI wants to “provide both companies the flexibility to pursue new opportunities,” Microsoft’s existing license will now be non-exclusive. Microsoft is also hedging its bets: the company has committed to “continue to participate directly in OpenAI’s growth” as a major shareholder.
The founder and CEO of Zencoder tells The New Stack that model training is a race with a constantly shifting lead. As much as the Microsoft-OpenAI partnership helped GitHub Copilot win early market share, he says, that same exclusivity later hampered it and accelerated its loss of share to Claude.
“This is a healthy move for Microsoft,” Filev says. “Primarily because the company is really a three-business-in-one model, i.e., a cloud provider to some, an AI API provider to the subset who need the full selection of models, and an application vendor that needs that same full selection to compete. The same logic runs in reverse for OpenAI. When it is battling for an enterprise account with Anthropic, that customer often has a strong preference for consuming models through Bedrock or Vertex, which are already integrated into their security perimeter.”
Relationships, it’s complicated
Meanwhile, the Financial Times this March reported that Microsoft was “weighing legal action” against Amazon and OpenAI over a $50bn deal that could breach its own exclusive cloud partnership with the ChatGPT maker. This brouhaha concerns whether AWS is permitted to offer OpenAI’s new Frontier service, a commercial product for building, managing, and deploying autonomous AI agents with deep governance.
Talking of governance, and keeping data sovereignty in mind, here may be the real winner: if the splintering between OpenAI and Microsoft allows the industry to move away from cloud monoliths and embrace a more platform-agnostic approach to cloud services for AI infrastructure, then it could be the users themselves who ultimately win.
That might be a matter of infinite hope, but reserved judgments (rather than reserved instances) may be the best route for all.