At Google’s annual cloud conference in Las Vegas, the search company is looking to position itself in the enterprise business as a mature, production-ready supplier of AI agents to power business applications.
Until recently, the battle for AI in the enterprise was dominated by hyperscalers and old-school software firms. But frontier AI labs Anthropic and OpenAI are increasingly moving downstream from models to applications, shaking up the field’s traditional focal points. Generative AI’s uncanny ability to write code has stoked widespread talk of its potential to replace the tech workforce. Google will have more to say on the topic, but that is likely to come later this year at its I/O developer conference in May.
In my conversations with Google Cloud CEO Thomas Kurian ahead of the conference, he displayed a pragmatic self-awareness in identifying where Google Cloud is ahead, and where it is not.
“Some people are using the models to write code. They can use Gemini and also other tools like Claude,” he said. “But in other cases, we have unique things. There’s capability in the platform that nobody else offers.”
The attention is on Gemini Enterprise, a suite of tools Google provides for its cloud customers to deploy and manage AI. On Wednesday, Google announced that it was retiring the Vertex AI name and folding the product, which lets customers pick and choose which AI models to use, under the Gemini Enterprise umbrella. In other words, a customer who chooses to use an Anthropic model in Google Cloud will be using Claude in Gemini Enterprise.
Google’s move suggests how it plans to compete for enterprise dollars: by building a moat around its software. Part of what makes Gemini Enterprise unique, Kurian said, is its governance capabilities for customers to manage security and compliance. As OpenAI and Anthropic enter the market with plug-ins and tools that let customers connect their AI models to other pieces of enterprise software, Google is responding by touting a richer, more enterprise-grade feature set.
“I think the model companies will build models that we will partner with them to distribute, and we will help enterprises access the intelligence of those models,” said Michael Gerstenhaber, a vice president of product whom Google hired away from Anthropic last year, in an interview.
As Google Cloud’s competition changes, it is stress testing what a symbiotic relationship with the AI labs will look like. Alphabet is one of the biggest investors in Anthropic, which is in turn encroaching on Google Cloud’s customer base. But the option to use Anthropic’s models in Google Cloud is also part of what keeps customers around. Onstage during the keynote, Kurian touted Google’s own models such as Gemini, video model Veo, and audio model Lyria — then specifically shouted out the new availability of Anthropic’s latest frontier model.
Brian Delahunty, Anthropic’s former head of engineering, whom Google also hired in 2025, described the AI race as a war with multiple fronts.
“You can be a model company; you can be an integrator; you can be a (software-as-a-service) company; you can be Google — a hyperscaler with models and infrastructure,” he said. “But I think that it doesn't mean you have to actually compete in every particular area that AI is going.”