The whole subject of API attacks is so big and dynamic that there was a lot I wasn’t able to cover in my recent webinar on “Understanding Broken Object Level Authorization: The Quiet Access Control Failure Undermining Today’s Apps”.
One area that is really interesting is the set of problems surrounding API keys, and I want to start with the recent issue in Google Cloud Projects: API keys created years ago weren't really "keys" in the sense of being a secret that unlocked any API or information at all – until they were. And while this story involves AI, it's not really an AI story. Here goes.
So, a Google Cloud Project is the basic organizational unit and foundational container for all Google Cloud services and resources. If you are familiar with Azure and know what a “subscription” is, you have a good idea of what a “project” is in Google Cloud.
Now, in Google Cloud you have API keys. These keys were used to bill usage of services like Maps or Firebase that a Google customer's cloud app consumed. For years Google said it was OK to embed these keys in publicly accessible source code like JavaScript. The biggest risk was that someone could consume a Google Cloud service and you'd pay for it, and there were ways to mitigate even that risk, such as HTTP referrer restrictions. That was the extent of the danger if someone scavenged your API key and reused it. Google Cloud API keys did NOT grant access to customer information. Other security checks and other secrets (secret secrets, as opposed to non-secret secrets) controlled access to customer resources and information.
Until Gemini came along. Gemini is of course Google’s generative AI platform – another API‑accessible service in Google Cloud. And Gemini used the same API key concept for billing just like other services like Maps and Firebase. And when you enabled Gemini in a Google Cloud Project, existing API keys were retroactively enabled for Gemini usage.
But here’s where it gets interesting. Gemini is a generative AI service that turns a Google Cloud project into a stateful AI system, capable of storing and reusing private inputs across requests.
To reduce hallucination, generative AI systems need to be grounded with domain‑specific information. And to reduce the cost of repeatedly transmitting and processing large inputs—such as documents, images, or transcripts—these systems need a way to persist that information and reference it later.
To support this, Gemini introduced project‑scoped storage and context reuse via two APIs: /files/, which stores uploaded artifacts, and /cachedContents/, which stores reusable prompt context.
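To make concrete how little stood between a leaked key and those endpoints, here's a minimal sketch. The base URL and resource names come from the public Generative Language API; the key value is of course a placeholder, and this only builds the request – it doesn't call Google:

```python
# Sketch: under the (now-mitigated) behavior Truffle Security described, the
# API key alone was the whole credential -- it rode along as a query parameter.
# API_KEY below is a placeholder, not a real key.
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://generativelanguage.googleapis.com/v1beta"
API_KEY = "AIza-placeholder"

def list_url(resource: str, key: str) -> str:
    """Build the GET URL for a list call against /files or /cachedContents.

    Note what is absent: no OAuth token, no signature, no per-object check.
    Whoever holds the key can enumerate the project's stored artifacts.
    """
    return f"{BASE}/{resource}?{urlencode({'key': key})}"

files_request = Request(list_url("files", API_KEY))           # uploaded files
cache_request = Request(list_url("cachedContents", API_KEY))  # cached context
```

The point of the sketch is the query string: a value Google long said was safe to publish in client-side JavaScript is the entire credential for the request.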
Those API endpoints allow you to store and retrieve information that is potentially sensitive and certainly owned by your organization.
At that point, API keys that were designed only to identify a project for billing purposes had quietly become an authorization token for accessing stored data.
This issue was uncovered and responsibly disclosed by Truffle Security, whose researchers identified that long‑standing, publicly exposed Google Cloud API keys were suddenly able to access Gemini APIs once the Generative Language API was enabled in a project. After initially treating this behavior as intended, Google reclassified it as a bug and worked with Truffle Security during coordinated disclosure. Google has since implemented mitigations, including blocking known leaked keys from accessing Gemini, tightening defaults for newly created AI Studio keys, and adding proactive detection and notification mechanisms for exposed keys, while continuing work on a longer‑term fix to the underlying authorization model.
This is where the problem clearly crosses into Broken Object Level Authorization (BOLA) territory. Possession of an API key—never intended to authorize access to objects—became sufficient to enumerate and retrieve project‑scoped resources such as uploaded files and cached AI context. The access control decision effectively collapsed to “do you have the key,” with no object‑level authorization check separating legitimate callers from unintended ones.
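The design flaw is easier to see as code. The sketch below is a hypothetical simulation – emphatically not Google's actual implementation – contrasting a data-plane read gated only on a billing key with one gated on a separate, genuinely secret credential:

```python
# Hypothetical simulation of the authorization gap -- not Google's real code.
# A billing-only API key should identify a project for metering, never
# authorize reads of that project's stored objects.

BILLING_KEYS = {"AIza-public-key": "project-a"}    # embeddable, often leaked
OAUTH_TOKENS = {"ya29-secret-token": "project-a"}  # the "secret secret"
OBJECTS = {"file-1": ("project-a", "confidential transcript")}

def fetch_broken(credential: str, object_id: str):
    """BOLA: mere possession of the public billing key returns the object."""
    project = BILLING_KEYS.get(credential)
    owner, data = OBJECTS.get(object_id, (None, None))
    return data if project and project == owner else None

def fetch_fixed(credential: str, object_id: str):
    """Object access requires the data-plane credential, not the billing key."""
    project = OAUTH_TOKENS.get(credential)
    owner, data = OBJECTS.get(object_id, (None, None))
    return data if project and project == owner else None
```

In the broken version the leaked, publicly embeddable key retrieves the stored transcript; in the fixed version it gets nothing, because identifying a project for billing and authorizing access to that project's objects are finally treated as two different questions.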
If BOLA is still an unfamiliar concept, or if it feels abstract, I’d strongly recommend watching my recent webinar, “Understanding Broken Object Level Authorization: The Quiet Access Control Failure Undermining Today’s Apps,” because the Gemini incident is a textbook, real‑world example of how BOLA emerges not from bad code, but from subtle authorization design assumptions that no longer hold.
Thanks as always for reading and best wishes on security,
Randy Franklin Smith
Ultimate Windows Security is a division of Monterey Technology Group, Inc. ©2006-2026 Monterey Technology Group, All rights reserved. You may forward this email in its entirety but all other rights reserved.
9450 SW Gemini Drive #53822, Beaverton, OR 97008
Note: We do our best to provide quality information and expert commentary but use all information at your own risk.