Following our April 2024 announcement, Device Bound Session Credentials (DBSC) is now entering public availability for Windows users on Chrome 146, and expanding to macOS in an upcoming Chrome release. This project represents a significant step forward in our ongoing efforts to combat session theft, which remains a prevalent threat in the modern security landscape.

Session theft typically occurs when a user inadvertently downloads malware onto their device. Once active, the malware can silently extract existing session cookies from the browser or wait for the user to log in to new accounts, before exfiltrating these tokens to an attacker-controlled server. Infostealer malware families, such as LummaC2, have become increasingly sophisticated at harvesting these credentials. Because cookies often have extended lifetimes, attackers can use them to gain unauthorized access to a user’s accounts without ever needing their passwords; this access is then often bundled, traded, or sold among threat actors.

Crucially, once sophisticated malware has gained access to a machine, it can read the local files and memory where browsers store authentication cookies. As a result, there is no reliable way to prevent cookie exfiltration using software alone on any operating system. Historically, mitigating session theft relied on detecting the stolen credentials after the fact using a complex set of abuse heuristics – a reactive approach that persistent attackers could often circumvent. DBSC fundamentally changes the web’s capability to defend against this threat by shifting the paradigm from reactive detection to proactive prevention, ensuring that successfully exfiltrated cookies cannot be used to access users’ accounts.

How DBSC Works

DBSC protects against session theft by cryptographically binding authentication sessions to a specific device. It does this using hardware-backed security modules, such as the Trusted Platform Module (TPM) on Windows and the Secure Enclave on macOS, to generate a unique public/private key pair that cannot be exported from the machine. The issuance of new short-lived session cookies is contingent upon Chrome proving possession of the corresponding private key to the server. Because attackers cannot steal this key, any exfiltrated cookies quickly expire and become useless to those attackers. This design allows large and small websites to upgrade to secure, hardware-bound sessions by adding dedicated registration and refresh endpoints to their backends, while maintaining complete compatibility with their existing front-end. The browser handles the complex cryptography and cookie rotation in the background, allowing the web app to continue using standard cookies for access just as it always has.

Google rolled out an early version of this protocol over the last year. For sessions protected by DBSC, we have observed a significant reduction in session theft since its launch.

An overview of the DBSC protocol showing the interaction between the browser and server.

Private by design

A core tenet of the DBSC architecture is the preservation of user privacy. Each session is backed by a distinct key, preventing websites from using these credentials to correlate a user’s activity across different sessions or sites on the same device. Furthermore, the protocol is designed to be lean: it does not leak device identifiers or attestation data to the server beyond the per-session public key required to certify proof of possession. This minimal information exchange ensures DBSC helps secure sessions without enabling cross-site tracking or acting as a device fingerprinting mechanism.

Engagement with the ecosystem

DBSC was designed from the beginning to be an open web standard through the W3C process and adoption by the Web Application Security Working Group. Through this process we partnered with Microsoft to design the standard to ensure it works for the web and got input from many in the industry that are responsible for web security.

Additionally, over the past year, we have also conducted two Origin Trials to ensure DBSC effectively serves the requirements of the broader web community. Many web platforms, including Okta, actively participated in these trials and their own testing and provided essential feedback to ensure the protocol effectively addresses their diverse needs.

If you are a web developer and are looking for a way to secure your users against session theft, refer to our developer guide for implementation details. Additionally, all the details about DBSC can be found on the spec and the corresponding github. Feel free to use the issues page to report bugs or provide feature requests.

Future improvements

As we continue to evolve the DBSC standard, future iterations will focus on increasing support across diverse ecosystems and introducing advanced capabilities tailored for complex enterprise environments. Key areas of ongoing development include:

  • Securing Federated Identity: In modern enterprise environments, Single Sign-On (SSO) is ubiquitous. We are expanding the DBSC protocol to support cross-origin bindings, ensuring that a relying party (RP) session remains continuously bound to the same original device key used by the Identity Provider (IdP). This guarantees that the high-assurance security of the initial device binding is maintained throughout the entire federated login process, creating an unbroken chain of trust.
  • Advanced Registration Capabilities: While DBSC provides robust protection for established cookies, some environments require an even stronger foundation when the session is first created. We are developing mechanisms to bind DBSC sessions to pre-existing, trusted key material rather than generating a new key at sign-in. This advanced capability enables websites to integrate complementary technologies, such as mTLS certificates or hardware security keys, creating a highly secure registration environment.
  • Broader Device Support: We are also actively exploring the potential addition of software-based keys to extend protections to devices without dedicated secure hardware.

Threat actors are using AI to supercharge tried-and-tested TTPs. When attacks move this fast, cyber-defenders need to rethink their own strategy.

Indirect prompt injection (IPI) is an evolving threat vector targeting users of complex AI applications with multiple data sources, such as Workspace with Gemini. This technique enables the attacker to influence the behavior of an LLM by injecting malicious instructions into the data or tools used by the LLM as it completes the user’s query. This may even be possible without any input directly from the user.

IPI is not the kind of technical problem you “solve” and move on. Sophisticated LLMs with increasing use of agentic automation combined with a wide range of content create an ultra-dynamic and evolving playground for adversarial attacks. That’s why Google takes a sophisticated and comprehensive approach to these attacks. We’re continuously improving LLM resistance to IPI attacks and launching AI application capabilities with ever-improving defenses. Staying ahead of the latest indirect prompt injection attacks is critical to our mission of securing Workspace with Gemini. 

In our previous blog “Mitigating prompt injection attacks with a layered defense strategy”, we reviewed the layered architecture of our IPI defenses. In this blog, we’ll share more detail on the continuous approach we take to improve these defenses and to solve for new attacks.

New attack discovery

By proactively discovering and cataloging new attack vectors through internal and external programs, we can identify vulnerabilities and deploy robust defenses ahead of adversarial activity. 

Human Red-Teaming

Human Red-Teaming uses adversarial simulations to uncover security and safety vulnerabilities. Specialized teams execute attacks based on realistic user profiles to exploit weaknesses, coordinating with product teams to resolve identified issues.

Automated Red-Teaming

Automated Red-Teaming is done via dynamic, machine-learning-driven frameworks to stress-test environments. By algorithmically generating and iterating on attack payloads, we can mimic the behavior of sophisticated threats at scale. This allows us to map complex attack paths and validate the effectiveness of our security controls across a much wider range of edge cases than manual testing could achieve on its own.

Google AI Vulnerability Rewards Program (VRP)

The Google AI Vulnerability Rewards Program (VRP) is a critical tool for enabling collaboration between Google and external security researchers who discover new attacks leveraging IPI. Through this VRP, we recognize and reward contributors for their research.  We also host regular, live hacking events where we provide invited researchers access to pre-release features, proactively uncovering novel vulnerabilities. These partnerships enable Google to quickly validate, reproduce, and resolve externally-discovered issues.

Publicly disclosed AI attacks 

Google utilizes open-source intelligence feeds to stay on top of the latest publicly disclosed IPI attacks, across social media, press releases, blogs, and more. From there, new AI vulnerabilities are sourced, reproduced, and catalogued internally to ensure our products are not impacted. 

Vulnerability catalog 

All newly discovered vulnerabilities go through a comprehensive analysis process performed by the Google Trust, Security, & Safety teams. Each new vulnerability is reproduced, checked for duplications, mapped into attack technique / impact category, and assigned to relevant owners. The combination of new attack discovery sources and vulnerability catalog process helps Google stay on top of the latest attacks in an actionable manner. 


Synthetic data generation 

After we discover, curate, and catalog new attacks, we use Simula to generate synthetic data expanding these new attacks. This process is essential because it allows the team to develop attack variants for completeness and coverage, and to prepare new training and validation data sets. This accelerated workflow has boosted synthetic data generation by 75%, supporting large-scale defense model evaluation and retraining, as well as updating the data set used for calculating and reporting on defense effectiveness.

Ongoing defense refinement 

Continually updating and enhancing our defense mechanisms allows us to address a broader range of attack techniques, effectively reducing the overall attack surface. Updating each defense type requires different tasks, from config updates, to prompt engineering and ML model retraining. 

Deterministic Defenses

Deterministic defenses, including user confirmation, URL sanitization, and tool chaining policies, are designed for rapid response against new or emerging prompt injection attacks by relying on simple configuration updates. These defenses are governed by a centralized Policy Engine, with configurations for policies like baseline tool calls, URL sanitization, and tool chaining. For immediate threats, this configuration-based system facilitates a streamlined process for “point fixes,” such as regex takedowns, providing an agile defense layer that acts faster than traditional ML/LLM model refresh cycles.

ML-Based Defenses

After generating synthetic data that expands new attacks into variants, the next step is to retrain our ML-based defenses to mitigate these new attacks. We partition the synthetic data described above into separate training and validation sets to ensure performance is evaluated against held-out examples. This approach ensures repeatability, data consistency for fixed training/testing, and establishes a scalable architecture to support future extensions towards fully automated model refresh.

LLM-Based Defenses

Using the new synthetic data examples, our LLM-based defenses go through prompt engineering with refined system instructions. The goal is to iteratively optimize these prompts against agreed-upon defense effectiveness metrics, ensuring the models remain resilient against evolving threat vectors.

Gemini Model Hardening 

Beyond system-level guardrails and application-level defenses, we prioritize ‘model hardening’, a process that improves the Gemini model’s internal capability to identify and ignore harmful instructions within data. By utilizing synthetic datasets and fresh attack patterns, we can model various threat iterations. This enables us to strengthen the Gemini model’s ability to disregard harmful embedded commands while following the user’s intended request. Through this process of model hardening, Gemini has become significantly more adept at detecting and disregarding injected instructions. This has led to a reduction in the success rate of attacks without compromising the model’s efficiency during routine operations.

Defense effectiveness 

To measure the real-world impact of defense improvements, we simulate attacks against many Workspace features. This process leverages the newly generated synthetic attack data described on this blog, to create a robust, end-to-end evaluation. The simulation is run against multiple Workspace apps, such as Gmail and Docs, using a standardized set of assets to ensure reliable results. To determine the exact impact of a defense improvement (e.g., an updated ML model or a new LLM prompt optimization), the end-to-end evaluation is run with and without the defense enabled. This comparative testing provides the essential “before and after” metrics needed to validate defense efficacy and drive continuous improvement.

Moving forward 

Our commitment to AI security is rooted in the principle that every day you’re safer with Google. While the threat landscape of indirect prompt injection evolves, we are building Workspace with Gemini to be a secure and trustworthy platform for AI-first work. IPI is a complex security challenge, which requires a defense-in-depth strategy and continuous mitigation approach. To get there, we’re combining world-class security research, automated pipelines, and advanced ML/LLM-based models. This robust and iterative framework helps to ensure we not only stay ahead of evolving threats but also provide a powerful, secure experience for both our users and customers.

Fraudsters often target the accounts of the deceased or their grieving relatives. Here’s how to keep the scammers at bay.

2025 marked a special year in the history of vulnerability rewards and bug bounty programs at Google: our 15th anniversary 🎉🎉🎉! Originally started in 2010, our vulnerability reward program (VRP) has seen constant additions and expansions over the past decade and a half, clearly indicating the value the programs under this umbrella contribute to the safety and security of Google and its users, but also highlighting their acceptance by the external research community, without which such programs cannot function.

Coming back to 2025 specifically, our VRP once again confirmed the ongoing value of engaging with the external security research community to make Google and its products safer. This was more evident than ever as we awarded over $17 million (an all-time high and more than 40% increase compared to 2024!) to over 700 researchers based in countries around the globe – across all of our programs.

Vulnerability Reward Program 2025 in Numbers

Want to learn more about who’s reporting to the VRP? Check out our Leaderboard on the Google Bug Hunters site.

VRP Highlights in 2025

In 2025 we made a series of changes and improvements to our VRP and related initiatives, and continued to invest in the security research community through a series of focused events:

  • The new, dedicated AI VRP was launched, underscoring the importance of this space to Google and its relevance for external researchers. Previously organized as a part of the Abuse VRP, moving into a dedicated VRP has gone hand in hand with improvements to the rules, offering researchers more clarity on scope and reward amounts.

  • Similarly, the Chrome VRP now also includes reward categories for problems found in AI features.

  • We launched a patch rewards program for OSV-SCALIBR, Google’s open source tool for finding vulnerabilities in software dependencies. Contributors are rewarded for providing novel OSV-SCALIBR plugins for inventory, vulnerability, or secret detection that expand the tool’s scanning capabilities. Besides strengthening the tool’s capabilities for all users, user submissions already helped us uncover and remediate a number of leaked secrets internally!

  • As part of Google’s Cybersecurity Awareness Month campaign in October, we hosted our very own security conference in Mexico City, ESCAL8. The conference included init.g(mexico), our cybersecurity workshop for students, HACKCELER8, Google’s CTF finals, and a Safer with Google seminar, sharing technical thought leadership with Mexican government officials. 

  • bugSWAT, our special invite-only live hacking event, saw several editions in 2025 and delivered some outstanding findings across different areas:

    • We hosted our first dedicated AI bugSWAT (Tokyo) in April which yielded a whopping 70+ reports filed and over $400,000 in rewards issued. 

    • We continued the momentum in early summer with Cloud bugSWAT (Sunnyvale) in June resulting in 130 reports, with $1,600,000 in rewards paid out.

    • Next in line was bugSWAT Las Vegas in August, leading to 77 reports and rewards of $380,000. 

    • And finally, as part of ESCAL8 in Mexico City, bugSWAT Mexico focused on many different targets and spaces including AI, Android, and Cloud, and resulted in the filing of 107 reports, totalling $566,000 in rewards to date. 

Looking for more details? See the extended version of this post on the Security Engineering blog for reports from individual VRPs such as Android, Abuse, AI, Cloud, Chrome, and OSS, including specifics concerning high-impact bug reports and focus areas of security research. 

What’s coming in 2026

In 2026, we remain fully committed to fostering collaboration, innovation, and transparency with the security community by hosting several bugSWAT events throughout the year, and following up with the next edition of our cybersecurity conference, ESCAL8. More broadly, our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services – all of which is only possible in collaboration with the external community of researchers we are so lucky to collaborate with! 

In this spirit, we’d like to extend a huge thank you to our bug hunter community for helping us make Google products and platforms more safe and secure for our users around the world – and invite researchers not yet engaged with the Vulnerability Reward Program to join us in our mission to keep Google safe (check out our programs for inspiration 🙂)!

Thank you to Tony Mendez, Dirk Göhmann, Alissa Scherchen, Krzysztof Kotowicz, Martin Straka, Michael Cote, Sam Erb, Jason Parsons, Alex Gough, and Mihai Maruseac. 

Tip: Want to be informed of new developments and events around our Vulnerability Reward Program? Follow the Google VRP channel on X to stay in the loop and be sure to check out the Security Engineering blog, which covers topics ranging from VRP updates to security practices and vulnerability descriptions!

The past four weeks have seen a slew of new cybersecurity wake-up calls that showed why every organization needs a well-thought-out cyber-resilience plan

This year, AI agents took the center stage – as a defensive capability, but more pressingly as a risk many organizations haven’t caught up with

Silver Fox is back in Japan, spoofing tax and HR emails timed to the one season when no one thinks twice about opening them

Modern digital security is at a turning point. We are on the threshold of using quantum computers to solve “impossible” problems in drug discovery, materials science, and energy—tasks that even the most powerful classical supercomputers cannot handle. However, the same unique ability to consider different options simultaneously also allows these machines to bypass our current digital locks. This puts the public-key cryptography we’ve relied on for decades at risk, potentially compromising everything from bank transfers to trade secrets. To secure our future, it is vital to adopt the new Post-Quantum Cryptography (PQC) standards National Institute of Standards and Technology (NIST) is urging before large-scale, fault-tolerant quantum computers become a reality.

To stay ahead of the curve, the technology industry must undertake a proactive, multi-year migration to Post-Quantum Cryptography (PQC). We have been preparing for a post-quantum world since 2016, conducting pioneering experiments with post-quantum cryptography, rolling out post-quantum capabilities in our products, and sharing our expertise through threat models and technical papers. For Android, the objective extends beyond patching individual applications or transport protocols. The imperative is to ensure that the entire platform architecture is resilient for the decades to come.

We are beginning tests of PQC enhancements starting in the next Android 17 beta, followed by general availability in the Android 17 production release. This deployment introduces a comprehensive architectural upgrade that is being rolled out across the operating system. By integrating the recently finalized NIST PQC standards deep into the platform, we’re establishing a new, quantum-resistant chain of trust. This chain of trust secures the platform continuously—from the moment the OS powers on, to the execution of applications distributed globally. Android is swapping today’s digital locks for advanced encryption to help enhance the security of every app you download—no matter how powerful future supercomputers get.

Securing the foundation: Verified boot and hardware trust

Security on any computing device begins when the hardware starts; if the underlying operating system is compromised, all subsequent software protections fail. As quantum computing advances, adversaries could potentially forge digital signatures to bypass these foundational integrity checks. To secure the platform against this looming threat, Android 17 introduces two major post-quantum cryptographic (PQC) upgrades:

  1. Upgrading Android Verified Boot (AVB): The AVB library is integrating the Module-Lattice-Based Digital Signature Algorithm (ML-DSA). This provides quantum-resistant digital signatures, ensuring the software loaded during the boot sequence remains highly resistant to unauthorized modification.
  2. Migrating Remote Attestation: Android 17 begins the transition of Remote Attestation to a fully PQC-compliant architecture under the current standards. By updating KeyMint’s certificate chains to support quantum-resistant algorithms, devices can securely prove their state to relying parties, maintaining trust in a post-quantum environment.

Empowering developers: Android Keystore updates

Protecting the underlying operating system is only the first layer of defense; developers must be equipped with the cryptographic primitives necessary to leverage PQC keys and establish robust identity verification.

Implementing lattice-based cryptography, which requires significantly larger key sizes and memory footprints than classical elliptic curve cryptography, within the severely resource-constrained Trusted Execution Environment (TEE), represents a major engineering achievement. This capability is designed to support the hardware roots of trust and can now generate and verify post-quantum signatures.

Building on this hardware foundation, Android 17 updates Android Keystore to natively support ML-DSA. This allows applications to leverage quantum-safe signatures entirely within the device’s secure hardware, isolating sensitive key material from the main operating system. The SDK exposes both ML-DSA-65, and ML-DSA-87, enabling developers to seamlessly integrate these using the standard KeyPairGenerator API. This establishes a new era of identity and authentication for the app ecosystem without requiring developers to engineer proprietary cryptographic implementations.

Ecosystem scale: Bringing hybrid signing to Google Play apps and games

Android is committed to ensuring the platform is PQC resistant and extending the chain of PQC resistance to application signatures. The mechanisms used to verify the authenticity of applications are being upgraded to ensure that app installations and subsequent updates are strictly tamper-proof against quantum-enabled signature forgery. The platform will verify PQC signatures over APKs to enable this chain of trust.

To bring these critical protections to the wider developer community with minimal friction, the transition will be supported through Play App Signing. This approach provides an immediate bridge to quantum safety for the majority of active installs. Google Play will let developers automatically generate ‘hybrid’ signature blocks that combine classical and PQC keys.

Updating keys across billions of active devices is a complex operational endeavor. Play App Signing leverages Google Cloud KMS, which helps ensure industry-leading compliance standards, to secure signing keys. By managing signing keys securely in the cloud, Google Play enables developers to seamlessly upgrade their app security to PQC standards without the burden of complex, manual key management.

During the Android 17 release cycle, Google Play will handle the generation of quantum-safe ML-DSA signing keys for new apps and existing apps that opt-in, independent of the applications target API . Later, developers will be able to choose their own classical and ML-DSA signing keys and delegate them to Google Play for their hybrid key upgrade. To promote security best practices, Google Play will also start prompting developers to upgrade their signing keys at least every two years.

The cryptographic roadmap: From authenticity to privacy

Google’s post-quantum transition began in 2016, and Android 17 marks the first phase of Android’s post-quantum transition:

  • Securing the foundation: We are upholding the integrity of our attestation and Chain of Trust by incorporating ML-DSA into Android Verified Boot.
  • Empower Developers: The inclusion of ML-DSA support within Android Keystore and Play App Signing allows developers to safeguard their users and application.
  • Ecosystem Scale: By using hybrid signatures for APKs, developers can create a protected transition that preserves current trust while adding post-quantum defenses to block unauthorized updates.

Our roadmap further integrates post-quantum key encapsulation into KeyMint, Key Attestation and Remote Key Provisioning. This evolution is intended to bolster the security of the entire identity lifecycle—from hardware-level DICE measurements to our remote attestation servers—ensuring the Android ecosystem remains resilient and private against the quantum threats of tomorrow.

Cloud VMs offer unmatched speed, scale and flexibility – all of which could eventually count for little if they’re left to fend for themselves