
Google is continuously advancing the security of Pixel devices. We have been focusing on hardening the cellular baseband modem against exploitation. Recognizing the risks associated with the complex modem firmware, Pixel 9 shipped with mitigations against a range of memory-safety vulnerabilities. For Pixel 10, Google is advancing its proactive security measures further. Following our previous discussion on “Deploying Rust in Existing Firmware Codebases”, this post shares a concrete application: integrating a memory-safe Rust DNS (Domain Name System) parser into the modem firmware. The new Rust-based DNS parser significantly reduces our security risk by mitigating an entire class of vulnerabilities in a risky area, while also laying the foundation for broader adoption of memory-safe code in other areas.

Here we share our experience of this work, and hope it inspires broader use of memory-safe languages in low-level environments.

Why Modem Memory Safety Can’t Wait

In recent years, we have seen increasing interest in the cellular modem from attackers and security researchers. For example, Google’s Project Zero gained remote code execution on Pixel modems over the Internet. The Pixel modem contains tens of megabytes of executable code. Given the complexity and remote attack surface of the modem, other critical memory safety vulnerabilities may remain in the predominantly memory-unsafe firmware code.

Why DNS?

The DNS protocol is most commonly known in the context of browsers finding websites. With the evolution of cellular technology, modern cellular communications have migrated to digital data networks; consequently, even basic operations such as call forwarding rely on DNS services.

DNS is a complex protocol and requires parsing of untrusted data, which can lead to vulnerabilities, particularly when implemented in a memory-unsafe language (example: CVE-2024-27227). Implementing the DNS parser in Rust reduces the attack surface associated with memory unsafety.
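To illustrate why Rust helps here, consider a minimal, hypothetical header parse. This is not hickory-proto’s actual code, and the struct and function names are invented for illustration; the point is that slice access is bounds-checked, so a truncated packet yields an error value instead of an out-of-bounds read.

```rust
// Hypothetical sketch: parsing the fixed 12-byte DNS header from untrusted
// bytes. `get(..12)` returns None for short input rather than reading past
// the buffer, unlike raw pointer arithmetic in C.
struct DnsHeader {
    id: u16,
    answer_count: u16,
}

fn parse_header(packet: &[u8]) -> Result<DnsHeader, &'static str> {
    let header = packet.get(..12).ok_or("packet shorter than DNS header")?;
    Ok(DnsHeader {
        // Transaction ID: bytes 0-1, big-endian.
        id: u16::from_be_bytes([header[0], header[1]]),
        // ANCOUNT: bytes 6-7, big-endian.
        answer_count: u16::from_be_bytes([header[6], header[7]]),
    })
}
```

A malformed, truncated packet that would corrupt memory in a careless C parser simply produces `Err` here.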

Picking a DNS library

DNS already has a level of support in the open-source Rust community. We evaluated multiple open source crates that implement DNS. Based on criteria shared in earlier posts, we identified hickory-proto as the best candidate. It has excellent maintenance, over 75% test coverage, and widespread adoption in the Rust community. Its pervasiveness suggests it will remain the de-facto DNS choice with long-term support. Although hickory-proto initially lacked no_std support, which is needed for bare-metal environments (see our previous post on this topic), we were able to add support to it and its dependencies.

Adding no_std support

Enabling no_std for hickory-proto was mostly mechanical; we shared the process in a previous post. We modified hickory-proto and its dependencies to enable no_std support. The upstream no_std work also resulted in a no_std URL parser, which benefits other projects.

The above PRs are great examples of how to extend no_std support to existing std-only crates.

Code size study

Code size is one of the factors that we evaluated when picking the DNS library to use.

Code size by category:

  Rust-implemented shim that calls hickory-proto on receiving a DNS response:   4 KB
  core, alloc, compiler_builtins (reusable, one-time cost):                    17 KB
  hickory-proto library and dependencies:                                     350 KB
  Sum:                                                                        371 KB

We built prototypes and measured size with size-optimized settings. As expected, hickory-proto is not designed with embedded use in mind and is not optimized for size. As the Pixel modem is not tightly memory constrained, we prioritized community support and code quality, leaving code size optimizations as future work.

However, the additional code size may be a blocker for other embedded systems. This could be addressed by adding feature flags to conditionally compile only the required functionality; implementing this modularity would be valuable future work.

Hooking Rust up to the modem firmware

Before building the Rust DNS library, we defined several Rust unit tests to cover basic arithmetic, dynamic allocations, and FFI to verify the integration of Rust with the existing modem firmware code base.

Compile Rust code to staticlib

While using cargo is the default choice for compilation in the Rust ecosystem, it presents challenges when integrating it into existing build systems. We evaluated two options:

  1. Use cargo to build a staticlib before the modem builds, then add the produced staticlib to the linking step.
  2. Work directly with rustc and integrate the Rust compilation steps into the existing modem build system.

Option #1 does not scale if we are going to add more Rust components in the future, as linking multiple staticlibs may cause duplicate symbol errors. We chose option #2 as it scales more easily and allows tighter integration into our existing build system. Our existing C/C++ codebase uses Pigweed to drive the primary build system. Pigweed supports Rust targets (example) with direct calls to rustc through rust tools defined in GN.

We compiled all the Rust crates, including hickory-proto, its dependencies, and core, compiler_builtins, and alloc, to rlibs. Then, we created a staticlib target with a single lib.rs file which references all the rlib crates using the extern crate keyword.
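As a sketch, the aggregating lib.rs is little more than a list of extern crate declarations; aside from hickory_proto, the crate names below are illustrative, not the real firmware crate names.

```rust
// Sketch of the staticlib crate root. Each `extern crate` pulls the
// corresponding rlib into the final archive even though nothing here
// calls into it directly.
#![no_std]

extern crate alloc;
extern crate hickory_proto;
extern crate dns_shim; // hypothetical shim crate exposing the C API
```

Because every Rust component is an rlib folded into one staticlib, the linker sees a single Rust archive, avoiding the duplicate-symbol problem of linking several staticlibs.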

Build core, alloc, and compiler_builtins

Android’s Rust toolchain distributes the source code of core, alloc, and compiler_builtins, and we leveraged this for the modem. They can be included in the build graph by adding a GN target with crate_root pointing to the root lib.rs of each crate.

Pixel modem firmware already has a well-tested and specialized global memory allocation system to support some dynamic memory allocations. alloc support was added by implementing the GlobalAlloc trait with FFI calls to the allocator’s C APIs:

use core::alloc::{GlobalAlloc, Layout};

extern "C" {
    fn mem_malloc(size: usize, alignment: usize) -> *mut u8;
    fn mem_free(ptr: *mut u8, alignment: usize);
}

struct MemAllocator;

unsafe impl GlobalAlloc for MemAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        mem_malloc(layout.size(), layout.align())
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        mem_free(ptr, layout.align());
    }
}

#[global_allocator]
static ALLOCATOR: MemAllocator = MemAllocator;

Pixel modem firmware already implements a backend for the Pigweed crash facade as the global crash handler. Exposing it into Rust panic_handler through FFI unifies the crash handling for both Rust and C/C++ code.

#![no_std]
use core::panic::PanicInfo;

extern "C" {
    pub fn PwCrashBackend(signature: *const i8, file_name: *const i8, line: u32);
}

#[panic_handler]
fn panic(panic_info: &PanicInfo) -> ! {
    let mut filename = "";
    let mut line_number: u32 = 0;

    if let Some(location) = panic_info.location() {
        filename = location.file();
        line_number = location.line();
    }

    let mut cstr_buffer = [0u8; 128];
    // Never writes to the last byte to make sure `cstr_buffer` is always zero
    // terminated.
    let (_, writer) = cstr_buffer.split_last_mut().unwrap();
    for (place, ch) in writer.iter_mut().zip(filename.bytes()) {
        *place = ch;
    }

    unsafe {
        PwCrashBackend(
            "Rust panic\0".as_ptr() as *const i8,
            cstr_buffer.as_ptr() as *const i8,
            line_number,
        );
    }

    loop {}
}

Link Rust staticlib

The Pixel modem firmware linking has a step that calls the linker to link all the objects generated from C/C++ code. By using llvm-ar -x to extract object files from the Rust combined staticlib and supplying them to the linker, the Rust code appears in the final modem image.

We experienced a performance issue due to weak symbols during linking. The inclusion of Rust core and compiler_builtins caused unexpected power and performance regressions on various tests. Upon analysis, we realized that the optimized implementations of memset and memcpy provided by the modem firmware were accidentally replaced by those defined in compiler_builtins. This happened because both the compiler_builtins crate and the existing codebase define these symbols as weak, so the linker has no way to decide which one should take precedence. We fixed the regression by stripping the compiler_builtins objects from the staticlib before linking, using a one-line shell command:

llvm-ar -t <rust staticlib> | grep compiler_builtins | xargs llvm-ar -d <rust staticlib>

Integrating hickory-proto

Expose Rust API and calling back to C++

For the DNS parser, we declared the DNS response parsing API in C and then implemented the same API in Rust.

int32_t process_dns_response(uint8_t*, int32_t);

The Rust function returns an integer error code. The DNS answers received in the response must be written into in-memory data structures that are coupled with the original C implementation; therefore, we dispatch to the existing C functions that do this from the Rust implementation.

pub unsafe extern "C" fn process_dns_response(
    dns_response: *const u8,
    response_len: i32,
) -> i32 {
    // ... validate inputs `dns_response` and `response_len`.

    // SAFETY:
    // It is safe because `dns_response` is null checked above. `response_len`
    // is passed in, and is safe as long as it is set correctly by vendor code.
    match process_response(unsafe {
        slice::from_raw_parts(dns_response, response_len as usize)
    }) {
        Ok(()) => 0,
        Err(err) => err.into(),
    }
}

fn process_response(response: &[u8]) -> Result<()> {
    let response = hickory_proto::op::Message::from_bytes(response)?;
    let response = hickory_proto::xfer::DnsResponse::from_message(response)?;

    for answer in response.answers() {
        match answer.record_type() {
            hickory_proto::RecordType::... => {
                // SAFETY:
                // It is safe because the callback function does not store
                // references to the inputs or their members.
                unsafe {
                    callback_to_c_function(...)?;
                }
            }

            // ... more match arms omitted.
        }
    }

    Ok(())
}

In our case, the DNS response parsing API is simple enough to hand-write, while the callbacks into C functions for handling the response involve complex data type conversions. Therefore, we leveraged bindgen to generate FFI code for the callbacks.
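The bindings bindgen emits for such callbacks are essentially #[repr(C)] type definitions plus extern declarations. Below is a hypothetical example of the kind of struct binding it produces; the type and field names are invented for illustration, not taken from the real firmware headers.

```rust
// Hypothetical binding: a Rust struct mirroring a C-side record so it can be
// passed across the FFI boundary. #[repr(C)] guarantees C-compatible field
// ordering, padding, and alignment.
#[repr(C)]
#[derive(Debug, Clone, Copy)]
pub struct DnsAnswerRecord {
    pub addr: [u8; 4], // IPv4 address octets
    pub ttl: u32,      // time-to-live in seconds
}
```

With #[repr(C)], the layout is fixed by the C rules (here: 4 one-byte octets, then a 4-byte aligned u32, for 8 bytes total), which is exactly the guarantee hand-written conversions would otherwise have to maintain by hand.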

Build third-party crates

Even with all features disabled, hickory-proto introduces more than 30 dependent crates. With manually written build rules, it is difficult to ensure correctness, and they scale poorly when upgrading dependencies to new versions.

Fuchsia has developed cargo-gnaw to support building its third-party Rust crates. Cargo-gnaw works by invoking cargo metadata to resolve dependencies, then parsing the output and generating GN build rules. This ensures correctness and ease of maintenance.

Conclusion

The Pixel 10 series of phones marks a pivotal moment, being the first Pixel device to integrate a memory-safe language into its modem.

While replacing one piece of risky attack surface is itself valuable, this project lays the foundation for future integration of memory-safe parsers and code into the cellular baseband, ensuring the baseband’s security posture will continue to improve as development continues.

Special thanks to Armando Montanez, Bjorn Mellem, Boky Chen, Cheng-Yu Tsai, Dominik Maier, Erik Gilling, Ever Rosales, Hungyen Weng, Ivan Lozano, James Farrell, Jeffrey Vander Stoep, Jiacheng Lu, Jingjing Bu, Min Xu, Murphy Stein, Ray Weng, Shawn Yang, Sherk Chung, Stephan Chen, Stephen Hines.


Following our April 2024 announcement, Device Bound Session Credentials (DBSC) is now entering public availability for Windows users on Chrome 146, and expanding to macOS in an upcoming Chrome release. This project represents a significant step forward in our ongoing efforts to combat session theft, which remains a prevalent threat in the modern security landscape.

Session theft typically occurs when a user inadvertently downloads malware onto their device. Once active, the malware can silently extract existing session cookies from the browser or wait for the user to log in to new accounts, before exfiltrating these tokens to an attacker-controlled server. Infostealer malware families, such as LummaC2, have become increasingly sophisticated at harvesting these credentials. Because cookies often have extended lifetimes, attackers can use them to gain unauthorized access to a user’s accounts without ever needing their passwords; this access is then often bundled, traded, or sold among threat actors.

Crucially, once sophisticated malware has gained access to a machine, it can read the local files and memory where browsers store authentication cookies. As a result, there is no reliable way to prevent cookie exfiltration using software alone on any operating system. Historically, mitigating session theft relied on detecting the stolen credentials after the fact using a complex set of abuse heuristics – a reactive approach that persistent attackers could often circumvent. DBSC fundamentally changes the web’s capability to defend against this threat by shifting the paradigm from reactive detection to proactive prevention, ensuring that successfully exfiltrated cookies cannot be used to access users’ accounts.

How DBSC Works

DBSC protects against session theft by cryptographically binding authentication sessions to a specific device. It does this using hardware-backed security modules, such as the Trusted Platform Module (TPM) on Windows and the Secure Enclave on macOS, to generate a unique public/private key pair that cannot be exported from the machine. The issuance of new short-lived session cookies is contingent upon Chrome proving possession of the corresponding private key to the server. Because attackers cannot steal this key, any exfiltrated cookies quickly expire and become useless to those attackers. This design allows large and small websites to upgrade to secure, hardware-bound sessions by adding dedicated registration and refresh endpoints to their backends, while maintaining complete compatibility with their existing front-end. The browser handles the complex cryptography and cookie rotation in the background, allowing the web app to continue using standard cookies for access just as it always has.
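The binding-and-refresh loop described above can be sketched as a drastically simplified simulation. Real DBSC uses asymmetric, hardware-bound keys and a standardized challenge protocol; this sketch substitutes a toy hash-based “signature” and invented names purely to show the shape of the flow.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for a TPM-backed signature. In the real protocol the private
// key never leaves the secure hardware and signing uses asymmetric crypto.
fn toy_sign(private_key: u64, challenge: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (private_key, challenge).hash(&mut h);
    h.finish()
}

struct Server {
    registered_key: u64, // key registered at session start (toy scheme)
    challenge: u64,      // fresh challenge for this refresh round
}

impl Server {
    // Issues a new short-lived cookie only if the client proves possession
    // of the key that was bound to the session when it was created.
    fn refresh_cookie(&self, proof: u64) -> Option<&'static str> {
        if proof == toy_sign(self.registered_key, self.challenge) {
            Some("short-lived-cookie")
        } else {
            None
        }
    }
}
```

A malware operator who exfiltrates only the cookie cannot produce the proof of possession, so the stolen cookie expires without ever being refreshable.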

Google rolled out an early version of this protocol over the last year. For sessions protected by DBSC, we have observed a significant reduction in session theft since its launch.

An overview of the DBSC protocol showing the interaction between the browser and server.

Private by design

A core tenet of the DBSC architecture is the preservation of user privacy. Each session is backed by a distinct key, preventing websites from using these credentials to correlate a user’s activity across different sessions or sites on the same device. Furthermore, the protocol is designed to be lean: it does not leak device identifiers or attestation data to the server beyond the per-session public key required to certify proof of possession. This minimal information exchange ensures DBSC helps secure sessions without enabling cross-site tracking or acting as a device fingerprinting mechanism.

Engagement with the ecosystem

DBSC was designed from the beginning to be an open web standard, developed through the W3C process and adopted by the Web Application Security Working Group. Through this process we partnered with Microsoft to ensure the standard works for the web, and received input from many industry stakeholders responsible for web security.

Additionally, over the past year, we have conducted two Origin Trials to ensure DBSC effectively serves the requirements of the broader web community. Many web platforms, including Okta, actively participated in these trials, conducted their own testing, and provided essential feedback to ensure the protocol effectively addresses their diverse needs.

If you are a web developer and are looking for a way to secure your users against session theft, refer to our developer guide for implementation details. Additionally, all the details about DBSC can be found in the spec and the corresponding GitHub repository. Feel free to use the issues page to report bugs or request features.

Future improvements

As we continue to evolve the DBSC standard, future iterations will focus on increasing support across diverse ecosystems and introducing advanced capabilities tailored for complex enterprise environments. Key areas of ongoing development include:

  • Securing Federated Identity: In modern enterprise environments, Single Sign-On (SSO) is ubiquitous. We are expanding the DBSC protocol to support cross-origin bindings, ensuring that a relying party (RP) session remains continuously bound to the same original device key used by the Identity Provider (IdP). This guarantees that the high-assurance security of the initial device binding is maintained throughout the entire federated login process, creating an unbroken chain of trust.
  • Advanced Registration Capabilities: While DBSC provides robust protection for established cookies, some environments require an even stronger foundation when the session is first created. We are developing mechanisms to bind DBSC sessions to pre-existing, trusted key material rather than generating a new key at sign-in. This advanced capability enables websites to integrate complementary technologies, such as mTLS certificates or hardware security keys, creating a highly secure registration environment.
  • Broader Device Support: We are also actively exploring the potential addition of software-based keys to extend protections to devices without dedicated secure hardware.


Indirect prompt injection (IPI) is an evolving threat vector targeting users of complex AI applications with multiple data sources, such as Workspace with Gemini. This technique enables the attacker to influence the behavior of an LLM by injecting malicious instructions into the data or tools used by the LLM as it completes the user’s query. This may even be possible without any input directly from the user.

IPI is not the kind of technical problem you “solve” and move on. Sophisticated LLMs with increasing use of agentic automation combined with a wide range of content create an ultra-dynamic and evolving playground for adversarial attacks. That’s why Google takes a sophisticated and comprehensive approach to these attacks. We’re continuously improving LLM resistance to IPI attacks and launching AI application capabilities with ever-improving defenses. Staying ahead of the latest indirect prompt injection attacks is critical to our mission of securing Workspace with Gemini. 

In our previous blog “Mitigating prompt injection attacks with a layered defense strategy”, we reviewed the layered architecture of our IPI defenses. In this blog, we’ll share more detail on the continuous approach we take to improve these defenses and to solve for new attacks.

New attack discovery

By proactively discovering and cataloging new attack vectors through internal and external programs, we can identify vulnerabilities and deploy robust defenses ahead of adversarial activity. 

Human Red-Teaming

Human Red-Teaming uses adversarial simulations to uncover security and safety vulnerabilities. Specialized teams execute attacks based on realistic user profiles to exploit weaknesses, coordinating with product teams to resolve identified issues.

Automated Red-Teaming

Automated Red-Teaming is done via dynamic, machine-learning-driven frameworks to stress-test environments. By algorithmically generating and iterating on attack payloads, we can mimic the behavior of sophisticated threats at scale. This allows us to map complex attack paths and validate the effectiveness of our security controls across a much wider range of edge cases than manual testing could achieve on its own.

Google AI Vulnerability Rewards Program (VRP)

The Google AI Vulnerability Rewards Program (VRP) is a critical tool for enabling collaboration between Google and external security researchers who discover new attacks leveraging IPI. Through this VRP, we recognize and reward contributors for their research.  We also host regular, live hacking events where we provide invited researchers access to pre-release features, proactively uncovering novel vulnerabilities. These partnerships enable Google to quickly validate, reproduce, and resolve externally-discovered issues.

Publicly disclosed AI attacks 

Google utilizes open-source intelligence feeds to stay on top of the latest publicly disclosed IPI attacks, across social media, press releases, blogs, and more. From there, new AI vulnerabilities are sourced, reproduced, and catalogued internally to ensure our products are not impacted. 

Vulnerability catalog 

All newly discovered vulnerabilities go through a comprehensive analysis process performed by the Google Trust, Security, & Safety teams. Each new vulnerability is reproduced, checked for duplicates, mapped to an attack technique and impact category, and assigned to relevant owners. The combination of new attack discovery sources and the vulnerability catalog process helps Google stay on top of the latest attacks in an actionable manner.


Synthetic data generation 

After we discover, curate, and catalog new attacks, we use Simula to generate synthetic data expanding these new attacks. This process is essential because it allows the team to develop attack variants for completeness and coverage, and to prepare new training and validation data sets. This accelerated workflow has boosted synthetic data generation by 75%, supporting large-scale defense model evaluation and retraining, as well as updating the data set used for calculating and reporting on defense effectiveness.

Ongoing defense refinement 

Continually updating and enhancing our defense mechanisms allows us to address a broader range of attack techniques, effectively reducing the overall attack surface. Updating each defense type requires different tasks, from config updates, to prompt engineering and ML model retraining. 

Deterministic Defenses

Deterministic defenses, including user confirmation, URL sanitization, and tool chaining policies, are designed for rapid response against new or emerging prompt injection attacks by relying on simple configuration updates. These defenses are governed by a centralized Policy Engine, with configurations for policies like baseline tool calls, URL sanitization, and tool chaining. For immediate threats, this configuration-based system facilitates a streamlined process for “point fixes,” such as regex takedowns, providing an agile defense layer that acts faster than traditional ML/LLM model refresh cycles.
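A configuration-driven “point fix” of the kind described above can be sketched as follows. This is a hypothetical illustration, not the production Policy Engine: blocked URL patterns live in data, so responding to a new attack is a config update rather than a model retrain, and simple substring matching stands in for the real policy logic.

```rust
// Hypothetical sketch of a config-driven URL sanitizer.
struct UrlPolicy {
    // Updated via configuration pushes, not code or model changes.
    blocked_substrings: Vec<String>,
}

impl UrlPolicy {
    // Drops any URL matching a blocked pattern before it reaches the user.
    fn sanitize(&self, urls: Vec<String>) -> Vec<String> {
        urls.into_iter()
            .filter(|u| !self.blocked_substrings.iter().any(|b| u.contains(b.as_str())))
            .collect()
    }
}
```

Because the defense is data-driven, a newly reported exfiltration domain can be blocked as soon as the configuration propagates.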

ML-Based Defenses

After generating synthetic data that expands new attacks into variants, the next step is to retrain our ML-based defenses to mitigate these new attacks. We partition the synthetic data described above into separate training and validation sets to ensure performance is evaluated against held-out examples. This approach ensures repeatability, data consistency for fixed training/testing, and establishes a scalable architecture to support future extensions towards fully automated model refresh.
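The train/validation partitioning step can be sketched as below. This is a simplified, hypothetical illustration of the idea: a hash-based split keeps the assignment of each synthetic sample stable across runs, so evaluation is always performed on the same held-out examples.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Deterministically partition samples into (training, validation) sets.
// Hashing the sample itself makes the split reproducible run to run.
fn split<'a>(samples: &[&'a str], validation_pct: u64) -> (Vec<&'a str>, Vec<&'a str>) {
    let (mut train, mut valid) = (Vec::new(), Vec::new());
    for s in samples {
        let mut h = DefaultHasher::new();
        s.hash(&mut h);
        if h.finish() % 100 < validation_pct {
            valid.push(*s);
        } else {
            train.push(*s);
        }
    }
    (train, valid)
}
```

Keeping the validation set fixed and held out is what makes before/after comparisons of retrained models meaningful.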

LLM-Based Defenses

Using the new synthetic data examples, our LLM-based defenses go through prompt engineering with refined system instructions. The goal is to iteratively optimize these prompts against agreed-upon defense effectiveness metrics, ensuring the models remain resilient against evolving threat vectors.

Gemini Model Hardening 

Beyond system-level guardrails and application-level defenses, we prioritize ‘model hardening’, a process that improves the Gemini model’s internal capability to identify and ignore harmful instructions within data. By utilizing synthetic datasets and fresh attack patterns, we can model various threat iterations. This enables us to strengthen the Gemini model’s ability to disregard harmful embedded commands while following the user’s intended request. Through this process of model hardening, Gemini has become significantly more adept at detecting and disregarding injected instructions. This has led to a reduction in the success rate of attacks without compromising the model’s efficiency during routine operations.

Defense effectiveness 

To measure the real-world impact of defense improvements, we simulate attacks against many Workspace features. This process leverages the newly generated synthetic attack data described on this blog, to create a robust, end-to-end evaluation. The simulation is run against multiple Workspace apps, such as Gmail and Docs, using a standardized set of assets to ensure reliable results. To determine the exact impact of a defense improvement (e.g., an updated ML model or a new LLM prompt optimization), the end-to-end evaluation is run with and without the defense enabled. This comparative testing provides the essential “before and after” metrics needed to validate defense efficacy and drive continuous improvement.
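The “before and after” comparison can be sketched as a minimal, hypothetical harness: run the same synthetic attack set with a defense disabled and then enabled, and compare attack success rates. The toy defense below is invented for illustration only.

```rust
// Fraction of attacks that succeed, optionally filtered by a defense.
fn success_rate(attacks: &[&str], defense: Option<fn(&str) -> bool>) -> f64 {
    let succeeded = attacks
        .iter()
        .copied()
        .filter(|a| match defense {
            Some(blocks) => !blocks(a), // attack succeeds only if not blocked
            None => true,               // no defense: every attack succeeds
        })
        .count();
    succeeded as f64 / attacks.len() as f64
}

// Toy defense assumed for illustration: flag anything mentioning "ignore".
fn toy_defense(attack: &str) -> bool {
    attack.contains("ignore")
}
```

The difference between the two rates is the efficacy metric that drives whether a defense change ships.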

Moving forward 

Our commitment to AI security is rooted in the principle that every day you’re safer with Google. While the threat landscape of indirect prompt injection evolves, we are building Workspace with Gemini to be a secure and trustworthy platform for AI-first work. IPI is a complex security challenge, which requires a defense-in-depth strategy and continuous mitigation approach. To get there, we’re combining world-class security research, automated pipelines, and advanced ML/LLM-based models. This robust and iterative framework helps to ensure we not only stay ahead of evolving threats but also provide a powerful, secure experience for both our users and customers.


2025 marked a special year in the history of vulnerability rewards and bug bounty programs at Google: our 15th anniversary 🎉🎉🎉! Originally started in 2010, our vulnerability reward program (VRP) has seen constant additions and expansions over the past decade and a half. This growth clearly indicates the value these programs contribute to the safety and security of Google and its users, and highlights their acceptance by the external research community, without which such programs could not function.

Coming back to 2025 specifically, our VRP once again confirmed the ongoing value of engaging with the external security research community to make Google and its products safer. This was more evident than ever as we awarded over $17 million (an all-time high and more than 40% increase compared to 2024!) to over 700 researchers based in countries around the globe – across all of our programs.

Vulnerability Reward Program 2025 in Numbers

Want to learn more about who’s reporting to the VRP? Check out our Leaderboard on the Google Bug Hunters site.

VRP Highlights in 2025

In 2025 we made a series of changes and improvements to our VRP and related initiatives, and continued to invest in the security research community through a series of focused events:

  • The new, dedicated AI VRP was launched, underscoring the importance of this space to Google and its relevance for external researchers. Previously organized as a part of the Abuse VRP, moving into a dedicated VRP has gone hand in hand with improvements to the rules, offering researchers more clarity on scope and reward amounts.

  • Similarly, the Chrome VRP now also includes reward categories for problems found in AI features.

  • We launched a patch rewards program for OSV-SCALIBR, Google’s open source tool for finding vulnerabilities in software dependencies. Contributors are rewarded for providing novel OSV-SCALIBR plugins for inventory, vulnerability, or secret detection that expand the tool’s scanning capabilities. Besides strengthening the tool’s capabilities for all users, user submissions already helped us uncover and remediate a number of leaked secrets internally!

  • As part of Google’s Cybersecurity Awareness Month campaign in October, we hosted our very own security conference in Mexico City, ESCAL8. The conference included init.g(mexico), our cybersecurity workshop for students, HACKCELER8, Google’s CTF finals, and a Safer with Google seminar, sharing technical thought leadership with Mexican government officials. 

  • bugSWAT, our special invite-only live hacking event, saw several editions in 2025 and delivered some outstanding findings across different areas:

    • We hosted our first dedicated AI bugSWAT (Tokyo) in April which yielded a whopping 70+ reports filed and over $400,000 in rewards issued. 

    • We continued the momentum in early summer with Cloud bugSWAT (Sunnyvale) in June resulting in 130 reports, with $1,600,000 in rewards paid out.

    • Next in line was bugSWAT Las Vegas in August, leading to 77 reports and rewards of $380,000. 

    • And finally, as part of ESCAL8 in Mexico City, bugSWAT Mexico focused on many different targets and spaces including AI, Android, and Cloud, and resulted in the filing of 107 reports, totalling $566,000 in rewards to date. 

Looking for more details? See the extended version of this post on the Security Engineering blog for reports from individual VRPs such as Android, Abuse, AI, Cloud, Chrome, and OSS, including specifics concerning high-impact bug reports and focus areas of security research. 

What’s coming in 2026

In 2026, we remain fully committed to fostering collaboration, innovation, and transparency with the security community by hosting several bugSWAT events throughout the year, and following up with the next edition of our cybersecurity conference, ESCAL8. More broadly, our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services – all of which is only possible in collaboration with the external community of researchers we are so lucky to collaborate with! 

In this spirit, we’d like to extend a huge thank you to our bug hunter community for helping us make Google products and platforms safer and more secure for our users around the world – and we invite researchers not yet engaged with the Vulnerability Reward Program to join us in our mission to keep Google safe (check out our programs for inspiration 🙂)!

Thank you to Tony Mendez, Dirk Göhmann, Alissa Scherchen, Krzysztof Kotowicz, Martin Straka, Michael Cote, Sam Erb, Jason Parsons, Alex Gough, and Mihai Maruseac. 

Tip: Want to be informed of new developments and events around our Vulnerability Reward Program? Follow the Google VRP channel on X to stay in the loop and be sure to check out the Security Engineering blog, which covers topics ranging from VRP updates to security practices and vulnerability descriptions!
