Secure connections are the backbone of the modern web, but a certificate is only as trustworthy as the validation process and issuance practices behind it. Recently, the Chrome Root Program and the CA/Browser Forum have taken decisive steps toward a more secure internet by adopting new security requirements for HTTPS certificate issuers.

These initiatives, driven by Ballots SC-080, SC-090, and SC-091, will sunset 11 legacy methods for Domain Control Validation. By retiring these outdated practices, which rely on weaker verification signals like physical mail, phone calls, or emails, we are closing potential loopholes for attackers and pushing the ecosystem toward automated, cryptographically verifiable security.

To allow affected website operators to transition smoothly, the deprecation will be phased in, with its full security value realized by March 2028.

This effort is a key part of our public roadmap, “Moving Forward, Together,” launched in 2022. Our vision is to improve security by modernizing infrastructure and promoting agility through automation. While “Moving Forward, Together” sets the aspirational direction, the recent updates to the TLS Baseline Requirements turn that vision into policy. This builds on our momentum from earlier this year, including the successful advocacy for the adoption of other security-enhancing initiatives as industry-wide standards.

What’s Domain Control Validation?

Domain Control Validation is a security-critical process designed to ensure certificates are only issued to the legitimate domain operator. This prevents unauthorized entities from obtaining a certificate for a domain they do not control. Without this check, an attacker could obtain a valid certificate for a legitimate website and use it to impersonate that site or intercept web traffic.

Before issuing a certificate, a Certification Authority (CA) must verify that the requestor legitimately controls the domain. Most modern validation relies on “challenge-response” mechanisms: for example, a CA might provide a random value for the requestor to place in a specific location, like a DNS TXT record, which the CA then verifies.
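
To make the shape of that exchange concrete, here is a minimal sketch of a DNS-based check from the CA’s side. This is an illustration only, not a CA implementation: real issuance follows the Baseline Requirements (including controls such as multi-perspective validation), and the _validation label and helper names below are invented for the example. It uses the third-party dnspython library.

    import secrets

    import dns.exception
    import dns.resolver  # third-party: dnspython

    def issue_challenge() -> str:
        # The CA generates a high-entropy random value for the requestor.
        return secrets.token_urlsafe(32)

    def verify_dns_txt(domain: str, token: str) -> bool:
        # The requestor publishes the token in a TXT record; the
        # "_validation" label is illustrative, not a standardized name.
        try:
            answers = dns.resolver.resolve(f"_validation.{domain}", "TXT")
        except dns.exception.DNSException:
            return False
        published = {b"".join(rdata.strings).decode() for rdata in answers}
        return token in published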

Historically, other methods validated control through indirect means, such as looking up contact information in WHOIS records or sending an email to a domain contact. These methods have repeatedly been shown to be vulnerable, and the recent efforts retire these weaker checks in favor of robust, automated alternatives.

Raising the floor of security

The recently passed CA/Browser Forum Server Certificate Working Group Ballots introduce a phased sunset of the following Domain Control Validation methods. Existing alternative methods offer stronger security assurances against attackers trying to obtain fraudulent certificates – and those alternatives are getting stronger over time, too.

Sunsetted methods relying on email:

Sunsetted methods relying on phone:

Sunsetted method relying on a reverse lookup:

For everyday users, these changes are invisible – and that’s the point. But behind the scenes, they make it harder for attackers to trick a CA into issuing a certificate for a domain they don’t control. This reduces the risk that stale or indirect signals (like outdated WHOIS data, complex phone and email ecosystems, or inherited infrastructure) can be abused. These changes push the ecosystem toward standardized (e.g., ACME), modern, and auditable Domain Control Validation methods, and they increase agility and resilience by encouraging site owners to adopt those methods, creating opportunities for faster and more efficient certificate lifecycle management through automation.

These initiatives remove weak links in how trust is established on the internet. That leads to a safer browsing experience for everyone, not just users of a single browser, platform, or website.

Last year, Google’s Android Red Team partnered with Arm to conduct an in-depth security analysis of the Mali GPU, a component used in billions of Android devices worldwide. This collaboration was a significant step in proactively identifying and fixing vulnerabilities in the GPU software and firmware stack.

While finding and fixing individual bugs is crucial, and progress continues toward eliminating entire classes of them, making bugs unreachable by restricting the attack surface is another effective and often faster way to improve security. This post details our efforts, in partnership with Arm, to further harden the GPU by reducing the driver’s attack surface.

The Growing Threat: Why GPU Security Matters

The Graphics Processing Unit (GPU) has become a critical and attractive target for attackers due to its complexity and privileged access to the system. The scale of this threat is significant: since 2021, the majority of Android kernel driver-based exploits have targeted the GPU. These exploits primarily target the interface between the User-Mode Driver (UMD) and the highly privileged Kernel-Mode Driver (KMD), where flaws can be exploited by malicious input to trigger memory corruption.

Partnership with Arm

Our goal is to raise the bar on GPU security, ensuring the Mali GPU driver and firmware remain highly resilient against potential threats. We partnered with Arm to conduct an analysis of the Mali driver, used on approximately 45% of Android devices. This collaboration was crucial for understanding the driver’s attack surface and identifying areas that posed a security risk, but were not necessary for production use.

The Right Tool for the Job: Hardening with SELinux

One of the key findings of our investigation was the opportunity to restrict access to certain GPU IOCTLs. IOCTLs are the kernel driver’s channel for user-space input and output, and therefore its primary attack surface. This approach builds on earlier kernel hardening efforts, such as those described in the 2016 post Protecting Android with More Linux Security. Mali IOCTLs can be broadly categorized as:

  • Unprivileged: Necessary for normal operation.
  • Instrumentation: Used by developers for profiling and debugging.
  • Restricted: Should not be used by applications in production. This includes IOCTLs which are intended only for GPU development, as well as IOCTLs which have been deprecated and are no longer used by a device’s current User-Mode Driver (UMD) version.

Our goal is to block access to deprecated and debug IOCTLs in production. Instrumentation IOCTLs are intended for profiling tools that monitor system GPU performance, not for direct use by applications in production; access to them is therefore restricted to shell or applications marked as debuggable. Unprivileged IOCTLs remain accessible to regular applications.

A Staged Rollout

We took an iterative, staged approach to rolling this out on devices using the Mali GPU. This allowed us to carefully monitor real-world usage and collect data to validate the policy, minimizing the risk of breaking legitimate applications before moving to broader adoption:

  1. Opt-In Policy: We started with an “opt-in” policy. We created a new SELinux attribute, gpu_harden, that disallowed instrumentation ioctls. We then selectively applied this attribute to certain system apps to test the impact. We used allowxperm rules to audit, but not deny, access to the intended resources, and monitored the logs to ensure no breakage (one way to express such an audit-only stage is sketched after this list).
  2. Opt-Out Policy: Once we were confident that our approach was sound, we moved to an “opt-out” policy. We created a gpu_debug domain that would allow access to instrumentation ioctls. All applications were hardened by default, but developers could opt-out by:
    • Running on a rooted device.
    • Setting the android:debuggable="true" attribute in their app’s manifest.
    • Requesting a permanent exception in the SELinux policy for their application.
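
The following is an illustrative sketch of that audit-only stage, not the production policy: the gpu_harden attribute and gpu_device type follow AOSP naming, the ioctl values match the sample lists later in this post, and the auditallowxperm statement does the logging.

    # Illustrative audit-only stage; names and ioctl values are examples.
    attribute gpu_harden;
    # Keep every ioctl allowed for hardened apps, so nothing breaks yet...
    allowxperm gpu_harden gpu_device:chr_file ioctl { 0x0000-0xffff };
    # ...but log whenever a hardened app issues an instrumentation ioctl.
    auditallowxperm gpu_harden gpu_device:chr_file ioctl { 0x2220-0x2222 };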

This approach allowed us to roll out the new security policy broadly while minimizing the impact on developers.

Step-by-Step Instructions for Adding Your SELinux Policy

To help our partners and the broader ecosystem adopt similar hardening measures, this section provides a practical, step-by-step guide for implementing a robust SELinux policy to filter GPU ioctls. This example is based on the policy we implemented for the Mali GPU on Android devices.

The core principle is to create a flexible, platform-level macro that allows each device to define its own specific lists of GPU ioctl commands to be restricted. This approach separates the general policy logic from the device-specific implementation.

Official documentation detailing the added macro and GPU security policy is available at:

  • SELinux Hardening Macro: GPU Syscall Filtering
  • Android Security Change: Android 16 Behavior Changes

Step 1: Utilize the Platform-Level Hardening Macro

The first step is to use a generic macro, built into the platform’s system/sepolicy, that can be used by any device. This macro establishes the framework for filtering different categories of ioctls.

In system/sepolicy/public/te_macros, a new macro is created that allows device-specific policies to supply their own lists of ioctls to be filtered. The macro is designed to do the following (an illustrative sketch follows the list):

  • Allow all applications (appdomain) access to a defined list of unprivileged ioctls.
  • Restrict access to sensitive “instrumentation” ioctls, only permitting them for debugging tools like shell or runas_app when the application is debuggable.
  • Block access to privileged ioctls based on the application’s target SDK version, maintaining compatibility for older applications.
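
The sketch below is our simplified illustration of such a macro, not the actual platform macro: the real name, signature, and target-SDK handling live in system/sepolicy/public/te_macros (linked above), and the filter_gpu_ioctls name and argument order here are hypothetical. It assumes the base ioctl permission on the device node is already granted elsewhere, and it omits the SDK-version gating for brevity.

    # Hypothetical macro; see system/sepolicy/public/te_macros for the real one.
    # $1 = GPU device type, $2 = unprivileged ioctls, $3 = instrumentation
    # ioctls, $4 = restricted ioctls.
    define(`filter_gpu_ioctls', `
    allowxperm appdomain $1:chr_file ioctl { $2 };
    allowxperm { shell runas_app } $1:chr_file ioctl { $2 $3 };
    neverallowxperm { appdomain -shell -runas_app } $1:chr_file ioctl { $3 $4 };
    ')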

Step 2: Define Device-Specific IOCTL Lists

With the platform macro in place, you can now create a device-specific implementation. This involves defining the exact ioctl commands used by your particular GPU driver.

  1. Create an ioctl_macros file in your device’s sepolicy directory (e.g., device/your_company/your_device/sepolicy/ioctl_macros).
  2. Define the ioctl lists inside this file, categorizing them as needed. Based on our analysis, we recommend at least three lists matching the categories above – unprivileged, instrumentation, and restricted – containing the hexadecimal ioctl numbers specific to your driver.

    For example, you can define your IOCTL lists as follows:

    define(`unpriv_gpu_ioctls', `0x0000, 0x0001, 0x0002')
    define(`restricted_ioctls', `0x1110, 0x1111, 0x1112')
    define(`instrumentation_gpu_ioctls', `0x2220, 0x2221, 0x2222')

Arm has provided an official categorization of its IOCTLs in Documentation/ioctl-categories.rst of its r54p2 release, and this list will continue to be maintained in future driver releases.

Step 3: Apply the Policy to the GPU Device

Now, you apply the policy to the GPU device node using the macro you created.

  1. Create a gpu.te file in your device’s sepolicy directory.
  2. Call the platform macro from within this file, passing in the device label and the ioctl lists you just defined.
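
For example, using the hypothetical filter_gpu_ioctls macro sketched in Step 1 and the lists defined in Step 2, a device’s gpu.te might contain something like this (the quoting defers expansion of the list macros until they are used inside the macro body):

    # device/your_company/your_device/sepolicy/gpu.te
    # Illustrative invocation; macro name and argument order match the
    # hypothetical sketch in Step 1.
    filter_gpu_ioctls(gpu_device, `unpriv_gpu_ioctls', `instrumentation_gpu_ioctls', `restricted_ioctls')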

Step 4: Test, Refine, and Enforce

As with any SELinux policy development, the process should be iterative: start in an audit-only configuration, monitor the logs, refine your ioctl lists, and only then turn on enforcement. This is consistent with best practices for SELinux policy development outlined in the Android Open Source Project documentation.
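
A minimal sketch of that audit-then-enforce progression, reusing the example ioctl values from Step 2 (illustrative only):

    # Stage 1 (audit): log use of the instrumentation and restricted ioctls
    # (the example values from Step 2) without denying anything.
    auditallowxperm appdomain gpu_device:chr_file ioctl { 0x1110-0x1112 0x2220-0x2222 };

    # Stage 2 (enforce): once the logs stay clean, allow only the unprivileged
    # set; any ioctl absent from the allowxperm rules is then denied.
    allowxperm appdomain gpu_device:chr_file ioctl { 0x0000-0x0002 };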

Conclusion

Attack surface reduction is an effective approach to security hardening because it renders vulnerabilities unreachable. The technique is particularly valuable because it protects users not only against known vulnerabilities, but also against those not yet discovered and those that might be introduced in the future. This effort spans Android and Android OEMs, and required close collaboration with Arm. The Android security team is committed to collaborating with ecosystem partners to drive broader adoption of this approach and further harden the GPU.

Acknowledgments

Thank you to Jeffrey Vander Stoep for his valuable suggestions and extensive feedback on this post.

Chrome has been advancing the web’s security for well over 15 years, and we’re committed to meeting new challenges and opportunities with AI. Billions of people trust Chrome to keep them safe by default, and this is a responsibility we take seriously. Following the recent launch of Gemini in Chrome and the preview of agentic capabilities, we want to share our approach and some new innovations to improve the safety of agentic browsing.

The primary new threat facing all agentic browsers is indirect prompt injection. It can appear on malicious sites, in third-party content embedded in iframes, or in user-generated content like reviews, and can cause the agent to take unwanted actions such as initiating financial transactions or exfiltrating sensitive data. Given this open challenge, we are investing in a layered defense that includes both deterministic and probabilistic protections to make it difficult and costly for attackers to cause harm.

Designing safe agentic browsing for Chrome has involved deep collaboration with security experts across Google. We built on Gemini’s existing protections and agent security principles, and have implemented several new layers for Chrome.

We’re introducing a user alignment critic where the agent’s actions are vetted by a separate model that is isolated from untrusted content. We’re also extending Chrome’s origin-isolation capabilities to constrain what origins the agent can interact with, to just those that are relevant to the task. Our layered defense also includes user confirmations for critical steps, real-time detection of threats, and red-teaming and response. We’ll step through these layers below.

Checking agent outputs with User Alignment Critic

The main planning model for Gemini uses page content shared in Chrome to decide what action to take next. Exposure to untrusted web content means it is inherently vulnerable to indirect prompt injection. We use techniques like spotlighting that direct the model to strongly prefer following user and system instructions over what’s on the page, and we’ve upstreamed known attacks to train the Gemini model to avoid falling for them.

To further bolster model alignment beyond spotlighting, we’re introducing the User Alignment Critic — a separate model built with Gemini that acts as a high-trust system component. This architecture is inspired partially by the dual-LLM pattern as well as CaMeL research from Google DeepMind.

A flow chart that depicts the User Alignment Critic: a trusted component that vets each action before it reaches the browser.

The User Alignment Critic runs after planning is complete to double-check each proposed action. Its primary focus is task alignment: determining whether the proposed action serves the user’s stated goal. If the action is misaligned, the Alignment Critic will veto it. This component is architected to see only metadata about the proposed action, and not any unfiltered, untrusted web content, ensuring it cannot be poisoned directly from the web. It has less context, but it also has a simpler job – just approve or reject an action.

This is a powerful, extra layer of defense against both goal-hijacking and data exfiltration within the action step. When an action is rejected, the Critic provides feedback to the planning model to re-formulate its plan, and the planner can return control to the user if there are repeated failures.
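
The control flow can be summarized in a short conceptual sketch. This is our illustration of the dual-model pattern, not Chrome’s actual implementation; all type and method names are invented:

    # Conceptual sketch of the planner/critic split; not Chrome's code.
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class ActionMetadata:
        kind: str           # e.g. "click", "type", "navigate"
        target_origin: str  # where the action would land
        summary: str        # short description; never raw page content

    @dataclass
    class Verdict:
        approved: bool
        feedback: str = ""

    class Planner(Protocol):
        def propose(self, page_content: str, goal: str) -> ActionMetadata: ...
        def replan(self, feedback: str) -> ActionMetadata: ...

    class Critic(Protocol):
        # Deliberately never receives page content: the critic is isolated
        # from untrusted web input and judges only metadata against the goal.
        def review(self, goal: str, action: ActionMetadata) -> Verdict: ...

    MAX_VETOES = 3  # after repeated rejections, hand control back to the user

    def run_step(planner: Planner, critic: Critic,
                 page_content: str, goal: str) -> ActionMetadata | None:
        action = planner.propose(page_content, goal)  # exposed to the page
        for _ in range(MAX_VETOES):
            verdict = critic.review(goal, action)     # high-trust check
            if verdict.approved:
                return action
            action = planner.replan(verdict.feedback)
        return None  # returning None here means the user takes over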

Enforcing stronger security boundaries with Origin Sets

Site Isolation and the same-origin policy are fundamental boundaries in Chrome’s security model and we’re carrying forward these concepts into the agentic world. By their nature, agents must operate across websites (e.g. collecting ingredients on one site and filling a shopping cart on another). But if an unrestricted agent is compromised and can interact with arbitrary sites, it can create what is effectively a Site Isolation bypass. That can have a severe impact when the agent operates on a local browser like Chrome, with logged-in sites vulnerable to data exfiltration. To address this, we’re extending those principles with Agent Origin Sets. Our design architecturally limits the agent to only access data from origins that are related to the task at hand, or data that the user has chosen to share with the agent. This prevents a compromised agent from acting arbitrarily on unrelated origins.

For each task on the web, a trustworthy gating function decides which origins proposed by the planner are relevant to the task. The design separates these into two sets, tracked for each session:

  • Read-only origins are those from which Gemini is permitted to consume content. If an iframe’s origin isn’t on the list, the model will not see that content.
  • Read-writable origins are those on which the agent is allowed to actuate (e.g., click, type) in addition to reading from.

This delineation enforces that only data from a limited set of origins is available to the agent, and this data can only be passed on to the writable origins. This bounds the threat vector of cross-origin data leaks. This also gives the browser the ability to enforce some of that separation, such as by not even sending to the model data that is outside the readable set. This reduces the model’s exposure to unnecessary cross-site data. Like the Alignment Critic, the gating functions that calculate these origin sets are not exposed to untrusted web content. The planner can also use context from pages the user explicitly shared in that session, but it cannot add new origins without the gating function’s approval. Outside of web origins, the planning model may ingest other non-web content such as from tool calls, so we also delineate those into read-vs-write calls and similarly check that those calls are appropriate for the task.
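
A rough sketch of that two-set bookkeeping is below. This is our illustration of the design, not Chrome’s implementation, and the names are invented:

    # Rough illustration of per-session Agent Origin Sets; not Chrome's code.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class AgentOriginSets:
        readable: set[str] = field(default_factory=set)  # content may reach the model
        writable: set[str] = field(default_factory=set)  # agent may also act here

        def can_read(self, origin: str) -> bool:
            # Content outside both sets is never sent to the model.
            return origin in self.readable or origin in self.writable

        def can_act(self, origin: str) -> bool:
            return origin in self.writable

        def request_origin(self, origin: str, gate: Callable[[str], bool]) -> bool:
            # Planner-proposed origins are admitted only if the trusted gating
            # function (which sees no untrusted page content) deems them
            # relevant to the task.
            if gate(origin):
                self.readable.add(origin)
                return True
            return False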

Iframes from origins that aren’t related to the user’s task are not shown to the model.

Page navigations can happen in several ways: if the planner decides to navigate to a new origin that isn’t yet in the readable set, that origin is checked for relevancy by a variant of the User Alignment Critic before Chrome adds it and starts the navigation. And since model-generated URLs could exfiltrate private information, we have a deterministic check to restrict them to known, public URLs. If a page in Chrome navigates on its own to a new origin, it’ll get vetted by the same critic.

Getting the balance right on the first iteration is hard without seeing how users’ tasks interact with these guardrails. We’ve initially implemented a simpler version of origin gating that tracks only the read-writable set. We will tune the gating functions and other aspects of this system to reduce unnecessary friction while improving security. We think this architecture will provide a powerful security primitive that can be audited and reasoned about within the client, as it provides guardrails against cross-origin sensitive data exfiltration and unwanted actions.

Transparency and control for sensitive actions

We designed the agentic capabilities in Chrome to give the user both transparency and control when they need it most. As the agent works in a tab, it details each step in a work log, allowing the user to observe the agent’s actions as they happen. The user can pause to take over or stop a task at any time.

This transparency is paired with several layers of deterministic and model-based checks to trigger user confirmations before the agent takes an impactful action. These serve as guardrails against both model mistakes and adversarial input by putting the user in the loop at key moments.

First, the agent will require a user confirmation before it navigates to certain sensitive sites, such as those dealing with banking transactions or personal medical information. This is based on a deterministic check against a list of sensitive sites. Second, it’ll confirm before allowing Chrome to sign in to a site via Google Password Manager – the model does not have direct access to stored passwords. Lastly, before any sensitive web actions like completing a purchase or payment, sending messages, or other consequential actions, the agent will try to pause and either get permission from the user before proceeding or ask the user to complete the next step. As with our other safety classifiers, we’re constantly working to improve accuracy to catch edge cases and grey areas.

Illustrative example: when the agent gets to a payment page, it stops and asks the user to complete the final step.

Detecting “social engineering” of agents

In addition to the structural defenses of alignment checks, origin gating, and confirmations, we have several processes to detect and respond to threats. While the agent is active, it checks every page it sees for indirect prompt injection. This is in addition to Chrome’s real-time scanning with Safe Browsing and on-device AI that detect more traditional scams. This prompt-injection classifier runs in parallel to the planning model’s inference, and will prevent actions from being taken based on content that the classifier determines has intentionally targeted the model to do something unaligned with the user’s goal. While it cannot flag everything that might influence the model with malicious intent, it is a valuable layer in our defense-in-depth.

Continuous auditing, monitoring, and response

To validate the security of this set of layered defenses, we’ve built automated red-teaming systems that generate malicious sandboxed sites which try to derail the agent in Chrome. We start with a set of diverse attacks crafted by security researchers and expand on them using LLMs, following a technique we adapted for browser agents. Our continuous testing prioritizes defenses against broad-reach vectors such as user-generated content on social media sites and content delivered via ads. We also prioritize attacks that could lead to lasting harm, such as financial transactions or the leaking of sensitive credentials. The attack success rates across these scenarios give immediate feedback on any engineering changes we make, so we can prevent regressions and target improvements. Chrome’s auto-update capabilities allow us to get fixes out to users very quickly, so we can stay ahead of attackers.

Collaborating across the community

We have a long-standing commitment to working with the broader security research community to advance security together, and this includes agentic safety. We’ve updated our Vulnerability Rewards Program (VRP) guidelines to clarify how external researchers can focus on agentic capabilities in Chrome. We want to hear about any serious vulnerabilities in this system, and will pay up to $20,000 for reports that demonstrate breaches of these security boundaries. The full details are available in the VRP rules.

Looking forward

The upcoming introduction of agentic capabilities in Chrome brings new demands for browser security, and we’ve approached this challenge with the same rigor that has defined Chrome’s security model from its inception. By extending some core principles like origin-isolation and layered defenses, and introducing a trusted-model architecture, we’re building a secure foundation for Gemini’s agentic experiences in Chrome. This is an evolving space, and while we’re proud of the initial protections we’ve implemented, we recognize that security for web agents is still an emerging domain. We remain committed to continuous innovation and collaboration with the security community to ensure Chrome users can explore this new era of the web safely.

Android uses the best of Google AI and our advanced security expertise to tackle mobile scams from every angle. Over the last few years, we’ve launched industry-leading features to detect scams and protect users across phone calls, text messages and messaging app chat notifications.

These efforts are making a real difference in the lives of Android users. According to a recent YouGov survey[1] commissioned by Google, Android users were 58% more likely than iOS users to report they had not received any scam texts in the prior week[2].

But our work doesn’t stop there. Scammers are continuously evolving, using more sophisticated social engineering tactics to trick users into sharing their phone screen during a call, visiting malicious websites, revealing sensitive information, sending funds, or downloading harmful apps. One popular scam involves criminals impersonating banks or other trusted institutions on the phone to manipulate victims into sharing their screen, revealing banking information, or making a financial transfer.

To help combat these types of financial scams, we launched a pilot earlier this year in the UK focused on in-call protections for financial apps.

How the in-call scam protection works on Android

When you launch a participating financial app while screen sharing and on a phone call with a number that is not saved in your contacts, your Android device[3] will automatically warn you about the potential dangers and give you the option to end the call and stop screen sharing with a single tap. The warning includes a 30-second pause before you can continue, which helps break the ‘spell’ of the scammer’s social engineering by disrupting the false sense of urgency and panic commonly used to manipulate victims.
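
Conceptually, the warning fires on the conjunction of those risk signals. The following is a simplified sketch of that trigger logic, not the actual Android implementation; all names are invented:

    # Simplified sketch of the in-call warning trigger; not Android's code.
    from dataclasses import dataclass

    PAUSE_SECONDS = 30  # cooling-off period before the user may continue

    @dataclass
    class CallState:
        in_call: bool
        caller_in_contacts: bool
        screen_sharing: bool

    def should_warn(state: CallState, launched_app_is_financial: bool) -> bool:
        # Warn only when every risk signal lines up: an active call with an
        # unknown number, screen sharing, and a participating financial app.
        return (state.in_call
                and not state.caller_in_contacts
                and state.screen_sharing
                and launched_app_is_financial)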

Bringing in-call scam protections to more users on Android

The UK pilot of Android’s in-call scam protections has already helped thousands of users end calls that could have cost them a significant amount of money. Following this success, and alongside recently launched pilots with financial apps in Brazil and India, we’ve now expanded this protection to most major UK banks.

We’ve also started to pilot this protection with more app types, including peer-to-peer (P2P) payment apps. Today, we’re taking the next step in our expansion by rolling out a pilot of this protection in the United States[4] with a number of popular fintechs, like Cash App, and banks, including JPMorganChase.

We are committed to collaborating across the ecosystem to help keep people safe from scams. We look forward to learning from these pilots and bringing these critical safeguards to even more users in the future.

Notes


  1. Google/YouGov survey, July-August, n=5,100 (1,700 each in the US, Brazil and India), with adults who use their smartphones daily and who have been exposed to a scam or fraud attempt on their smartphone. Survey data have been weighted to the adult smartphone population in each country.

  2. Among users who use the default texting app on their smartphone.  

  3. Compatible with Android 11+ devices 

  4. US users of the US versions of the apps; rollout begins Dec. 2025 
