Reading view

There are new articles available, click to refresh the page.

Android expands pilot for in-call scam protection for financial apps

Posted by Aden Haussmann, Associate Product Manager and Sumeet Sharma, Play Partnerships Trust & Safety Lead

Android uses the best of Google AI and our advanced security expertise to tackle mobile scams from every angle. Over the last few years, we’ve launched industry-leading features to detect scams and protect users across phone calls, text messages and messaging app chat notifications.

These efforts are making a real difference in the lives of Android users. According to a recent YouGov survey1 commissioned by Google, Android users were 58% more likely than iOS users to report they had not received any scam texts in the prior week2.

But our work doesn’t stop there. Scammers are continuously evolving, using more sophisticated social engineering tactics to trick users into sharing their phone screen while on the phone to visit malicious websites, reveal sensitive information, send funds or download harmful apps. One popular scam involves criminals impersonating banks or other trusted institutions on the phone to try to manipulate victims into sharing their screen in order to reveal banking information or make a financial transfer.

To help combat these types of financial scams, we launched a pilot earlier this year in the UK focused on in-call protections for financial apps.

How the in-call scam protection works on Android

When you launch a participating financial app while screen sharing and on a phone call with a number that is not saved in your contacts, your Android device3 will automatically warn you about the potential dangers and give you the option to end the call and to stop screen sharing with just one tap. The warning includes a 30-second pause period before you’re able to continue, which helps break the ‘spell’ of the scammer's social engineering, disrupting the false sense of urgency and panic commonly used to manipulate you into a scam.

Bringing in-call scam protections to more users on Android

The UK pilot of Android’s in-call scam protections has already helped thousands of users end calls that could have cost them a significant amount of money. Following this success, and alongside recently launched pilots with financial apps in Brazil and India, we’ve now expanded this protection to most major UK banks.

We’ve also started to pilot this protection with more app types, including peer-to-peer (P2P) payment apps. Today, we’re taking the next step in our expansion by rolling out a pilot of this protection in the United States4 with a number of popular fintechs like Cash App and banks, including JPMorganChase.

We are committed to collaborating across the ecosystem to help keep people safe from scams. We look forward to learning from these pilots and bringing these critical safeguards to even more users in the future.

Notes


  1. Google/YouGov survey, July-August, n=5,100 (1,700 each in the US, Brazil and India), with adults who use their smartphones daily and who have been exposed to a scam or fraud attempt on their smartphone. Survey data have been weighted to smartphone population adults in each country.  

  2. Among users who use the default texting app on their smartphone.  

  3. Compatible with Android 11+ devices 

  4. US users of the US versions of the apps; rollout begins Dec. 2025 

Android Quick Share Support for AirDrop: A Secure Approach to Cross-Platform File Sharing

Posted by Dave Kleidermacher, VP, Platforms Security & Privacy, Google

Technology should bring people closer together, not create walls. Being able to communicate and connect with friends and family should be easy regardless of the phone they use. That’s why Android has been building experiences that help you stay connected across platforms.

As part of our efforts to continue to make cross-platform communication more seamless for users, we've made Quick Share interoperable with AirDrop, allowing for two-way file sharing between Android and iOS devices, starting with the Pixel 10 Family. This new feature makes it possible to quickly share your photos, videos, and files with people you choose to communicate with, without worrying about the kind of phone they use.

Most importantly, when you share personal files and content, you need to trust that it stays secure. You can share across devices with confidence knowing we built this feature with security at its core, protecting your data with strong safeguards that have been tested by independent security experts.

Secure by Design

We built Quick Share’s interoperability support for AirDrop with the same rigorous security standards that we apply to all Google products. Our approach to security is proactive and deeply integrated into every stage of the development process. This includes:

  • Threat Modeling: We identify and address potential security risks before they can become a problem.
  • Internal Security Design and Privacy Reviews: Our dedicated security and privacy teams thoroughly review the design to ensure it meets our high standards.
  • Internal Penetration Testing: We conduct extensive in-house testing to identify and fix vulnerabilities.

This Secure by Design philosophy ensures that all of our products are not just functional but also fundamentally secure.

This feature is also protected by a multi-layered security approach to ensure a safe sharing experience from end-to-end, regardless of what platform you’re on.

  • Secure Sharing Channel: The communication channel itself is hardened by our use of Rust to develop this feature. This memory-safe language is the industry benchmark for building secure systems and provides confidence that the connection is protected against buffer overflow attacks and other common vulnerabilities.
  • Built-in Platform Protections: This feature is strengthened by the robust built-in security of both Android and iOS. On Android, security is built in at every layer. Our deep investment in Rust at the OS level hardens the foundation, while proactive defenses like Google Play Protect work to keep your device safe. This is complemented by the security architecture of iOS that provides its own strong safeguards that mitigate malicious files and exploitation. These overlapping protections on both platforms work in concert with the secure connection to provide comprehensive safety for your data when you share or receive.
  • You’re in Control: Sharing across platforms works just like you're used to: a file requires your approval before being received, so you're in control of what you accept.

The Power of Rust: A Foundation of Secure Communication

A key element of our security strategy for the interoperability layer between Quick Share and AirDrop is the use of the memory-safe Rust programming language. Recognized by security agencies around the world, including the NSA and CISA, Rust is widely considered the industry benchmark for building secure systems because it eliminates entire classes of memory-safety vulnerabilities by design.

Rust is already a cornerstone of our broader initiative to eliminate memory safety bugs across Android. Its selection for this feature was deliberate, driven by the unique security challenges of cross-platform communication that demanded the most robust protections for memory safety.

The core of this feature involves receiving and parsing data sent over a wireless protocol from another device. Historically, when using a memory-unsafe language, bugs in data parsing logic are one of the most common sources of high-severity security vulnerabilities. A malformed data packet sent to a parser written in a memory-unsafe language can lead to buffer overflows and other memory corruption bugs, creating an opportunity for code execution.

This is precisely where Rust provides a robust defense. Its compiler enforces strict ownership and borrowing rules at compile time, which guarantees memory safety. Rust removes entire classes of memory-related bugs. This means our implementation is inherently resilient against attackers attempting to use maliciously crafted data packets to exploit memory errors.

Secure Sharing Using AirDrop's "Everyone" Mode

To ensure a seamless experience for both Android and iOS users, Quick Share currently works with AirDrop's "Everyone for 10 minutes" mode. This feature does not use a workaround; the connection is direct and peer-to-peer, meaning your data is never routed through a server, shared content is never logged, and no extra data is shared. As with "Everyone for 10 minutes" mode on any device when you’re sharing between non-contacts, you can ensure you're sharing with the right person by confirming their device name on your screen with them in person.

This implementation using "Everyone for 10 minutes” mode is just the first step in seamless cross-platform sharing, and we welcome the opportunity to work with Apple to enable “Contacts Only” mode in the future.

Tested by Independent Security Experts

After conducting our own secure product development, internal threat modeling, privacy reviews, and red team penetration tests, we engaged with NetSPI, a leading third-party penetration testing firm, to further validate the security of this feature and conduct an independent security assessment. The assessment found the interoperability between Quick Share and AirDrop is secure, is “notably stronger” than other industry implementations and does not leak any information.

Based on these internal and external assessments, we believe our implementation provides a strong security foundation for cross-platform file sharing for both Android and iOS users. We will continue to evaluate and enhance the implementation’s security in collaboration with additional third-party partners.

To complement this deep technical audit, we also sought expert third-party perspective on our approach from Dan Boneh, a renowned security expert and professor at Stanford University:

“Google’s work on this feature, including the use of memory safe Rust for the core communications layer, is a strong example of how to build secure interoperability, ensuring that cross-platform information sharing remains safe. I applaud the effort to open more secure information sharing between platforms and encourage Google and Apple to work together more on this."

The Future of File-Sharing Should Be Interoperable

This is just the first step as we work to improve the experience and expand it to more devices. We look forward to continuing to work with industry partners to make connecting and communicating across platforms a secure, seamless experience for all users.

Rust in Android: move fast and fix things

Posted by Jeff Vander Stoep, Android

Last year, we wrote about why a memory safety strategy that focuses on vulnerability prevention in new code quickly yields durable and compounding gains. This year we look at how this approach isn’t just fixing things, but helping us move faster.

The 2025 data continues to validate the approach, with memory safety vulnerabilities falling below 20% of total vulnerabilities for the first time.

Updated data for 2025. This data covers first-party and third-party (open source) code changes to the Android platform across C, C++, Java, Kotlin, and Rust. This post is published a couple of months before the end of 2025, but Android’s industry-standard 90-day patch window means that these results are very likely close to final. We can and will accelerate patching when necessary.

We adopted Rust for its security and are seeing a 1000x reduction in memory safety vulnerability density compared to Android’s C and C++ code. But the biggest surprise was Rust's impact on software delivery. With Rust changes having a 4x lower rollback rate and spending 25% less time in code review, the safer path is now also the faster one.

In this post, we dig into the data behind this shift and also cover:

  • How we’re expanding our reach: We're pushing to make secure code the default across our entire software stack. We have updates on Rust adoption in first-party apps, the Linux kernel, and firmware.
  • Our first rust memory safety vulnerability...almost: We'll analyze a near-miss memory safety bug in unsafe Rust: how it happened, how it was mitigated, and steps we're taking to prevent recurrence. It’s also a good chance to answer the question “if Rust can have memory safety issues, why bother at all?”

Building Better Software, Faster

Developing an operating system requires the low-level control and predictability of systems programming languages like C, C++, and Rust. While Java and Kotlin are important for Android platform development, their role is complementary to the systems languages rather than interchangeable. We introduced Rust into Android as a direct alternative to C and C++, offering a similar level of control but without many of their risks. We focus this analysis on new and actively developed code because our data shows this to be an effective approach.

When we look at development in systems languages (excluding Java and Kotlin), two trends emerge: a steep rise in Rust usage and a slower but steady decline in new C++.

Net lines of code added: Rust vs. C++, first-party Android code.
This chart focuses on first-party (Google-developed) code (unlike the previous chart that included all first-party and third-party code in Android.) We only include systems languages, C/C++ (which is primarily C++), and Rust.

The chart shows that the volume of new Rust code now rivals that of C++, enabling reliable comparisons of software development process metrics. To measure this, we use the DORA1 framework, a decade-long research program that has become the industry standard for evaluating software engineering team performance. DORA metrics focus on:

  • Throughput: the velocity of delivering software changes.
  • Stability: the quality of those changes.

Cross-language comparisons can be challenging. We use several techniques to ensure the comparisons are reliable.

  • Similar sized changes: Rust and C++ have similar functionality density, though Rust is slightly denser. This difference favors C++, but the comparison is still valid. We use Gerrit’s change size definitions.
  • Similar developer pools: We only consider first-party changes from Android platform developers. Most are software engineers at Google, and there is considerable overlap between pools with many contributing in both.
  • Track trends over time: As Rust adoption increases, are metrics changing steadily, accelerating the pace, or reverting to the mean?

Throughput

Code review is a time-consuming and high-latency part of the development process. Reworking code is a primary source of these costly delays. Data shows that Rust code requires fewer revisions. This trend has been consistent since 2023. Rust changes of a similar size need about 20% fewer revisions than their C++ counterparts.

In addition, Rust changes currently spend about 25% less time in code review compared to C++. We speculate that the significant change in favor of Rust between 2023 and 2024 is due to increased Rust expertise on the Android team.

While less rework and faster code reviews offer modest productivity gains, the most significant improvements are in the stability and quality of the changes.

Stability

Stable and high-quality changes differentiate Rust. DORA uses rollback rate for evaluating change stability. Rust's rollback rate is very low and continues to decrease, even as its adoption in Android surpasses C++.

For medium and large changes, the rollback rate of Rust changes in Android is ~4x lower than C++. This low rollback rate doesn't just indicate stability; it actively improves overall development throughput. Rollbacks are highly disruptive to productivity, introducing organizational friction and mobilizing resources far beyond the developer who submitted the faulty change. Rollbacks necessitate rework and more code reviews, can also lead to build respins, postmortems, and blockage of other teams. Resulting postmortems often introduce new safeguards that add even more development overhead.

In a self-reported survey from 2022, Google software engineers reported that Rust is both easier to review and more likely to be correct. The hard data on rollback rates and review times validates those impressions.

Putting it all together

Historically, security improvements often came at a cost. More security meant more process, slower performance, or delayed features, forcing trade-offs between security and other product goals. The shift to Rust is different: we are significantly improving security and key development efficiency and product stability metrics.

Expanding Our Reach

With Rust support now mature for building Android system services and libraries, we are focused on bringing its security and productivity advantages elsewhere.

  • Kernel: Android’s 6.12 Linux kernel is our first kernel with Rust support enabled and our first production Rust driver. More exciting projects are underway, such as our ongoing collaboration with Arm and Collabora on a Rust-based kernel-mode GPU driver.
  • Firmware: The combination of high privilege, performance constraints, and limited applicability of many security measures makes firmware both high-risk, and challenging to secure. Moving firmware to Rust can yield a major improvement in security. We have been deploying Rust in firmware for years now, and even released tutorials, training, and code for the wider community. We’re particularly excited about our collaboration with Arm on Rusted Firmware-A.
  • First-party applications: Rust is ensuring memory safety from the ground up in several security-critical Google applications, such as:
    • Nearby Presence: The protocol for securely and privately discovering local devices over Bluetooth is implemented in Rust and is currently running in Google Play Services.
    • MLS: The protocol for secure RCS messaging is implemented in Rust and will be included in the Google Messages app in a future release.
    • Chromium: Parsers for PNG, JSON, and web fonts have been replaced with memory-safe implementations in Rust, making it easier for Chromium engineers to deal with data from the web while following the Rule of 2.


These examples highlight Rust's role in reducing security risks, but memory-safe languages are only one part of a comprehensive memory safety strategy. We continue to employ a defense-in-depth approach, the value of which was clearly demonstrated in a recent near-miss.

Our First Rust Memory Safety Vulnerability...Almost

We recently avoided shipping our very first Rust-based memory safety vulnerability: a linear buffer overflow in CrabbyAVIF. It was a near-miss. To ensure the patch received high priority and was tracked through release channels, we assigned it the identifier CVE-2025-48530. While it’s great that the vulnerability never made it into a public release, the near-miss offers valuable lessons. The following sections highlight key takeaways from our postmortem.

Scudo Hardened Allocator for the Win

A key finding is that Android’s Scudo hardened allocator deterministically rendered this vulnerability non-exploitable due to guard pages surrounding secondary allocations. While Scudo is Android’s default allocator, used on Google Pixel and many other devices, we continue to work with partners to make it mandatory. In the meantime, we will issue CVEs of sufficient severity for vulnerabilities that could be prevented by Scudo.

In addition to protecting against overflows, Scudo’s use of guard pages helped identify this issue by changing an overflow from silent memory corruption into a noisy crash. However, we did discover a gap in our crash reporting: it failed to clearly show that the crash was a result of an overflow, which slowed down triage and response. This has been fixed, and we now have a clear signal when overflows occur into Scudo guard pages.

Unsafe Review and Training

Operating system development requires unsafe code, typically C, C++, or unsafe Rust (for example, for FFI and interacting with hardware), so simply banning unsafe code is not workable. When developers must use unsafe, they should understand how to do so soundly and responsibly

To that end, we are adding a new deep dive on unsafe code to our Comprehensive Rust training. This new module, currently in development, aims to teach developers how to reason about unsafe Rust code, soundness and undefined behavior, as well as best practices like safety comments and encapsulating unsafe code in safe abstractions.

Better understanding of unsafe Rust will lead to even higher quality and more secure code across the open source software ecosystem and within Android. As we'll discuss in the next section, our unsafe Rust is already really quite safe. It’s exciting to consider just how high the bar can go.

Comparing Vulnerability Densities

This near-miss inevitably raises the question: "If Rust can have memory safety vulnerabilities, then what’s the point?"

The point is that the density is drastically lower. So much lower that it represents a major shift in security posture. Based on our near-miss, we can make a conservative estimate. With roughly 5 million lines of Rust in the Android platform and one potential memory safety vulnerability found (and fixed pre-release), our estimated vulnerability density for Rust is 0.2 vuln per 1 million lines (MLOC).

Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.

Memory safety rightfully receives significant focus because the vulnerability class is uniquely powerful and (historically) highly prevalent. High vulnerability density undermines otherwise solid security design because these flaws can be chained to bypass defenses, including those specifically targeting memory safety exploits. Significantly lowering vulnerability density does not just reduce the number of bugs; it dramatically boosts the effectiveness of our entire security architecture.

The primary security concern regarding Rust generally centers on the approximately 4% of code written within unsafe{} blocks. This subset of Rust has fueled significant speculation, misconceptions, and even theories that unsafe Rust might be more buggy than C. Empirical evidence shows this to be quite wrong.

Our data indicates that even a more conservative assumption, that a line of unsafe Rust is as likely to have a bug as a line of C or C++, significantly overestimates the risk of unsafe Rust. We don’t know for sure why this is the case, but there are likely several contributing factors:

  • unsafe{} doesn't actually disable all or even most of Rust’s safety checks (a common misconception).
  • The practice of encapsulation enables local reasoning about safety invariants.
  • The additional scrutiny that unsafe{} blocks receive.

Final Thoughts

Historically, we had to accept a trade-off: mitigating the risks of memory safety defects required substantial investments in static analysis, runtime mitigations, sandboxing, and reactive patching. This approach attempted to move fast and then pick up the pieces afterwards. These layered protections were essential, but they came at a high cost to performance and developer productivity, while still providing insufficient assurance.

While C and C++ will persist, and both software and hardware safety mechanisms remain critical for layered defense, the transition to Rust is a different approach where the more secure path is also demonstrably more efficient. Instead of moving fast and then later fixing the mess, we can move faster while fixing things. And who knows, as our code gets increasingly safe, perhaps we can start to reclaim even more of that performance and productivity that we exchanged for security, all while also improving security.

Acknowledgments

Thank you to the following individuals for their contributions to this post:

  • Ivan Lozano for compiling the detailed postmortem on CVE-2025-48530.
  • Chris Ferris for validating the postmortem’s findings and improving Scudo’s crash handling as a result.
  • Dmytro Hrybenko for leading the effort to develop training for unsafe Rust and for providing extensive feedback on this post.
  • Alex Rebert and Lars Bergstrom for their valuable suggestions and extensive feedback on this post.
  • Peter Slatala, Matthew Riley, and Marshall Pierce for providing information on some of the places where Rust is being used in Google's apps.

Finally, a tremendous thank you to the Android Rust team, and the entire Android organization for your relentless commitment to engineering excellence and continuous improvement.

Notes


  1. The DevOps Research and Assessment (DORA) program is published by Google Cloud. 

How Android provides the most effective protection to keep you safe from mobile scams

Posted by Lyubov Farafonova, Product Manager, Phone by Google; Alberto Pastor Nieto, Sr. Product Manager Google Messages and RCS Spam and Abuse; Vijay Pareek, Manager, Android Messaging Trust and Safety
As Cybersecurity Awareness Month wraps up, we’re focusing on one of today's most pervasive digital threats: mobile scams. In the last 12 months, fraudsters have used advanced AI tools to create more convincing schemes, resulting in over $400 billion in stolen funds globally.¹

For years, Android has been on the frontlines in the battle against scammers, using the best of Google AI to build proactive, multi-layered protections that can anticipate and block scams before they reach you. Android’s scam defenses protect users around the world from over 10 billion suspected malicious calls and messages every month2. In addition, Google continuously performs safety checks to maintain the integrity of the RCS service. In the past month alone, this ongoing process blocked over 100 million suspicious numbers from using RCS, stopping potential scams before they could even be sent.

To show how our scam protections work in the real world, we asked users and independent security experts to compare how well Android and iOS protect you from these threats. We're also releasing a new report that explains how modern text scams are orchestrated, helping you understand the tactics fraudsters use and how to spot them.

Survey shows Android users’ confidence in scam protections

Google and YouGov3 surveyed 5,000 smartphone users across the U.S., India, and Brazil about their scam experiences. The findings were clear: Android users reported receiving fewer scam texts and felt more confident that their device was keeping them safe.

  • Android users were 58% more likely than iOS users to say they had not received any scam texts in the week prior to the survey. The advantage was even stronger on Pixel, where users were 96% more likely than iPhone owners to report zero scam texts4.
  • At the other end of the spectrum, iOS users were 65% more likely than Android users to report receiving three or more scam texts in a week. The difference became even more pronounced when comparing iPhone to Pixel, with iPhone users 136% more likely to say they had received a heavy volume of scam messages4.
  • Android users were 20% more likely than iOS users to describe their device’s scam protections as “very effective” or “extremely effective.” When comparing Pixel to iPhone, iPhone users were 150% more likely to say their device was not effective at all in stopping mobile fraud.

YouGov study findings on users’ experience with scams on Android and iOS

Security researchers and analysts highlight Android’s AI-driven safeguards against sophisticated scams

In a recent evaluation by Counterpoint Research5, a global technology market research firm, Android smartphones were found to have the most AI-powered protections. The independent study compared the latest Pixel, Samsung, Motorola, and iPhone devices, and found that Android provides comprehensive AI-driven safeguards across ten key protection areas, including email protections, browsing protections, and on-device behavioral protections. By contrast, iOS offered AI-powered protections in only two categories. You can see the full comparison in the visual below.

Counterpoint Research comparison of Android and iOS AI-powered protections

Cybersecurity firm Leviathan Security Group conducted a funded evaluation6 of scam and fraud protection on the iPhone 17, Moto Razr+ 2025, Pixel 10 Pro, and Samsung Galaxy Z Fold 7. Their analysis found that Android smartphones, led by the Pixel 10 Pro, provide the highest level of default scam and fraud protection.The report particularly noted Android's robust call screening, scam detection, and real-time scam warning authentication capabilities as key differentiators. Taken together, these independent expert assessments conclude that Android’s AI-driven safeguards provide more comprehensive and intelligent protection against mobile scams.

Leviathan Security Group comparison of scam protections across various devices

Why Android users see fewer scams

Android’s proactive protections work across the platform to help you stay ahead of threats with the best of Google AI.

Here’s how they work:

  • Keeping your messages safe: Google Messages automatically filters known spam by analyzing sender reputation and message content, moving suspicious texts directly to your "spam & blocked" folder to keep them out of sight. For more complex threats, Scam Detection uses on-device AI to analyze messages from unknown senders for patterns of conversational scams (like pig butchering) and provide real-time warnings6. This helps secure your privacy while providing a robust shield against text scams. As an extra safeguard, Google Messages also helps block suspicious links in messages that are determined to be spam or scams.
  • Combatting phone call scams: Phone by Google automatically blocks known spam calls so your phone never even rings, while Call Screen5 can answer the call on your behalf to identify fraudsters. If you answer, the protection continues with Scam Detection, which uses on-device AI to provide real-time warnings for suspicious conversational patterns6. This processing is completely ephemeral, meaning no call content is ever saved or leaves your device. Android also helps stop social engineering during the call itself by blocking high-risk actions7 like installing untrusted apps or disabling security settings, and warns you if your screen is being shared unknowingly.

These safeguards are built directly into the core of Android, alongside other features like real-time app scanning in Google Play Protect and enhanced Safe Browsing in Chrome using LLMs. With Android, you can trust that you have intelligent, multi-layered protection against scams working for you.

Android is always evolving to keep you one step ahead of scams

In a world of evolving digital threats, you deserve to feel confident that your phone is keeping you safe. That’s why we use the best of Google AI to build intelligent protections that are always improving and work for you around the clock, so you can connect, browse, and communicate with peace of mind.

See these protections in action in our new infographic and learn more about phone call scams in our 2025 Phone by Google Scam Report.


1: Data from Global Anti-Scam Alliance, October 2025

2: This total comprises all instances where a message or call was proactively blocked or where a user was alerted to potential spam or scam activity.

3: Google/YouGov survey, July-August, n=5,100 (1,700 each in the US, Brazil and India), with adults who use their smartphones daily and who have been exposed to a scam or fraud attempt on their smartphone. Survey data have been weighted to smartphone population adults in each country.

4: Among users who use the default texting app on their smartphone

5: Google/Counterpoint Research, “Assessing the State of AI-Powered Mobile Security”, Oct. 2025; based on comparing the Pixel 10 Pro, iPhone 17 Pro, Samsung Galaxy S25 Ultra, OnePlus 13, Motorola Razr+ 2025. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries.

6: Google/Leviathan Security Group, “October 2025 Mobile Platform Security & Fraud Prevention Assessment”, Oct. 2025; based on comparing the Pixel 10 Pro, iPhone 17 Pro, Samsung Galaxy Z Fold 7 and Motorola Razr+ 2025. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries.

7: Accuracy may vary. Availability varies.

HTTPS by default

By: Google

One year from now, with the release of Chrome 154 in October 2026, we will change the default settings of Chrome to enable “Always Use Secure Connections”. This means Chrome will ask for the user's permission before the first access to any public site without HTTPS.

The “Always Use Secure Connections” setting warns users before accessing a site without HTTPS

Chrome Security's mission is to make it safe to click on links. Part of being safe means ensuring that when a user types a URL or clicks on a link, the browser ends up where the user intended. When links don't use HTTPS, an attacker can hijack the navigation and force Chrome users to load arbitrary, attacker-controlled resources, and expose the user to malware, targeted exploitation, or social engineering attacks. Attacks like this are not hypothetical—software to hijack navigations is readily available and attackers have previously used insecure HTTP to compromise user devices in a targeted attack.

Since attackers only need a single insecure navigation, they don't need to worry that many sites have adopted HTTPS—any single HTTP navigation may offer a foothold. What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place.

To address this risk, we launched the “Always Use Secure Connections” setting in 2022 as an opt-in option. In this mode, Chrome attempts every connection over HTTPS, and shows a bypassable warning to the user if HTTPS is unavailable. We also previously discussed our intent to move towards HTTPS by default. We now think the time has come to enable “Always Use Secure Connections” for all users by default.

Now is the time.

For more than a decade, Google has published the HTTPS transparency report, which tracks the percentage of navigations in Chrome that use HTTPS. For the first several years of the report, numbers saw an impressive climb, starting at around 30-45% in 2015, and ending up around the 95-99% range around 2020. Since then, progress has largely plateaued.

HTTPS adoption expressed as a percentage of main frame page loads

This rise represents a tremendous improvement to the security of the web, and demonstrates that HTTPS is now mature and widespread. This level of adoption is what makes it possible to consider stronger mitigations against the remaining insecure HTTP.

Balancing user safety with friction

While it may at first seem that 95% HTTPS means that the problem is mostly solved, the truth is that a few percentage points of HTTP navigations is still a lot of navigations. Since HTTP navigations remain a regular occurrence for most Chrome users, a naive approach to warning on all HTTP navigations would be quite disruptive. At the same time, as the plateau demonstrates, doing nothing would allow this risk to persist indefinitely. To balance these risks, we have taken steps to ensure that we can help the web move towards safer defaults, while limiting the potential annoyance warnings will cause to users.

One way we're balancing risks to users is by making sure Chrome does not warn about the same sites excessively. In all variants of the "Always Use Secure Connections" settings, so long as the user regularly visits an insecure site, Chrome will not warn the user about that site repeatedly. This means that rather than warn users about 1 out of 50 navigations, Chrome will only warn users when they visit a new (or not recently visited) site without using HTTPS.

To further address the issue, it's important to understand what sort of traffic is still using HTTP. The largest contributor to insecure HTTP by far, and the largest contributor to variation across platforms, is insecure navigations to private sites. The graph above includes both those to public sites, such as example.com, and navigations to private sites, such as local IP addresses like 192.168.0.1, single-label hostnames, and shortlinks like intranet/. While it is free and easy to get an HTTPS certificate that is trusted by Chrome for a public site, acquiring an HTTPS certificate for a private site unfortunately remains complicated. This is because private names are "non-unique"—private names can refer to different hosts on different networks. There is no single owner of 192.168.0.1 for a certification authority to validate and issue a certificate to.

HTTP navigations to private sites can still be risky, but are typically less dangerous than their public site counterparts because there are fewer ways for an attacker to take advantage of these HTTP navigations. HTTP on private sites can only be abused by an attacker also on your local network, like on your home wifi or in a corporate network.

If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only. Windows increases from 95% to 98% HTTPS, and both Android and Mac increase to over 99% HTTPS.

In recognition of the reduced risk HTTP to private sites represents, last year we introduced a variant of “Always Use Secure Connections” for public sites only. For users who frequently access private sites (such as those in enterprise settings, or web developers), excluding warnings on private sites significantly reduces the volume of warnings those users will see. Simultaneously, for users who do not access private sites frequently, this mode introduces only a small reduction in protection. This is the variant we intend to enable for all users next year.

“Always Use Secure Connections,” available at chrome://settings/security

In Chrome 141, we experimented with enabling “Always Use Secure Connections” for public sites by default for a small percentage of users. We wanted to validate our expectations that this setting keeps users safer without burdening them with excessive warnings.

Analyzing the data from the experiment, we confirmed that the number of warnings seen by any users is considerably lower than 3% of navigations—in fact, the median user sees fewer than one warning per week, and the ninety-fifth percentile user sees fewer than three warnings per week..

Understanding HTTP usage

Once “Always Use Secure Connections” is the default and additional sites migrate away from HTTP, we expect the actual warning volume to be even lower than it is now. In parallel to our experiments, we have reached out to a number of companies responsible for the most HTTP navigations, and expect that they will be able to migrate away from HTTP before the change in Chrome 154. For many of these organizations, transitioning to HTTPS isn't disproportionately hard, but simply has not received attention. For example, many of these sites use HTTP only for navigations that immediately redirect to HTTPS sites—an insecure interaction which was previously completely invisible to users.

Another current use case for HTTP is to avoid mixed content blocking when accessing devices on the local network. Private addresses, as discussed above, often do not have trusted HTTPS certificates, due to the difficulties of validating ownership of a non-unique name. This means most local network traffic is over HTTP, and cannot be initiated from an HTTPS page—the HTTP traffic counts as insecure mixed content, and is blocked. One common use case for needing to access the local network is to configure a local network device, e.g. the manufacturer might host a configuration portal at config.example.com, which then sends requests to a local device to configure it.

Previously, these types of pages needed to be hosted without HTTPS to avoid mixed content blocking. However, we recently introduced a local network access permission, which both prevents sites from accessing the user’s local network without consent, but also allows an HTTPS site to bypass mixed content checks for the local network once the permission has been granted. This can unblock migrating these domains to HTTPS.

Changes in Chrome

We will enable the "Always Use Secure Connections" setting in its public-sites variant by default in October 2026, with the release of Chrome 154. Prior to enabling it by default for all users, in Chrome 147, releasing in April 2026, we will enable Always Use Secure Connections in its public-sites variant for the over 1 billion users who have opted-in to Enhanced Safe Browsing protections in Chrome.

While it is our hope and expectation that this transition will be relatively painless for most users, users will still be able to disable the warnings by disabling the "Always Use Secure Connections" setting.

If you are a website developer or IT professional, and you have users who may be impacted by this feature, we very strongly recommend enabling the "Always Use Secure Connections" setting today to help identify sites that you may need to work to migrate. IT professionals may find it useful to read our additional resources to better understand the circumstances where warnings will be shown, how to mitigate them, and how organizations that manage Chrome clients (like enterprises or educational institutions) can ensure that Chrome shows the right warnings to meet those organizations' needs.

Looking Forward

While we believe that warning on insecure public sites represents a significant step forward for the security of the web, there is still more work to be done. In the future, we hope to work to further reduce barriers to adoption of HTTPS, especially for local network sites. This work will hopefully enable even more robust HTTP protections down the road.

Posted by Chris Thompson, Mustafa Emre Acer, Serena Chen, Joe DeBlasio, Emily Stark and David Adrian, Chrome Security Team

Accelerating adoption of AI for cybersecurity at DEF CON 33

Posted by Elie Bursztein and Marianna Tishchenko, Google Privacy, Safety and Security Team

Empowering cyber defenders with AI is critical to tilting the cybersecurity balance back in their favor as they battle cybercriminals and keep users safe. To help accelerate adoption of AI for cybersecurity workflows, we partnered with Airbus at DEF CON 33 to host the GenSec Capture the Flag (CTF), dedicated to human-AI collaboration in cybersecurity. Our goal was to create a fun, interactive environment, where participants across various skill levels could explore how AI can accelerate their daily cybersecurity workflows.



At GenSec CTF, nearly 500 participants successfully completed introductory challenges, with 23% of participants using AI for cybersecurity for the very first time. An overwhelming 85% of all participants found the event useful for learning how AI can be applied to security workflows. This positive feedback highlights that AI-centric CTFs can play a vital role in speeding up AI education and adoption in the security community.


The CTF also offered a valuable opportunity for the community to use Sec-Gemini, Google’s experimental Cybersecurity AI, as an optional assistant available in the UI alongside major LLMs. And we received great feedback on Sec-Gemini, with 77% of respondents saying that they had found Sec-Gemini either “very helpful” or “extremely helpful” in assisting them with solving the challenges.  


We want to thank the DEF CON community for the enthusiastic participation and for making this inaugural event a resounding success. The community feedback during the event has been invaluable for understanding how to improve Sec-Gemini, and we are already incorporating some of the lessons learned into the next iteration. 


We are committed to advancing the AI cybersecurity frontier and will continue working with the community to build tools that help protect people online. Stay tuned as we plan to share more research and key learnings from the CTF with the broader community.



Supporting Rowhammer research to protect the DRAM ecosystem

Posted by Daniel Moghimi

Rowhammer is a complex class of vulnerabilities across the industry. It is a hardware vulnerability in DRAM where repeatedly accessing a row of memory can cause bit flips in adjacent rows, leading to data corruption. This can be exploited by attackers to gain unauthorized access to data, escalate privileges, or cause denial of service. Hardware vendors have deployed various mitigations, such as ECC and Target Row Refresh (TRR) for DDR5 memory, to mitigate Rowhammer and enhance DRAM reliability. However, the resilience of those mitigations against sophisticated attackers remains an open question.

To address this gap and help the ecosystem with deploying robust defenses, Google has supported academic research and developed test platforms to analyze DDR5 memory. Our effort has led to the discovery of new attacks and a deeper understanding of Rowhammer on the current DRAM modules, helping to forge the way for further, stronger mitigations.

What is Rowhammer? 

Rowhammer exploits a vulnerability in DRAM. DRAM cells store data as electrical charges, but these electric charges leak over time, causing data corruption. To prevent data loss, the memory controller periodically refreshes the cells. However, if a cell discharges before the refresh cycle, its stored bit may corrupt. Initially considered a reliability issue, it has been leveraged by security researchers to demonstrate privilege escalation attacks. By repeatedly accessing a memory row, an attacker can cause bit flips in neighboring rows. An adversary can exploit Rowhammer via:

  1. Reliably cause bit flips by repeatedly accessing adjacent DRAM rows.

  2. Coerce other applications or the OS into using these vulnerable memory pages.

  3. Target security-sensitive code or data to achieve privilege escalation.

  4. Or simply corrupt system’s memory to cause denial of service

Previous work has repeatedly demonstrated the possibility of such attacks from software [Revisiting rowhammer, Are we susceptible to rowhammer?, DrammerFlip feng shui, Jolt]. As a result, defending against Rowhammer is required for secure isolation in multi-tenant environments like the cloud. 

Rowhammer Mitigations 

The primary approach to mitigate Rowhammer is to detect which memory rows are being aggressively accessed and refreshing nearby rows before a bit flip occurs. TRR is a common example, which uses a number of counters to track accesses to a small number of rows adjacent to a potential victim row. If the access count for these aggressor rows reaches a certain threshold, the system issues a refresh to the victim row. TRR can be incorporated within the DRAM or in the host CPU.

However, this mitigation is not foolproof. For example, the TRRespass attack showed that by simultaneously hammering multiple, non-adjacent rows, TRR can be bypassed. Over the past couple of years, more sophisticated attacks [Half-Double, Blacksmith] have emerged, introducing more efficient attack patterns. 

In response, one of our efforts was to collaborate with JEDEC, external researchers, and experts to define the PRAC as a new mitigation that deterministically detects Rowhammer by tracking all memory rows. 

However, current systems equipped with DDR5 lack support for PRAC or other robust mitigations. As a result, they rely on probabilistic approaches such as ECC and enhanced TRR to reduce the risk. While these measures have mitigated older attacks, their overall effectiveness against new techniques was not fully understood until our recent findings.

Challenges with Rowhammer Assessment 

Mitigating Rowhammer attacks involves making it difficult for an attacker to reliably cause bit flips from software. Therefore, for an effective mitigation, we have to understand how a determined adversary introduces memory accesses that bypass existing mitigations. Three key information components can help with such an analysis:

  1. How the improved TRR and in-DRAM ECC work.

  2. How memory access patterns from software translate into low-level DDR commands.

  3. (Optionally) How any mitigations (e.g., ECC or TRR) in the host processor work.

The first step is particularly challenging and involves reverse-engineering the proprietary in-DRAM TRR mechanism, which varies significantly between different manufacturers and device models. This process requires the ability to issue precise DDR commands to DRAM and analyze its responses, which is difficult on an off-the-shelf system. Therefore, specialized test platforms are essential.

The second and third steps involve analyzing the DDR traffic between the host processor and the DRAM. This can be done using an off-the-shelf interposer, a tool that sits between the processor and DRAM. A crucial part of this analysis is understanding how a live system translates software-level memory accesses into the DDR protocol.

The third step, which involves analyzing host-side mitigations, is sometimes optional. For example, host-side ECC (Error Correcting Code) is enabled by default on servers, while host-side TRR has only been implemented in some CPUs. 

Rowhammer testing platforms

For the first challenge, we partnered with Antmicro to develop two specialized, open-source FPGA-based Rowhammer test platforms. These platforms allow us to conduct in-depth testing on different types of DDR5 modules.

  • DDR5 RDIMM Platform: A new DDR5 Tester board to meet the hardware requirements of Registered DIMM (RDIMM) memory, common in server computers.

  • SO-DIMM Platform: A version that supports the standard SO-DIMM pinout compatible with off-the-shelf DDR5 SO-DIMM memory sticks, common in workstations and end-user devices.

Antmicro designed and manufactured these open-source platforms and we worked closely with them, and researchers from ETH Zurich, to test the applicability of these platforms for analyzing off-the-shelf memory modules in RDIMM and SO-DIMM forms.


Antmicro DDR5 RDIMM FPGA test platform in action.

Phoenix Attacks on DDR5

In collaboration with researchers from ETH, we applied the new Rowhammer test platforms to evaluate the effectiveness of current in-DRAM DDR5 mitigations. Our findings, detailed in the recently co-authored "Phoenix” research paper, reveal that we successfully developed custom attack patterns capable of bypassing enhanced TRR (Target Row Refresh) defense on DDR5 memory. We were able to create a novel self-correcting refresh synchronization attack technique, which allowed us to perform the first-ever Rowhammer privilege escalation exploit on a standard, production-grade desktop system equipped with DDR5 memory. While this experiment was conducted on an off-the-shelf workstation equipped with recent AMD Zen processors and SK Hynix DDR5 memory, we continue to investigate the applicability of our findings to other hardware configurations.

Lessons learned 

We showed that current mitigations for Rowhammer attacks are not sufficient, and the issue remains a widespread problem across the industry. They do make it more difficult “but not impossible” to carry out attacks, since an attacker needs an in-depth understanding of the specific memory subsystem architecture they wish to target.


Current mitigations based on TRR and ECC rely on probabilistic countermeasures that have insufficient entropy. Once an analyst understands how TRR operates, they can craft specific memory access patterns to bypass it. Furthermore, current ECC schemes were not designed as a security measure and are therefore incapable of reliably detecting errors.


Memory encryption is an alternative countermeasure for Rowhammer. However, our current assessment is that without cryptographic integrity, it offers no valuable defense against Rowhammer. More research is needed to develop viable, practical encryption and integrity solutions.

Path forward

Google has been a leader in JEDEC standardization efforts, for instance with PRAC, a fully approved standard to be supported in upcoming versions of DDR5/LPDDR6. It works by accurately counting the number of times a DRAM wordline is activated and alerts the system if an excessive number of activations is detected. This close coordination between the DRAM and the system gives PRAC a reliable way to address Rowhammer. 


In the meantime, we continue to evaluate and improve other countermeasures to ensure our workloads are resilient against Rowhammer. We collaborate with our academic and industry partners to improve analysis techniques and test platforms, and to share our findings with the broader ecosystem.

Want to learn more?

“Phoenix: Rowhammer Attacks on DDR5 with Self-Correcting Synchronization” will be presented at IEEE Security & Privacy 2026 in San Francisco, CA (MAY 18-21, 2026).

How Pixel and Android are bringing a new level of trust to your images with C2PA Content Credentials

Posted by Eric Lynch, Senior Product Manager, Android Security, and Sherif Hanna, Group Product Manager, Google C2PA Core

At Made by Google 2025, we announced that the new Google Pixel 10 phones will support C2PA Content Credentials in Pixel Camera and Google Photos. This announcement represents a series of steps towards greater digital media transparency:

  • The Pixel 10 lineup is the first to have Content Credentials built in across every photo created by Pixel Camera.
  • The Pixel Camera app achieved Assurance Level 2, the highest security rating currently defined by the C2PA Conformance Program. Assurance Level 2 for a mobile app is currently only possible on the Android platform.
  • A private-by-design approach to C2PA certificate management, where no image or group of images can be related to one another or the person who created them.
  • Pixel 10 phones support on-device trusted time-stamps, which ensures images captured with your native camera app can be trusted after the certificate expires, even if they were captured when your device was offline.

These capabilities are powered by Google Tensor G5, Titan M2 security chip, the advanced hardware-backed security features of the Android platform, and Pixel engineering expertise.

In this post, we’ll break down our architectural blueprint for bringing a new level of trust to digital media, and how developers can apply this model to their own apps on Android.

A New Approach to Content Credentials

Generative AI can help us all to be more creative, productive, and innovative. But it can be hard to tell the difference between content that’s been AI-generated, and content created without AI. The ability to verify the source and history—or provenance—of digital content is more important than ever.

Content Credentials convey a rich set of information about how media such as images, videos, or audio files were made, protected by the same digital signature technology that has secured online transactions and mobile apps for decades. It empowers users to identify AI-generated (or altered) content, helping to foster transparency and trust in generative AI. It can be complemented by watermarking technologies such as SynthID.

Content Credentials are an industry standard backed by a broad coalition of leading companies for securely conveying the origin and history of media files. The standard is developed by the Coalition for Content Provenance and Authenticity (C2PA), of which Google is a steering committee member.

The traditional approach to classifying digital image content has focused on categorizing content as “AI” vs. “not AI”. This has been the basis for many legislative efforts, which have required the labeling of synthetic media. This traditional approach has drawbacks, as described in Chapter 5 of this seminal report by Google. Research shows that if only synthetic content is labeled as “AI”, then users falsely believe unlabeled content is “not AI”, a phenomenon called “the implied truth effect”. This is why Google is taking a different approach to applying C2PA Content Credentials.

Instead of categorizing digital content into a simplistic “AI” vs. “not AI”, Pixel 10 takes the first steps toward implementing our vision of categorizing digital content as either i) media that comes with verifiable proof of how it was made or ii) media that doesn't.

  • Pixel Camera attaches Content Credentials to any JPEG photo capture, with the appropriate description as defined by the Content Credentials specification for each capture mode.
  • Google Photos attaches Content Credentials to JPEG images that already have Content Credentials and are edited using AI or non-AI tools, and also to any images that are edited using AI tools. It will validate and display Content Credentials under a new section in the About panel, if the JPEG image being viewed contains this data. Learn more about it in Google Photos Help.

Given the broad range of scenarios in which Content Credentials are attached by these apps, we designed our C2PA implementation architecture from the onset to be:

  1. Secure from silicon to applications
  2. Verifiable, not personally identifiable
  3. Useable offline

Secure from Silicon to Applications

Good actors in the C2PA ecosystem are motivated to ensure that provenance data is trustworthy. C2PA Certification Authorities (CAs), such as Google, are incentivized to only issue certificates to genuine instances of apps from trusted developers in order to prevent bad actors from undermining the system. Similarly, app developers want to protect their C2PA claim signing keys from unauthorized use. And of course, users want assurance that the media files they rely on come from where they claim. For these reasons, the C2PA defined the Conformance Program.

The Pixel Camera application on the Pixel 10 lineup has achieved Assurance Level 2, the highest security rating currently defined by the C2PA Conformance Program. This was made possible by a strong set of hardware-backed technologies, including Tensor G5 and the certified Titan M2 security chip, along with Android’s hardware-backed security APIs. Only mobile apps running on devices that have the necessary silicon features and Android APIs can be designed to achieve this assurance level. We are working with C2PA to help define future assurance levels that will push protections even deeper into hardware.

Achieving Assurance Level 2 requires verifiable, difficult-to-forge evidence. Google has built an end-to-end system on Pixel 10 devices that verifies several key attributes. However, the security of any claim is fundamentally dependent on the integrity of the application and the OS, an integrity that relies on both being kept current with the latest security patches.

  • Hardware Trust: Android Key Attestation in Pixel 10 is built on support for Device Identifier Composition Engine (DICE) by Tensor, and Remote Key Provisioning (RKP) to establish a trust chain from the moment the device starts up to the OS, stamping out the most common forms of abuse on Android.
  • Genuine Device and Software: Aided by the hardware trust described above, Android Key Attestation allows Google C2PA Certification Authorities (CAs) to verify that they are communicating with a genuine physical device. It also allows them to verify the device has booted securely into a Play Protect Certified version of Android, and verify how recently the operating system, bootloader, and system software and firmware were patched for security vulnerabilities.
  • Genuine Application: Hardware-backed Android Key Attestation certificates include the package name and signing certificates associated with the app that requested the generation of the C2PA signing key, allowing Google C2PA CAs to check that the app requesting C2PA claim signing certificates is a trusted, registered app.
  • Tamper-Resistant Key Storage: On Pixel, C2PA claim signing keys are generated and stored using Android StrongBox in the Titan M2 security chip. Titan M2 is Common Criteria PP.0084 AVA_VAN.5 certified, meaning that it is strongly resistant to extracting or tampering with the cryptographic keys stored in it. Android Key Attestation allows Google C2PA CAs to verify that private keys were indeed created inside this hardware-protected vault before issuing certificates for their public key counterparts.

The C2PA Conformance Program requires verifiable artifacts backed by a hardware Root of Trust, which Android provides through features like Key Attestation. This means Android developers can leverage these same tools to build apps that meet this standard for their users.

Privacy Built on a Foundation of Trust: Verifiable, Not Personally Identifiable

The robust security stack we described is the foundation of privacy. But Google takes steps further to ensure your privacy even as you use Content Credentials, which required solving two additional challenges:

Challenge 1: Server-side Processing of Certificate Requests. Google’s C2PA Certification Authorities must certify new cryptographic keys generated on-device. To prevent fraud, these certificate enrollment requests need to be authenticated. A more common approach would require user accounts for authentication, but this would create a server-side record linking a user's identity to their C2PA certificates—a privacy trade-off we were unwilling to make.

Our Solution: Anonymous, Hardware-Backed Attestation. We solve this with Android Key Attestation, which allows Google CAs to verify what is being used (a genuine app on a secure device) without ever knowing who is using it (the user). Our CAs also enforce a strict no-logging policy for information like IP addresses that could tie a certificate back to a user.

Challenge 2: The Risk of Traceability Through Key Reuse. A significant privacy risk in any provenance system is traceability. If the same device or app-specific cryptographic key is used to sign multiple photos, those images can be linked by comparing the key. An adversary could potentially connect a photo someone posts publicly under their real name with a photo they post anonymously, deanonymizing the creator.

Our Solution: Unique Certificates. We eliminate this threat with a maximally private approach. Each key and certificate is used to sign exactly one image. No two images ever share the same public key, a "One-and-Done" Certificate Management Strategy, making it cryptographically impossible to link them. This engineering investment in user privacy is designed to set a clear standard for the industry.

Overall, you can use Content Credentials on Pixel 10 without fear that another person or Google could use it to link any of your images to you or one another.

Ready to Use When You Are - Even Offline

Implementations of Content Credentials use trusted time-stamps to ensure the credentials can be validated even after the certificate used to produce them expires. Obtaining these trusted time-stamps typically requires connectivity to a Time-Stamping Authority (TSA) server. But what happens if the device is offline?

This is not a far-fetched scenario. Imagine you’ve captured a stunning photo of a remote waterfall. The image has Content Credentials that prove that it was captured by a camera, but the cryptographic certificate used to produce them will eventually expire. Without a time-stamp, that proof could become untrusted, and you're too far from a cell signal, which is required to receive one.

To solve this, Pixel developed an on-device, offline TSA.

Powered by the security features of Tensor, Pixel maintains a trusted clock in a secure environment, completely isolated from the user-controlled one in Android. The clock is synchronized regularly from a trusted source while the device is online, and is maintained even after the device goes offline (as long as the phone remains powered on). This allows your device to generate its own cryptographically-signed time-stamps the moment you press the shutter—no connection required. It ensures the story behind your photo remains verifiable and trusted after its certificate expires, whether you took it in your living room or at the top of a mountain.

Building a More Trustworthy Ecosystem, Together

C2PA Content Credentials are not the sole solution for identifying the provenance of digital media. They are, however, a tangible step toward more media transparency and trust as we continue to unlock more human creativity with AI.

In our initial implementation of Content Credentials on the Android platform and Pixel 10 lineup, we prioritized a higher standard of privacy, security, and usability. We invite other implementers of Content Credentials to evaluate our approach and leverage these same foundational hardware and software security primitives. The full potential of these technologies can only be realized through widespread ecosystem adoption.

We look forward to adding Content Credentials across more Google products in the near future.

Android’s pKVM Becomes First Globally Certified Software to Achieve Prestigious SESIP Level 5 Security Certification

Posted by Dave Kleidermacher, VP Engineering, Android Security & Privacy

Today marks a watershed moment and new benchmark for open-source security and the future of consumer electronics. Google is proud to announce that protected KVM (pKVM), the hypervisor that powers the Android Virtualization Framework, has officially achieved SESIP Level 5 certification. This makes pKVM the first software security system designed for large-scale deployment in consumer electronics to meet this assurance bar.

Supporting Next-Gen Android Features

The implications for the future of secure mobile technology are profound. With this level of security assurance, Android is now positioned to securely support the next generation of high-criticality isolated workloads. This includes vital features, such as on-device AI workloads that can operate on ultra-personalized data, with the highest assurances of privacy and integrity.

This certification required a hands-on evaluation by Dekra, a globally recognized cybersecurity certification lab, which conducted an evaluation against the TrustCB SESIP scheme, compliant to EN-17927. Achieving Security Evaluation Standard for IoT Platforms (SESIP) Level 5 is a landmark because it incorporates AVA_VAN.5, the highest level of vulnerability analysis and penetration testing under the ISO 15408 (Common Criteria) standard. A system certified to this level has been evaluated to be resistant to highly skilled, knowledgeable, well-motivated, and well-funded attackers who may have insider knowledge and access.

This certification is the cornerstone of the next-generation of Android’s multi-layered security strategy. Many of the TEEs (Trusted Execution Environments) used in the industry have not been formally certified or have only achieved lower levels of security assurance. This inconsistency creates a challenge for developers looking to build highly critical applications that require a robust and verifiable level of security. The certified pKVM changes this paradigm entirely. It provides a single, open-source, and exceptionally high-quality firmware base that all device manufacturers can build upon.

Looking ahead, Android device manufacturers will be required to use isolation technology that meets this same level of security for various security operations that the device relies on. Protected KVM ensures that every user can benefit from a consistent, transparent, and verifiably secure foundation.

A Collaborative Effort

This achievement represents just one important aspect of the immense, multi-year dedication from the Linux and KVM developer communities and multiple engineering teams at Google developing pKVM and AVF. We look forward to seeing the open-source community and Android ecosystem continue to build on this foundation, delivering a new era of high-assurance mobile technology for users.

Introducing OSS Rebuild: Open Source, Rebuilt to Last

Posted by Matthew Suozzo, Google Open Source Security Team (GOSST)

Today we're excited to announce OSS Rebuild, a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts. As supply chain attacks continue to target widely-used dependencies, OSS Rebuild gives security teams powerful data to avoid compromise without burden on upstream maintainers.


The project comprises:


  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.

  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.

  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.

  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.


Challenges

Open source software has become the foundation of our digital world. From critical infrastructure to everyday applications, OSS components now account for 77% of modern applications. With an estimated value exceeding $12 trillion, open source software has never been more integral to the global economy.


Yet this very ubiquity makes open source an attractive target: Recent high-profile supply chain attacks have demonstrated sophisticated methods for compromising widely-used packages. Each incident erodes trust in open ecosystems, creating hesitation among both contributors and consumers.


The security community has responded with initiatives like OpenSSF Scorecard, pypi's Trusted Publishers, and npm's native SLSA support. However, there is no panacea: Each effort targets a certain aspect of the problem, often making tradeoffs like shifting work onto publishers and maintainers.

Our Aim

Our aim with OSS Rebuild is to empower the security community to deeply understand and control their supply chains by making package consumption as transparent as using a source repository. Our rebuild platform unlocks this transparency by utilizing a declarative build process, build instrumentation, and network monitoring capabilities which, within the SLSA Build framework, produces fine-grained, durable, trustworthy security metadata.


Building on the hosted infrastructure model that we pioneered with OSS Fuzz for memory issue detection, OSS Rebuild similarly seeks to use hosted resources to address security challenges in open source, this time aimed at securing the software supply chain.


Our vision extends beyond any single ecosystem: We are committed to bringing supply chain transparency and security to all open source software development. Our initial support for the PyPI (Python), npm (JS/TS), and Crates.io (Rust) package registries—providing rebuild provenance for many of their most popular packages—is just the beginning of our journey.


How OSS Rebuild Works




Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression). Once we reproduce the package, we publish the build definition and outcome via SLSA Provenance. This attestation allows consumers to reliably verify a package's origin within the source history, understand and repeat its build process, and customize the build from a known-functional baseline (or maybe even use it to generate more detailed SBOMs).


With OSS Rebuild's existing automation for PyPI, npm, and Crates.io, most packages obtain protection effortlessly without user or maintainer intervention. Where automation isn't currently able to fully reproduce the package, we offer manual build specification so the whole community benefits from individual contributions.


And we are also excited at the potential for AI to help reproduce packages: Build and release processes are often described in natural language documentation which, while difficult to utilize with discrete logic, is increasingly useful to language models. Our initial experiments have demonstrated the approach's viability in automating exploration and testing, with limited human intervention, even in the most complex builds.


Our Capabilities

OSS Rebuild helps detect several classes of supply chain compromise:

  • Unsubmitted Source Code - When published packages contain code not present in the public source repository, OSS Rebuild will not attest to the artifact.

  • Build Environment Compromise - By creating standardized, minimal build environments with comprehensive monitoring, OSS Rebuild can detect suspicious build activity or avoid exposure to compromised components altogether.

  • Stealthy Backdoors - Even sophisticated backdoors like xz often exhibit anomalous behavioral patterns during builds. OSS Rebuild's dynamic analysis capabilities can detect unusual execution paths or suspicious operations that are otherwise impractical to identify through manual review.

For enterprises and security professionals, OSS Rebuild can...

  • Enhance metadata without changing registries by enriching data for upstream packages. No need to maintain custom registries or migrate to a new package ecosystem.

  • Augment SBOMs by adding detailed build observability information to existing Software Bills of Materials, creating a more complete security picture.

  • Accelerate vulnerability response by providing a path to vendor, patch, and re-host upstream packages using our verifiable build definitions.


For publishers and maintainers of open source packages, OSS Rebuild can...

  • Strengthen package trust by providing consumers with independent verification of the packages' build integrity, regardless of the sophistication of the original build.

  • Retrofit historical packages' integrity with high-quality build attestations, regardless of whether build attestations were present or supported at the time of publication.

  • Reduce CI security-sensitivity allowing publishers to focus on core development work. CI platforms tend to have complex authorization and execution models and by performing separate rebuilds, the CI environment no longer needs to be load-bearing for your packages' security.


Check it out!

The easiest (but not only!) way to access OSS Rebuild attestations is to use the provided Go-based command-line interface. It can be compiled and installed easily:


$ go install github.com/google/oss-rebuild/cmd/oss-rebuild@latest


You can fetch OSS Rebuild's SLSA Provenance:

$ oss-rebuild get cratesio syn 2.0.39


..or explore the rebuilt versions of a particular package:


$ oss-rebuild list pypi absl-py

..or even rebuild the package for yourself:


$ oss-rebuild get npm lodash 4.17.20 --output=dockerfile | \

   docker run $(docker buildx build -q -)

Join Us in Helping Secure Open Source

OSS Rebuild is not just about fixing problems; it's about empowering end-users to make open source ecosystems more secure and transparent through collective action. If you're a developer, enterprise, or security researcher interested in OSS security, we invite you to follow along and get involved!


Advancing Protection in Chrome on Android

By: Google
Posted by David Adrian, Javier Castro & Peter Kotwicz, Chrome Security Team

Android recently announced Advanced Protection, which extends Google’s Advanced Protection Program to a device-level security setting for Android users that need heightened security—such as journalists, elected officials, and public figures. Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re better protected against the most sophisticated threats.

Advanced Protection acts as a single control point for at-risk users on Android that enables important security settings across applications, including many of your favorite Google apps, including Chrome. In this post, we’d like to do a deep dive into the Chrome features that are integrated with Advanced Protection, and how enterprises and users outside of Advanced Protection can leverage them.

Android Advanced Protection integrates with Chrome on Android in three main ways:

  • Enables the “Always Use Secure Connections” setting for both public and private sites, so that users are protected from attackers reading confidential data or injecting malicious content into insecure plaintext HTTP connections. Insecure HTTP represents less than 1% of page loads for Chrome on Android.
  • Enables full Site Isolation on mobile devices with 4GB+ RAM, so that potentially malicious sites are never loaded in the same process as legitimate websites. Desktop Chrome clients already have full Site Isolation.
  • Reduces attack surface by disabling Javascript optimizations, so that Chrome has a smaller attack surface and is harder to exploit.

Let’s take a look at all three, learn what they do, and how they can be controlled outside of Advanced Protection.

Always Use Secure Connections

“Always Use Secure Connections” (also known as HTTPS-First Mode in blog posts and HTTPS-Only Mode in the enterprise policy) is a Chrome setting that forces HTTPS wherever possible, and asks for explicit permission from you before connecting to a site insecurely. There may be attackers attempting to interpose on connections on any network, whether that network is a coffee shop, airport, or an Internet backbone. This setting protects users from these attackers reading confidential data and injecting malicious content into otherwise innocuous webpages. This is particularly useful for Advanced Protection users, since in 2023, plaintext HTTP was used as an exploitation vector during the Egyptian election.

Beyond Advanced Protection, we previously posted about how our goal is to eventually enable “Always Use Secure Connections” by default for all Chrome users. As we work towards this goal, in the last two years we have quietly been enabling it in more places beyond Advanced Protection, to help protect more users in risky situations, while limiting the number of warnings users might click through:

  • We added a new variant of the setting that only warns on public sites, and doesn’t warn on local networks or single-label hostnames (e.g. 192.168.0.1, shortlink/, 10.0.0.1). These names often cannot be issued a publicly-trusted HTTPS certificate. This variant protects against most threats—accessing a public website insecurely—but still allows for users to access local sites, which may be on a more trusted network, without seeing a warning.
  • We’ve automatically enabled “Always Use Secure Connections” for public sites in Incognito Mode for the last year, since Chrome 127 in June 2024.
  • We automatically prevent downgrades from HTTPS to plaintext HTTP on sites that Chrome knows you typically access over HTTPS (a heuristic version of the HSTS header), since Chrome 133 in January 2025.

Always Use Secure Connections has two modes—warn on insecure public sites, and warn on any insecure site.

Any user can enable “Always Use Secure Connections” in the Chrome Privacy and Security settings, regardless of if they’re using Advanced Protection. Users can choose if they would like to warn on any insecure site, or only insecure public sites. Enterprises can opt their fleet into either mode, and set exceptions using the HTTPSOnlyMode and HTTPAllowlist policies, respectively. Website operators should protect their users' confidentiality, ensure their content is delivered exactly as they intended, and avoid warnings, by deploying HTTPS.

Full Site Isolation

Site Isolation is a security feature in Chrome that isolates each website into its own rendering OS process. This means that different websites, even if loaded in a single tab of the same browser window, are kept completely separate from each other in memory. This isolation prevents a malicious website from accessing data or code from another website, even if that malicious website manages to exploit a vulnerability in Chrome’s renderer—a second bug to escape the renderer sandbox is required to access other sites. Site isolation improves security, but requires extra memory to have one process per site. Chrome Desktop isolates all sites by default. However, Android is particularly sensitive to memory usage, so for mobile Android form factors, when Advanced Protection is off, Chrome will only isolate a site if a user logs into that site, or if the user submits a form on that site. On Android devices with 4GB+ RAM in Advanced Protection (and on all desktop clients), Chrome will isolate all sites. Full Site Isolation significantly reduces the risk of cross-site data leakage for Advanced Protection users.

JavaScript Optimizations and Security

Advanced Protection reduces the attack surface of Chrome by disabling the higher-level optimizing Javascript compilers inside V8. V8 is Chrome’s high-performance Javascript and WebAssembly engine. The optimizing compilers in V8 make certain websites run faster, however they historically also have been a source of known exploitation of Chrome. Of all the patched security bugs in V8 with known exploitation, disabling the optimizers would have mitigated ~50%. However, the optimizers are why Chrome scores the highest on industry-wide benchmarks such as Speedometer. Disabling the optimizers blocks a large class of exploits, at the cost of causing performance issues for some websites.

Javascript optimizers can be disabled outside of Advanced Protection Mode via the “Javascript optimization & security” Site Setting. The Site Setting also enables users to disable/enable Javascript optimizers on a per-site basis. Disabling these optimizing compilers is not limited to Advanced Protection. Since Chrome 133, we’ve exposed this as a Site Setting that allows users to enable or disable the higher-level optimizing compilers on a per-site basis, as well as change the default.

Settings -> Privacy and Security -> Javascript optimization and security

This setting can be controlled by the DefaultJavaScriptOptimizerSetting enterprise policy, alongside JavaScriptOptimizerAllowedForSites and JavaScriptOptimizerBlockedForSites for managing the allowlist and denylist. Enterprises can use this policy to block access to the optimizer, while still allowlisting1 the SaaS vendors their employees use on a daily basis. It’s available on Android and desktop platforms

Chrome aims for the default configuration to be secure for all its users, and we’re continuing to raise the bar for V8 security in the default configuration by rolling out the V8 sandbox.

Protecting All Users

Billions of people use Chrome and Android, and not all of them have the same risk profile. Less sophisticated attacks by commodity malware can be very lucrative for attackers when done at scale, but so can sophisticated attacks on targeted users. This means that we cannot expect the security tradeoffs we make for the default configuration of Chrome to be suitable for everyone.

Advanced Protection, and the security settings associated with it, are a way for users with varying risk profiles to tailor Chrome to their security needs, either as an individual at-risk user. Enterprises with a fleet of managed Chrome installations can also enable the underlying settings now. Advanced Protection is available on Android 16 in Chrome 137+.

We additionally recommend at-risk users join the Advanced Protection Program with their Google accounts, which will require the account to use phishing-resistant multi-factor authentication methods and enable Advanced Protection on any of the user’s Android devices. We also recommend users enable automatic updates and always keep their Android phones and web browsers up to date.

Notes


  1. Allowlisting only works on platforms capable of full site isolation—any desktop platform and Android devices with 2GB+ RAM. This is because internally allowlisting is dependent on origin isolation

Mitigating prompt injection attacks with a layered defense strategy

Posted by Google GenAI Security Team

With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves. One such emerging attack vector is indirect prompt injections. Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions. As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.


At Google, our teams have a longstanding precedent of investing in a defense-in-depth strategy, including robust evaluation, threat analysis, AI security best practices, AI red-teaming, adversarial training, and model hardening for generative AI tools. This approach enables safer adoption of Gemini in Google Workspace and the Gemini app (we refer to both in this blog as “Gemini” for simplicity). Below we describe our prompt injection mitigation product strategy based on extensive research, development, and deployment of improved security mitigations.


A layered security approach

Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle. From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker. This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources. 


Our model training with adversarial data significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models (technical details). This inherent model resilience is augmented with additional defenses that we built directly into Gemini, including: 


  1. Prompt injection content classifiers

  2. Security thought reinforcement

  3. Markdown sanitization and suspicious URL redaction

  4. User confirmation framework

  5. End-user security mitigation notifications


This layered approach to our security strategy strengthens the overall security framework for Gemini – throughout the prompt lifecycle and across diverse attack techniques.


1. Prompt injection content classifiers


Through collaboration with leading AI security researchers via Google's AI Vulnerability Reward Program (VRP), we've curated one of the world’s most advanced catalogs of generative AI vulnerabilities and adversarial data. Utilizing this resource, we built and are in the process of rolling out proprietary machine learning models that can detect malicious prompts and instructions within various formats, such as emails and files, drawing from real-world examples. Consequently, when users query Workspace data with Gemini, the content classifiers filter out harmful data containing malicious instructions, helping to ensure a secure end-to-end user experience by retaining only safe content. For example, if a user receives an email in Gmail that includes malicious instructions, our content classifiers help to detect and disregard malicious instructions, then generate a safe response for the user. This is in addition to built-in defenses in Gmail that automatically block more than 99.9% of spam, phishing attempts, and malware.


A diagram of Gemini’s actions based on the detection of the malicious instructions by content classifiers.


2. Security thought reinforcement


This technique adds targeted security instructions surrounding the prompt content to remind the large language model (LLM) to perform the user-directed task and ignore any adversarial instructions that could be present in the content. With this approach, we steer the LLM to stay focused on the task and ignore harmful or malicious requests added by a threat actor to execute indirect prompt injection attacks.

A diagram of Gemini’s actions based on additional protection provided by the security thought reinforcement technique. 


3. Markdown sanitization and suspicious URL redaction 


Our markdown sanitizer identifies external image URLs and will not render them, making the “EchoLeak” 0-click image rendering exfiltration vulnerability not applicable to Gemini. From there, a key protection against prompt injection and data exfiltration attacks occurs at the URL level. With external data containing dynamic URLs, users may encounter unknown risks as these URLs may be designed for indirect prompt injections and data exfiltration attacks. Malicious instructions executed on a user's behalf may also generate harmful URLs. With Gemini, our defense system includes suspicious URL detection based on Google Safe Browsing to differentiate between safe and unsafe links, providing a secure experience by helping to prevent URL-based attacks. For example, if a document contains malicious URLs and a user is summarizing the content with Gemini, the suspicious URLs will be redacted in Gemini’s response. 


Gemini in Gmail provides a summary of an email thread. In the summary, there is an unsafe URL. That URL is redacted in the response and is replaced with the text “suspicious link removed”. 


4. User confirmation framework


Gemini also features a contextual user confirmation system. This framework enables Gemini to require user confirmation for certain actions, also known as “Human-In-The-Loop” (HITL), using these responses to bolster security and streamline the user experience. For example, potentially risky operations like deleting a calendar event may trigger an explicit user confirmation request, thereby helping to prevent undetected or immediate execution of the operation.


The Gemini app with instructions to delete all events on Saturday. Gemini responds with the events found on Google Calendar and asks the user to confirm this action.


5. End-user security mitigation notifications


A key aspect to keeping our users safe is sharing details on attacks that we’ve stopped so users can watch out for similar attacks in the future. To that end, when security issues are mitigated with our built-in defenses, end users are provided with contextual information allowing them to learn more via dedicated help center articles. For example, if Gemini summarizes a file containing malicious instructions and one of Google’s prompt injection defenses mitigates the situation, a security notification with a “Learn more” link will be displayed for the user. Users are encouraged to become more familiar with our prompt injection defenses by reading the Help Center article


Gemini in Docs with instructions to provide a summary of a file. Suspicious content was detected and a response was not provided. There is a yellow security notification banner for the user and a statement that Gemini’s response has been removed, with a “Learn more” link to a relevant Help Center article.

Moving forward


Our comprehensive prompt injection security strategy strengthens the overall security framework for Gemini. Beyond the techniques described above, it also involves rigorous testing through manual and automated red teams, generative AI security BugSWAT events, strong security standards like our Secure AI Framework (SAIF), and partnerships with both external researchers via the Google AI Vulnerability Reward Program (VRP) and industry peers via the Coalition for Secure AI (CoSAI). Our commitment to trust includes collaboration with the security community to responsibly disclose AI security vulnerabilities, share our latest threat intelligence on ways we see bad actors trying to leverage AI, and offering insights into our work to build stronger prompt injection defenses. 


Working closely with industry partners is crucial to building stronger protections for all of our users. To that end, we’re fortunate to have strong collaborative partnerships with numerous researchers, such as Ben Nassi (Confidentiality), Stav Cohen (Technion), and Or Yair (SafeBreach), as well as other AI Security researchers participating in our BugSWAT events and AI VRP program. We appreciate the work of these researchers and others in the community to help us red team and refine our defenses.


We continue working to make upcoming Gemini models inherently more resilient and add additional prompt injection defenses directly into Gemini later this year. To learn more about Google’s progress and research on generative AI threat actors, attack techniques, and vulnerabilities, take a look at the following resources:


Sustaining Digital Certificate Security - Upcoming Changes to the Chrome Root Store

Posted by Chrome Root Program, Chrome Security Team

Note: Google Chrome communicated its removal of default trust of Chunghwa Telecom and Netlock in the public forum on May 30, 2025.

The Chrome Root Program Policy states that Certification Authority (CA) certificates included in the Chrome Root Store must provide value to Chrome end users that exceeds the risk of their continued inclusion. It also describes many of the factors we consider significant when CA Owners disclose and respond to incidents. When things don’t go right, we expect CA Owners to commit to meaningful and demonstrable change resulting in evidenced continuous improvement.

Chrome's confidence in the reliability of Chunghwa Telecom and Netlock as CA Owners included in the Chrome Root Store has diminished due to patterns of concerning behavior observed over the past year. These patterns represent a loss of integrity and fall short of expectations, eroding trust in these CA Owners as publicly-trusted certificate issuers trusted by default in Chrome. To safeguard Chrome’s users, and preserve the integrity of the Chrome Root Store, we are taking the following action.

Upcoming change in Chrome 139 and higher:

This approach attempts to minimize disruption to existing subscribers using a previously announced Chrome feature to remove default trust based on the SCTs in certificates.

Additionally, should a Chrome user or enterprise explicitly trust any of the above certificates on a platform and version of Chrome relying on the Chrome Root Store (e.g., explicit trust is conveyed through a Group Policy Object on Windows), the SCT-based constraints described above will be overridden and certificates will function as they do today.

To further minimize risk of disruption, website operators are encouraged to review the “Frequently Asked Questions" listed below.

Why is Chrome taking action?

CAs serve a privileged and trusted role on the internet that underpin encrypted connections between browsers and websites. With this tremendous responsibility comes an expectation of adhering to reasonable and consensus-driven security and compliance expectations, including those defined by the CA/Browser Forum TLS Baseline Requirements.

Over the past several months and years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports. When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the internet, continued public trust is no longer justified.

When will this action happen?

The action of Chrome, by default, no longer trusting new TLS certificates issued by these CAs will begin on approximately August 1, 2025, affecting certificates issued at that point or later.

This action will occur in Versions of Chrome 139 and greater on Windows, macOS, ChromeOS, Android, and Linux. Apple policies prevent the Chrome Certificate Verifier and corresponding Chrome Root Store from being used on Chrome for iOS.

What is the user impact of this action?

By default, Chrome users in the above populations who navigate to a website serving a certificate from Chunghwa Telecom or Netlock issued after July 31, 2025 will see a full page interstitial similar to this one.

Certificates issued by other CAs are not impacted by this action.

How can a website operator tell if their website is affected?

Website operators can determine if they are affected by this action by using the Chrome Certificate Viewer.

Use the Chrome Certificate Viewer

  • Navigate to a website (e.g., https://www.google.com)
  • Click the “Tune" icon
  • Click “Connection is Secure"
  • Click “Certificate is Valid" (the Chrome Certificate Viewer will open)
    • Website owner action is not required, if the “Organization (O)” field listed beneath the “Issued By" heading does not contain “Chunghwa Telecom" , “行政院” , “NETLOCK Ltd.”, or “NETLOCK Kft.”
    • Website owner action is required, if the “Organization (O)” field listed beneath the “Issued By" heading contains “Chunghwa Telecom" , “行政院” , “NETLOCK Ltd.”, or “NETLOCK Kft.”

What does an affected website operator do?

We recommend that affected website operators transition to a new publicly-trusted CA Owner as soon as reasonably possible. To avoid adverse website user impact, action must be completed before the existing certificate(s) expire if expiry is planned to take place after July 31, 2025.

While website operators could delay the impact of blocking action by choosing to collect and install a new TLS certificate issued from Chunghwa Telecom or Netlock before Chrome’s blocking action begins on August 1, 2025, website operators will inevitably need to collect and install a new TLS certificate from one of the many other CAs included in the Chrome Root Store.

Can I test these changes before they take effect?

Yes.

A command-line flag was added beginning in Chrome 128 that allows administrators and power users to simulate the effect of an SCTNotAfter distrust constraint as described in this blog post.

How to: Simulate an SCTNotAfter distrust

1. Close all open versions of Chrome

2. Start Chrome using the following command-line flag, substituting variables described below with actual values

--test-crs-constraints=$[Comma Separated List of Trust Anchor Certificate SHA256 Hashes]:sctnotafter=$[epoch_timestamp]

3. Evaluate the effects of the flag with test websites

Learn more about command-line flags here.

I use affected certificates for my internal enterprise network, do I need to do anything?

Beginning in Chrome 127, enterprises can override Chrome Root Store constraints like those described in this blog post by installing the corresponding root CA certificate as a locally-trusted root on the platform Chrome is running (e.g., installed in the Microsoft Certificate Store as a Trusted Root CA).

How do enterprises add a CA as locally-trusted?

Customer organizations should use this enterprise policy or defer to platform provider guidance for trusting root CA certificates.

What about other Google products?

Other Google product team updates may be made available in the future.

Tracking the Cost of Quantum Factoring

Posted by Craig Gidney, Quantum Research Scientist, and Sophie Schmieg, Senior Staff Cryptography Engineer 

Google Quantum AI's mission is to build best in class quantum computing for otherwise unsolvable problems. For decades the quantum and security communities have also known that large-scale quantum computers will at some point in the future likely be able to break many of today’s secure public key cryptography algorithms, such as Rivest–Shamir–Adleman (RSA). Google has long worked with the U.S. National Institute of Standards and Technology (NIST) and others in government, industry, and academia to develop and transition to post-quantum cryptography (PQC), which is expected to be resistant to quantum computing attacks. As quantum computing technology continues to advance, ongoing multi-stakeholder collaboration and action on PQC is critical.


In order to plan for the transition from today’s cryptosystems to an era of PQC, it's important the size and performance of a future quantum computer that could likely break current cryptography algorithms is carefully characterized. Yesterday, we published a preprint demonstrating that 2048-bit RSA encryption could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week. This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019. Notably, quantum computers with relevant error rates currently have on the order of only 100 to 1000 qubits, and the National Institute of Standards and Technology (NIST) recently released standard PQC algorithms that are expected to be resistant to future large-scale quantum computers. However, this new result does underscore the importance of migrating to these standards in line with NIST recommended timelines


Estimated resources for factoring have been steadily decreasing

Quantum computers break RSA by factoring numbers, using Shor’s algorithm. Since Peter Shor published this algorithm in 1994, the estimated number of qubits needed to run it has steadily decreased. For example, in 2012, it was estimated that a 2048-bit RSA key could be broken by a quantum computer with a billion physical qubits. In 2019, using the same physical assumptions – which consider qubits with a slightly lower error rate than Google Quantum AI’s current quantum computers – the estimate was lowered to 20 million physical qubits.



Historical estimates of the number of physical qubits needed to factor 2048-bit RSA integers.


This result represents a 20-fold decrease compared to our estimate from 2019

The reduction in physical qubit count comes from two sources: better algorithms and better error correction – whereby qubits used by the algorithm ("logical qubits") are redundantly encoded across many physical qubits, so that errors can be detected and corrected.


On the algorithmic side, the key change is to compute an approximate modular exponentiation rather than an exact one. An algorithm for doing this, while using only small work registers, was discovered in 2024 by Chevignard and Fouque and Schrottenloher. Their algorithm used 1000x more operations than prior work, but we found ways to reduce that overhead down to 2x.


On the error correction side, the key change is tripling the storage density of idle logical qubits by adding a second layer of error correction. Normally more error correction layers means more overhead, but a good combination was discovered by the Google Quantum AI team in 2023. Another notable error correction improvement is using "magic state cultivation", proposed by the Google Quantum AI team in 2024, to reduce the workspace required for certain basic quantum operations. These error correction improvements aren't specific to factoring and also reduce the required resources for other quantum computations like in chemistry and materials simulation.


Security implications

NIST recently concluded a PQC competition that resulted in the first set of PQC standards. These algorithms can already be deployed to defend against quantum computers well before a working cryptographically relevant quantum computer is built. 


To assess the security implications of quantum computers, however, it’s instructive to additionally take a closer look at the affected algorithms (see here for a detailed look): RSA and Elliptic Curve Diffie-Hellman. As asymmetric algorithms, they are used for encryption in transit, including encryption for messaging services, as well as digital signatures (widely used to prove the authenticity of documents or software, e.g. the identity of websites). For asymmetric encryption, in particular encryption in transit, the motivation to migrate to PQC is made more urgent due to the fact that an adversary can collect ciphertexts, and later decrypt them once a quantum computer is available, known as a “store now, decrypt later” attack. Google has therefore been encrypting traffic both in Chrome and internally, switching to the standardized version of ML-KEM once it became available. Notably not affected is symmetric cryptography, which is primarily deployed in encryption at rest, and to enable some stateless services.


For signatures, things are more complex. Some signature use cases are similarly urgent, e.g., when public keys are fixed in hardware. In general, the landscape for signatures is mostly remarkable due to the higher complexity of the transition, since signature keys are used in many different places, and since these keys tend to be longer lived than the usually ephemeral encryption keys. Signature keys are therefore harder to replace and much more attractive targets to attack, especially when compute time on a quantum computer is a limited resource. This complexity likewise motivates moving earlier rather than later. To enable this, we have added PQC signature schemes in public preview in Cloud KMS. 


The initial public draft of the NIST internal report on the transition to post-quantum cryptography standards states that vulnerable systems should be deprecated after 2030 and disallowed after 2035. Our work highlights the importance of adhering to this recommended timeline.



More from Google on PQC: https://cloud.google.com/security/resources/post-quantum-cryptography?e=48754805 


What’s New in Android Security and Privacy in 2025

Posted by Dave Kleidermacher, VP Engineering, Android Security and Privacy

Android’s intelligent protections keep you safe from everyday dangers. Our dedication to your security is validated by security experts, who consistently rank top Android devices highest in security, and score Android smartphones, led by the Pixel 9 Pro, as leaders in anti-fraud efficacy.

Android is always developing new protections to keep you, your device, and your data safe. Today, we’re announcing new features and enhancements that build on our industry-leading protections to help keep you safe from scams, fraud, and theft on Android.

Smarter protections against phone call scams

Our research shows that phone scammers often try to trick people into performing specific actions to initiate a scam, like changing default device security settings or granting elevated permissions to an app. These actions can result in spying, fraud, and other abuse by giving an attacker deeper access to your device and data. To combat phone scammers, we’re working to block specific actions and warn you of these sophisticated attempts. This happens completely on device and is applied only with conversations with non-contacts.

Android’s new in-call protections1 provide an additional layer of defense, preventing you from taking risky security actions during a call like:

  • Disabling Google Play Protect, Android’s built-in security protection, that is on by default and continuously scans for malicious app behavior, no matter the download source.
  • Sideloading an app for the first time from a web browser, messaging app or other source – which may not have been vetted for security and privacy by Google.
  • Granting accessibility permissions, which can give a newly downloaded malicious app access to gain control over the user's device and steal sensitive/private data, like banking information.

And if you’re screen sharing during a phone call, Android will now automatically prompt you to stop sharing at the end of a call. These protections help safeguard you against scammers that attempt to gain access to sensitive information to conduct fraud.

Piloting enhanced in-call protection for banking apps


Screen sharing scams are becoming quite common, with fraudsters often impersonating banks, government agencies, and other trusted institutions – using screen sharing to guide users to perform costly actions such as mobile banking transfers. To better protect you from these attacks, we’re piloting new in-call protections for banking apps, starting in the UK.

When you launch a participating banking app while screen sharing with an unknown contact, your Android device will warn you about the potential dangers and give you the option to end the call and to stop screen sharing with one tap.

This feature will be enabled automatically for participating banking apps whenever you're on a phone call with an unknown contact on Android 11+ devices. We are working with UK banks Monzo, NatWest and Revolut to pilot this feature for their customers in the coming weeks and will assess the results of the pilot ahead of a wider roll out.


Making real-time Scam Detection in Google Messages even more intelligent


We recently launched AI-powered Scam Detection in Google Messages and Phone by Google to protect you from conversational scams that might sound innocent at first, but turn malicious and can lead to financial loss or data theft. When Scam Detection discovers a suspicious conversation pattern, it warns you in real-time so you can react before falling victim to a costly scam.

AI-powered Scam Detection is always improving to help keep you safe while also keeping your privacy in mind. With Google’s advanced on-device AI, your conversations stay private to you. All message processing remains on-device and you’re always in control. You can turn off Spam Protection, which includes Scam Detection, in your Google Messages at any time.

Prior to targeting conversational scams, Scam Detection in Google Messages focused on analyzing and detecting package delivery and job seeking scams. We’ve now expanded our detections to help protect you from a wider variety of sophisticated scams including:

  • Toll road and other billing fee scams
  • Crypto scams
  • Financial impersonation scams
  • Gift card and prize scams
  • Technical support scams
  • And more
These enhancements apply to all Google Messages users.

Fighting fraud and impersonation with Key Verifier

To help protect you from scammers who try to impersonate someone you know, we’re launching a helpful tool called Key Verifier. The feature allows you and the person you’re messaging to verify the identity of the other party through public encryption keys, protecting your end-to-end encrypted messages in Google Messages. By verifying contact keys in your Google Contacts app (through a QR code scanning or number comparison), you can have an extra layer of assurance that the person on the other end is genuine and that your conversation is private with them.

Key Verifier provides a visual way for you and your contact to quickly confirm that your secret keys match, strengthening your confidence that you’re communicating with the intended recipient and not a scammer. For example, if an attacker gains access to a friend’s phone number and uses it on another device to send you a message – which can happen as a result of a SIM swap attack – their contact's verification status will be marked as no longer verified in the Google Contacts app, suggesting your friend’s account may be compromised or has been changed. Key Verifier will launch later this summer in Google Messages on Android 10+ devices.

Comprehensive mobile theft protection, now even stronger


Physical device theft can lead to financial fraud and data theft, with the value of your banking and payment information many times exceeding the value of your phone. This is one of the reasons why last year we launched the mobile industry’s most comprehensive suite of theft protection features to protect you before, during, and after a theft. Since launch, our theft protection features have helped protect data on hundreds of thousands of devices that may have fallen into the wrong hands. This includes devices that were locked by Remote Lock or Theft Detection Lock and remained locked for over 48 hours.

Most recently, we launched Identity Check for Pixel and Samsung One UI 7 devices, providing an extra layer of security even if your PIN or password is compromised. This protection will also now be available from more device manufacturers on supported devices that upgrade to Android 16.

Coming later this year, we’re further hardening Factory Reset protections, which will restrict all functionalities on devices that are reset without the owner’s authorization. You'll also gain more control over our Remote Lock feature with the addition of a security challenge question, helping to prevent unauthorized actions.

We’re also enhancing your security against thieves in Android 16 by providing more protection for one-time passwords that are received when your phone is locked. In higher risk scenarios2, Android will hide one-time passwords on your lock screen, ensuring that only you can see them after unlocking your device.

Advanced Protection: Google’s strongest security for mobile devices

Protecting users who need heightened security has been a long-standing commitment at Google, which is why we have our Advanced Protection Program that provides Google’s strongest protections against targeted attacks.

To enhance these existing device defenses, Android 16 extends Advanced Protection with a device-level security setting for Android users. Whether you’re an at-risk individual – such as a journalist, elected official, or public figure – or you just prioritize security, Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re protected against the most sophisticated threats.

Advanced Protection is available on devices with Android 16. Learn more in our blog.

More intelligent defenses against bad apps with Google Play Protect

One way malicious developers try to trick people is by hiding or changing their app icon, making unsafe apps more difficult to find and remove. Now, Google Play Protect live threat detection will catch apps and alert you when we detect this deceptive behavior. This feature will be available to Google Pixel 6+ and a selection of new devices from other manufacturers in the coming months.

Google Play Protect always checks each app before it gets installed on your device, regardless of the install source. It conducts real-time scanning of an app, enhanced by on-device machine learning, when users try to install an app that has never been seen by Google Play Protect to help detect emerging threats.

We’ve made Google Play Protect’s on-device capabilities smarter to help us identify more malicious applications even faster to keep you safe. Google Play Protect now uses a new set of on-device rules to specifically look for text or binary patterns to quickly identify malware families. If an app shows these malicious patterns, we can alert you before you even install it. And to keep you safe from new and emerging malware and their variants, we will update these rules frequently for better classification over time.

This update to Google Play Protect is now available globally for all Android users with Google Play services.

Always advancing Android security


In addition to new features that come in numbered Android releases, we're constantly enhancing your protection on Android through seamless Google Play services updates and other improvements, ensuring you benefit from the latest security advancements continuously. This allows us to rapidly deploy critical defenses and keep you ahead of emerging threats, making your Android experience safer every day.

Through close collaboration with our partners across the Android ecosystem and the broader security community, we remain focused on bringing you security enhancements and innovative new features to help keep you safe.

Notes


  1. In-call protection for disabling Google Play Protect is available on Android 6+ devices. Protections for sideloading an app and turning on accessibility permissions are available on Android 16 devices. 

  2. When a user’s device is not connected to Wi-Fi and has not been recently unlocked 

Advanced Protection: Google’s Strongest Security for Mobile Devices

Posted by Il-Sung Lee, Group Product Manager, Android Security

Protecting users who need heightened security has been a long-standing commitment at Google, which is why we have our Advanced Protection Program that provides Google’s strongest protections against targeted attacks.

To enhance these existing device defenses, Android 16 extends Advanced Protection with a device-level security setting for Android users. Whether you’re an at-risk individual – such as a journalist, elected official, or public figure – or you just prioritize security, Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re protected against the most sophisticated threats.

Simple to activate, powerful in protection

Advanced Protection ensures all of Android's highest security features are enabled and are seamlessly working together to safeguard you against online attacks, harmful apps, and data risks.

Advanced Protection activates a powerful array of security features, combining new capabilities with pre-existing ones that have earned top ratings in security comparisons, all designed to protect your device across several critical areas.

We're also introducing innovative, Android-specific features, such as Intrusion Logging. This industry-first feature securely backs up device logs in a privacy-preserving and tamper-resistant way, accessible only to the user. These logs enable a forensic analysis if a device compromise is ever suspected.

Advanced Protection gives users:

  • Best-in-class protection, minimal disruption: Advanced Protection gives users the option to equip their devices with Android’s most effective security features for proactive defense, with a user-friendly and low-friction experience.
  • Easy activation: Advanced Protection makes security easy and accessible. You don’t need to be a security expert to benefit from enhanced security.
  • Defense-in-depth: Once a user turns on Advanced Protection, the system prevents accidental or malicious disablement of the individual security features under the Advanced Protection umbrella. This reflects a "defense-in-depth" strategy, where multiple security layers work together.
  • Seamless security integration with apps: Advanced Protection acts as a single control point that enables important security settings across many of your favorite Google apps, including Chrome, Google Message, and Phone by Google. Advanced Protection will also incorporate third-party applications that choose to integrate in the future.

How your Android device becomes fortified with Advanced Protection

Advanced Protection manages the following existing and new security features for your device, ensuring they are activated and cannot be disabled across critical protection areas:

Continuously evolving Advanced Protection

With the release of Android 16, users who choose to activate Advanced Protection will gain immediate access to a core suite of enhanced security features. Additional Advanced Protection features like Intrusion Logging, USB protection, the option to disable auto-reconnect to insecure networks, and integration with Scam Detection for Phone by Google will become available later this year.

We are committed to continuously expanding the security and privacy capabilities within Advanced Protection, so users can benefit from the best of Android’s powerful security features.

Using AI to stop tech support scams in Chrome

By: Google
Posted by Jasika Bawa, Andy Lim, and Xinghui Lu, Google Chrome Security

Tech support scams are an increasingly prevalent form of cybercrime, characterized by deceptive tactics aimed at extorting money or gaining unauthorized access to sensitive data. In a tech support scam, the goal of the scammer is to trick you into believing your computer has a serious problem, such as a virus or malware infection, and then convince you to pay for unnecessary services, software, or grant them remote access to your device. Tech support scams on the web often employ alarming pop-up warnings mimicking legitimate security alerts. We've also observed them to use full-screen takeovers and disable keyboard and mouse input to create a sense of crisis.

Chrome has always worked with Google Safe Browsing to help keep you safe online. Now, with this week's launch of Chrome 137, Chrome will offer an additional layer of protection using the on-device Gemini Nano large language model (LLM). This new feature will leverage the LLM to generate signals that will be used by Safe Browsing in order to deliver higher confidence verdicts about potentially dangerous sites like tech support scams.

Initial research using LLMs has shown that they are relatively effective at understanding and classifying the varied, complex nature of websites. As such, we believe we can leverage LLMs to help detect scams at scale and adapt to new tactics more quickly. But why on-device? Leveraging LLMs on-device allows us to see threats when users see them. We’ve found that the average malicious site exists for less than 10 minutes, so on-device protection allows us to detect and block attacks that haven't been crawled before. The on-device approach also empowers us to see threats the way users see them. Sites can render themselves differently for different users, often for legitimate purposes (e.g. to account for device differences, offer personalization, provide time-sensitive content), but sometimes for illegitimate purposes (e.g. to evade security crawlers) – as such, having visibility into how sites are presenting themselves to real users enhances our ability to assess the web.

How it works

At a high level, here's how this new layer of protection works.

Overview of how on-device LLM assistance in mitigating scams works

When a user navigates to a potentially dangerous page, specific triggers that are characteristic of tech support scams (for example, the use of the keyboard lock API) will cause Chrome to evaluate the page using the on-device Gemini Nano LLM. Chrome provides the LLM with the contents of the page that the user is on and queries it to extract security signals, such as the intent of the page. This information is then sent to Safe Browsing for a final verdict. If Safe Browsing determines that the page is likely to be a scam based on the LLM output it receives from the client, in addition to other intelligence and metadata about the site, Chrome will show a warning interstitial.

This is all done in a way that preserves performance and privacy. In addition to ensuring that the LLM is only triggered sparingly and run locally on the device, we carefully manage resource consumption by considering the number of tokens used, running the process asynchronously to avoid interrupting browser activity, and implementing throttling and quota enforcement mechanisms to limit GPU usage. LLM-summarized security signals are only sent to Safe Browsing for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome, giving them protection against threats Google may not have seen before. Standard Protection users will also benefit indirectly from this feature as we add newly discovered dangerous sites to blocklists.

Future considerations

The scam landscape continues to evolve, with bad actors constantly adapting their tactics. Beyond tech support scams, in the future we plan to use the capabilities described in this post to help detect other popular scam types, such as package tracking scams and unpaid toll scams. We also plan to utilize the growing power of Gemini to extract additional signals from website content, which will further enhance our detection capabilities. To protect even more users from scams, we are working on rolling out this feature to Chrome on Android later this year. And finally, we are collaborating with our research counterparts to explore solutions to potential exploits such as prompt injection in content and timing bypass.

Google announces Sec-Gemini v1, a new experimental cybersecurity model

Posted by Elie Burzstein and Marianna Tishchenko, Sec-Gemini team



Today, we’re announcing Sec-Gemini v1, a new experimental AI model focused on advancing cybersecurity AI frontiers. 



As outlined a year ago, defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. This fundamental asymmetry has made securing systems extremely difficult, time consuming and error prone. AI-powered cybersecurity workflows have the potential to help shift the balance back to the defenders by force multiplying cybersecurity professionals like never before.


 

Effectively powering SecOps workflows requires state-of-the-art reasoning capabilities and extensive current cybersecurity knowledge. Sec-Gemini v1 achieves this by combining Gemini’s advanced capabilities with near real-time cybersecurity knowledge and tooling. This combination allows it to achieve superior performance on key cybersecurity workflows, including incident root cause analysis, threat analysis, and vulnerability impact understanding.



We firmly believe that successfully pushing AI cybersecurity frontiers to decisively tilt the balance in favor of the defenders requires a strong collaboration across the cybersecurity community. This is why we are making Sec-Gemini v1 freely available to select organizations, institutions, professionals, and NGOs for research purposes.



Sec-Gemini v1 outperforms other models on key cybersecurity benchmarks as a result of its advanced integration of Google Threat Intelligence (GTI), OSV, and other key data sources. Sec-Gemini v1 outperforms other models on CTI-MCQ, a leading threat intelligence benchmark, by at least 11% (See Figure 1). It also outperforms other models by at least 10.5% on the CTI-Root Cause Mapping benchmark (See Figure 2):





Figure 1: Sec-Gemini v1 outperforms other models on the CTI-MCQ Cybersecurity Threat Intelligence benchmark.







Figure 2: Sec-Gemini v1 has outperformed other models in a Cybersecurity Threat Intelligence-Root Cause Mapping (CTI-RCM) benchmark that evaluates an LLM's ability to understand the nuances of vulnerability descriptions, identify vulnerabilities underlying root causes, and accurately classify them according to the CWE taxonomy.




Below is an example of the comprehensiveness of Sec-Gemini v1’s answers in response to key cybersecurity questions. First, Sec-Gemini v1 is able to determine that Salt Typhoon is a threat actor (not all models do) and provides a comprehensive description of that threat actor, thanks to its deep integration with Mandiant Threat intelligence data.









Next, in response to a question about the vulnerabilities in the Salt Typhoon description, Sec-Gemini v1 outputs not only vulnerability details (thanks to its integration with OSV data, the open-source vulnerabilities database operated by Google), but also contextualizes the vulnerabilities with respect to threat actors (using Mandiant data). With Sec-Gemini v1, analysts can understand the risk and threat profile associated with specific vulnerabilities faster.








If you are interested in collaborating with us on advancing the AI cybersecurity frontier, please request early access to Sec-Gemini v1 via this form.








Taming the Wild West of ML: Practical Model Signing with Sigstore

Posted by Mihai Maruseac, Google Open Source Security Team (GOSST)


In partnership with NVIDIA and HiddenLayer, as part of the Open Source Security Foundation, we are now launching the first stable version of our model signing library. Using digital signatures like those from Sigstore, we allow users to verify that the model used by the application is exactly the model that was created by the developers. In this blog post we will illustrate why this release is important from Google’s point of view.



With the advent of LLMs, the ML field has entered an era of rapid evolution. We have seen remarkable progress leading to weekly launches of various applications which incorporate ML models to perform tasks ranging from customer support, software development, and even performing security critical tasks.



However, this has also opened the door to a new wave of security threats. Model and data poisoning, prompt injection, prompt leaking and prompt evasion are just a few of the risks that have recently been in the news. Garnering less attention are the risks around the ML supply chain process: since models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: “can I trust this model?”



Since its launch, Google’s Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing. 



The ML supply chain

To understand the need for the model signing project, let’s look at the way ML powered applications are developed, with an eye to where malicious tampering can occur.



Applications that use advanced AI models are typically developed in at least three different stages. First, a large foundation model is trained on large datasets. Next, a separate ML team finetunes the model to make it achieve good performance on application specific tasks. Finally,  this fine-tuned model is embedded into an application.



The three steps involved in building an application that uses large language models.



These three stages are usually handled by different teams, and potentially even different companies, since each stage requires specialized expertise. To make models available from one stage to the next, practitioners leverage model hubs, which are repositories for storing models. Kaggle and HuggingFace are popular open source options, although internal model hubs could also be used.



This separation into stages creates multiple opportunities where a malicious user (or external threat actor who has compromised the internal infrastructure) could tamper with the model. This could range from just a slight alteration of the model weights that control model behavior, to injecting architectural backdoors — completely new model behaviors and capabilities that could be triggered only on specific inputs. It is also possible to exploit the serialization format and inject arbitrary code execution in the model as saved on disk — our whitepaper on AI supply chain integrity goes into more details on how popular model serialization libraries could be exploited. The following diagram summarizes the risks across the ML supply chain for developing a single model, as discussed in the whitepaper.



The supply chain diagram for building a single model, illustrating some supply chain risks (oval labels) and where model signing can defend against them (check marks)



The diagram shows several places where the model could be compromised. Most of these could be prevented by signing the model during training and verifying integrity before any usage, in every step: the signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model.



Sigstore for ML models

Signing models is inspired by code signing, a critical step in traditional software development. A signed binary artifact helps users identify its producer and prevents tampering after publication. The average developer, however, would not want to manage keys and rotate them on compromise.



These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore’s signing mechanism as the default approach for signing ML models.



Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.



Future goals

We can view model signing as establishing the foundation of trust in the ML ecosystem. We envision extending this approach to also include datasets and other ML-related artifacts. Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world. In an ideal world, an ML developer would not need to perform any code changes to the training code, while the framework itself would handle model signing and verification in a transparent manner.



If you are interested in the future of this project, join the OpenSSF meetings attached to the project. To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

New security requirements adopted by HTTPS certificate industry

By: Google
Posted by Chrome Root Program, Chrome Security Team

The Chrome Root Program launched in 2022 as part of Google’s ongoing commitment to upholding secure and reliable network connections in Chrome. We previously described how the Chrome Root Program keeps users safe, and described how the program is focused on promoting technologies and practices that strengthen the underlying security assurances provided by Transport Layer Security (TLS). Many of these initiatives are described on our forward looking, public roadmap named “Moving Forward, Together.

At a high-level, “Moving Forward, Together” is our vision of the future. It is non-normative and considered distinct from the requirements detailed in the Chrome Root Program Policy. It’s focused on themes that we feel are essential to further improving the Web PKI ecosystem going forward, complementing Chrome’s core principles of speed, security, stability, and simplicity. These themes include:

  • Encouraging modern infrastructures and agility
  • Focusing on simplicity
  • Promoting automation
  • Reducing mis-issuance
  • Increasing accountability and ecosystem integrity
  • Streamlining and improving domain validation practices
  • Preparing for a "post-quantum" world

Earlier this month, two “Moving Forward, Together” initiatives became required practices in the CA/Browser Forum Baseline Requirements (BRs). The CA/Browser Forum is a cross-industry group that works together to develop minimum requirements for TLS certificates. Ultimately, these new initiatives represent an improvement to the security and agility of every TLS connection relied upon by Chrome users.

If you’re unfamiliar with HTTPS and certificates, see the “Introduction” of this blog post for a high-level overview.

Multi-Perspective Issuance Corroboration

Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value’s presence has been published by the certificate requestor.

Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy (CITP) of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical, as it was demonstrated that attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million dollars of direct losses.

Multi-Perspective Issuance Corroboration (referred to as "MPIC") enhances existing domain control validation methods by reducing the likelihood that routing attacks can result in fraudulently issued certificates. Rather than performing domain control validation and authorization from a single geographic or routing vantage point, which an adversary could influence as demonstrated by security researchers, MPIC implementations perform the same validation from multiple geographic locations and/or Internet Service Providers. This has been observed as an effective countermeasure against ethically conducted, real-world BGP hijacks.

The Chrome Root Program led a work team of ecosystem participants, which culminated in a CA/Browser Forum Ballot to require adoption of MPIC via Ballot SC-067. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations.

We’d especially like to thank Henry Birge-Lee, Grace Cimaszewski, Liang Wang, Cyrill Krähenbühl, Mihir Kshirsagar, Prateek Mittal, Jennifer Rexford, and others from Princeton University for their sustained efforts in promoting meaningful web security improvements and ongoing partnership.

Linting

Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication.

Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security. Linting improves interoperability and helps CAs reduce the risk of non-compliance with industry standards (e.g., CA/Browser Forum TLS Baseline Requirements). Non-compliance can result in certificates being "mis-issued". Detecting these issues before a certificate is in use by a site operator reduces the negative impact associated with having to correct a mis-issued certificate.

There are numerous open-source linting projects in existence (e.g., certlint, pkilint, x509lint, and zlint), in addition to numerous custom linting projects maintained by members of the Web PKI ecosystem. “Meta” linters, like pkimetal, combine multiple linting tools into a single solution, offering simplicity and significant performance improvements to implementers compared to implementing multiple standalone linting solutions.

Last spring, the Chrome Root Program led ecosystem-wide experiments, emphasizing the need for linting adoption due to the discovery of widespread certificate mis-issuance. We later participated in drafting CA/Browser Forum Ballot SC-075 to require adoption of certificate linting. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process.

What’s next?

We recently landed an updated version of the Chrome Root Program Policy that further aligns with the goals outlined in “Moving Forward, Together.” The Chrome Root Program remains committed to proactive advancement of the Web PKI. This commitment was recently realized in practice through our proposal to sunset demonstrated weak domain control validation methods permitted by the CA/Browser Forum TLS Baseline Requirements. The weak validation methods in question are now prohibited beginning July 15, 2025.

It’s essential we all work together to continually improve the Web PKI, and reduce the opportunities for risk and abuse before measurable harm can be realized. We continue to value collaboration with web security professionals and the members of the CA/Browser Forum to realize a safer Internet. Looking forward, we’re excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography. We’ll have more to say about quantum-resistant PKI later this year.

❌