The banking trojan, which mainly targeted Brazil, Mexico, and Spain, blocked the victim’s screen, logged keystrokes, simulated mouse and keyboard activity, and displayed fake pop-up windows


This week, the United Nations convened member states to continue its years-long negotiations on the UN Cybercrime Treaty, titled “Countering the Use of Information and Communications Technologies for Criminal Purposes.” 



As more aspects of our lives intersect with the digital sphere, law enforcement around the world has increasingly turned to electronic evidence to investigate and disrupt criminal activity. Google takes the threat of cybercrime very seriously and dedicates significant resources to combating it. When governments send Google legal orders to disclose user data in connection with their investigations, we carefully review those orders to make sure they satisfy applicable laws, international norms, and Google’s policies. We also regularly report the number of these orders in our Transparency Report.



To ensure that transnational legal demands are issued consistent with the rule of law, we have long called for an international framework for digital evidence that includes robust due process protections, respects human rights (including the right to free expression), and aligns with existing international norms. This is particularly important in the case of transnational criminal investigations, where the legal protections in one jurisdiction may not align with those in others.



Such safeguards aren’t just important to ensuring free expression and human rights; they are also critical to protecting web security. Too often, as we know well from helping stand up the Security Researcher Legal Defense Fund, individuals working to advance cybersecurity for the public good end up facing criminal charges. The Cybercrime Treaty should not criminalize the work of legitimate cybersecurity researchers and penetration testers, which is designed to protect individual systems and the web as a whole.



UN Member States have an opportunity to strengthen global cybersecurity by adopting a treaty that encourages the criminalization of the most egregious and systemic activities — on which all parties can agree — while adopting a framework for sharing digital evidence that is transparent, grounded in the rule of law, based on pre-existing international frameworks like the Universal Declaration of Human Rights, and aligned with principles of necessity and proportionality. At the same time, Member States should avoid attempts to criminalize activities that raise significant freedom of expression issues, or that actually undercut the treaty’s goal of reducing cybercrime. That will require strengthening critical guardrails and protections.



We urge Member States to heed calls from civil society groups to address critical gaps in the Treaty and revise the text to protect users and security professionals — not endanger the security of the web.  

ESET researchers discovered several Android apps carrying VajraSpy, a RAT used by the Patchwork APT group

The AI world moves fast, so we’ve been hard at work making sure security keeps pace with recent advancements. One of our approaches, in alignment with Google’s Secure AI Framework (SAIF), is using AI itself to automate and streamline routine and manual security tasks, including fixing security bugs. Last year we wrote about our experiences using LLMs to expand vulnerability testing coverage, and we’re excited to share some updates.



Today, we’re releasing our fuzzing framework as a free, open source resource that researchers and developers can use to improve fuzzing’s bug-finding abilities. We’ll also show you how we’re using AI to speed up the bug patching process. By sharing these experiences, we hope to spark new ideas and drive innovation for stronger ecosystem security.



Update: AI-powered vulnerability discovery

Last August, we announced our framework to automate manual aspects of fuzz testing (“fuzzing”) that often hinder open source maintainers from fuzzing their projects effectively. We used LLMs to write project-specific code to boost fuzzing coverage and find more vulnerabilities. Our initial results on a subset of projects in our free OSS-Fuzz service were very promising, with code coverage increasing by 30% in one example. Since then, we’ve expanded our experiments to more than 300 OSS-Fuzz C/C++ projects, resulting in significant coverage gains across many of the project codebases. We’ve also improved our prompt generation and build pipelines, which has increased code line coverage by up to 29% in 160 projects.
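
To make that concrete, here’s a minimal sketch of what LLM-driven fuzz target generation looks like. It is illustrative only, not the framework’s actual code: the query_model callable, the prompt wording, and the output file name are assumptions standing in for whatever model and build integration you plug in.

```python
# Illustrative sketch of LLM-driven fuzz target generation (not the actual
# open source framework). `query_model` stands in for any LLM client, e.g. a
# Vertex AI or locally hosted model, and is a hypothetical callable.
from pathlib import Path

PROMPT_TEMPLATE = """Write a LibFuzzer fuzz target for the C/C++ function below.
Reply with a complete C++ file defining:
  extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
and exercise the function with the fuzzer-provided bytes.

Function under test:
{signature}

Example usage taken from the project's tests:
{example_usage}
"""

def build_prompt(signature: str, example_usage: str) -> str:
    """Assemble a project-specific prompt from code snippets extracted earlier."""
    return PROMPT_TEMPLATE.format(signature=signature, example_usage=example_usage)

def generate_fuzz_target(signature: str, example_usage: str, query_model) -> Path:
    """Ask the model for a candidate harness and write it out for a build attempt."""
    candidate_source = query_model(build_prompt(signature, example_usage))
    out = Path("candidate_fuzz_target.cc")
    out.write_text(candidate_source)
    return out  # next: compile it, run it, and keep it only if coverage improves
```

In a real pipeline the candidate harness is then compiled with the project’s fuzzing build, and harnesses that fail to build or add no coverage are discarded; build errors can also be fed back to the model for another attempt.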

How does that translate to tangible security improvements? So far, the expanded fuzzing coverage offered by LLM-generated improvements allowed OSS-Fuzz to discover two new vulnerabilities in cJSON and libplist, two widely used projects that had already been fuzzed for years. As always, we reported the vulnerabilities to the project maintainers for patching. Without the completely LLM-generated code, these two vulnerabilities could have remained undiscovered and unfixed indefinitely. 

And more: AI-powered vulnerability fixing

Fuzzing is fantastic for finding bugs, but for security to improve, those bugs also need to be patched. It’s long been an industry-wide struggle to find the engineering hours needed to patch open bugs at the pace they are uncovered, and triaging and fixing bugs takes a significant manual toll on project maintainers. With continued improvements in using LLMs to find more bugs, we need to keep pace by creating similarly automated solutions to help fix those bugs. We recently announced an experiment doing exactly that: building an automated pipeline that intakes vulnerabilities (such as those caught by fuzzing) and prompts LLMs to generate fixes and test them before selecting the best for human review.
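
As a rough illustration of that flow, the sketch below takes a crash report, asks a model for several candidate patches, and keeps only the candidates that still build and no longer crash on the reproducer, leaving a short list for human review. This is a sketch of the idea rather than the pipeline described in our paper: query_model and apply_patch are hypothetical helpers, and the make-based build step is a placeholder for a project’s real build and test setup.

```python
# Illustrative sketch of an automated patching loop; query_model and
# apply_patch are hypothetical helpers, not a real library API.
import subprocess
from dataclasses import dataclass

@dataclass
class CrashReport:
    crash_type: str       # e.g. "heap-buffer-overflow"
    stack_trace: str
    reproducer_path: str  # input that triggers the crash

def build_and_reproduce(repo_dir: str, reproducer_path: str) -> bool:
    """Return True if the project still builds and the reproducer no longer crashes."""
    build = subprocess.run(["make", "-C", repo_dir], capture_output=True)
    if build.returncode != 0:
        return False
    run = subprocess.run([f"{repo_dir}/fuzz_target", reproducer_path],
                         capture_output=True)
    return run.returncode == 0

def propose_fixes(report: CrashReport, source_snippet: str, query_model,
                  num_candidates: int = 5) -> list:
    """Prompt the model several times for candidate patches as unified diffs."""
    prompt = (
        f"The following code has a {report.crash_type} bug.\n"
        f"Stack trace:\n{report.stack_trace}\n\n"
        f"Code:\n{source_snippet}\n\n"
        "Reply with a unified diff that fixes the bug without changing behavior."
    )
    return [query_model(prompt) for _ in range(num_candidates)]

def triage(report: CrashReport, source_snippet: str, repo_dir: str,
           query_model, apply_patch) -> list:
    """Return the candidate patches that build and stop the crash, for human review."""
    surviving = []
    for diff in propose_fixes(report, source_snippet, query_model):
        apply_patch(repo_dir, diff)                    # hypothetical helper
        if build_and_reproduce(repo_dir, report.reproducer_path):
            surviving.append(diff)
        apply_patch(repo_dir, diff, revert=True)       # undo before trying the next
    return surviving
```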



This AI-powered patching approach resolved 15% of the targeted bugs, leading to significant time savings for engineers. This technology should be applicable to most or all bug categories and stages throughout the software development process. We’re optimistic that this research marks a promising step towards harnessing AI to help ensure more secure and reliable software.



Try it out

Since we’ve now open sourced our framework to automate manual aspects of fuzzing, any researcher or developer can experiment with their own prompts to test the effectiveness of fuzz targets generated by LLMs (including Google’s Vertex AI or their own fine-tuned models) and measure the results against OSS-Fuzz C/C++ projects. We also hope to encourage research collaborations and to continue seeing other work inspired by our approach, such as Rust fuzz target generation.

If you’re interested in using LLMs to patch bugs, be sure to read our paper on building an AI-powered patching pipeline. You’ll find a summary of our own experiences, some unexpected data about LLMs’ abilities to patch different types of bugs, and guidance for building similar pipelines in your own organization.

An AI chatbot inadvertently kindles a cybercrime boom, ransomware bandits plunder organizations without deploying ransomware, and a new botnet enslaves Android TV boxes


Helping Pixel owners upgrade to the easier, safer way to sign in

Your phone contains a lot of your personal information, from financial data to photos. Pixel phones are designed to help protect you and your data, and make security and privacy as easy as possible. This is why the Pixel team has been especially excited about passkeys—the easier, safer alternative to passwords.

Passkeys are safer because they’re unique to each account, and are more resistant against online attacks such as phishing. They’re easier to use because there’s nothing for you to remember: when it’s time to sign in, using a passkey is as simple as unlocking your device with your face or fingerprint, or your PIN/pattern/password.
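
That phishing resistance comes from the public-key design passkeys build on: your device keeps a private key for each account and signs a server challenge that is bound to the real site’s identity, so a signature captured by a look-alike site can’t be replayed against the legitimate one. The snippet below is a deliberately simplified model of that idea, not the full WebAuthn protocol (which adds client data, authenticator data, and more); it uses the Python cryptography package purely for illustration.

```python
# Highly simplified model of why passkeys resist phishing: the signature is
# bound to the relying party's identity, so it only verifies for the real site.
# Real WebAuthn involves more structure (client data JSON, authenticator data).
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# "Passkey": a per-account key pair kept on the device, unlocked by biometrics or PIN.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()   # registered at sign-up

def device_sign(challenge: bytes, rp_id: str) -> bytes:
    """The device signs the server's challenge together with the site identity."""
    return device_private_key.sign(challenge + rp_id.encode(),
                                   ec.ECDSA(hashes.SHA256()))

def server_verify(signature: bytes, challenge: bytes, expected_rp_id: str) -> bool:
    """The real site only accepts signatures bound to its own identity."""
    try:
        server_public_key.verify(signature, challenge + expected_rp_id.encode(),
                                 ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)
good = device_sign(challenge, "example.com")
phished = device_sign(challenge, "examp1e-login.com")    # user tricked on a fake site
print(server_verify(good, challenge, "example.com"))     # True
print(server_verify(phished, challenge, "example.com"))  # False
```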

Google is working to accelerate passkey adoption. We’ve launched support for passkeys on Google platforms such as Android and Chrome, and recently we announced that we’re making passkeys a default option across personal Google Accounts. We’re also working with our partners across the industry to make passkeys available on more websites and apps.

Recently, we took things a step further. As part of last December’s Pixel Feature Drop, we introduced a new feature to Google Password Manager: passkey upgrades. With this new feature, Google Password Manager will let you discover which of your accounts support passkeys, and help you upgrade with just a few taps.

This new passkey upgrade experience is now available on Pixel phones (starting from Pixel 5a) as well as Pixel Tablet. Google Password Manager will incorporate these updates for other platforms in the future.

Best of all, today we’re happy to announce that we’ve teamed up with Adobe, Best Buy, DocuSign, eBay, Kayak, Money Forward, Nintendo, PayPal, Uber and Yahoo! Japan (with TikTok coming soon) to help bring you this easy passkey upgrade experience and usher you into the passwordless future.

If you have an account with one of these early launch partners, Google Password Manager on Pixel will helpfully guide you to the exact location on the partner’s website or app where you can upgrade to a passkey. There’s no need to manually hunt for the option in account settings.

And because the technology that makes this possible is open, any website or app, as well as any other password manager, can leverage it to help their users upgrade to passkeys on accounts that support them. It’s all part of Google’s commitment to help make signing in easier and safer.
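
One open standard designed for exactly this kind of discovery is the FIDO Alliance’s well-known URL for passkey endpoints, which lets a site advertise where passkeys can be created and managed. The sketch below assumes that mechanism and shows how any client could use the published spec; it is not a description of this feature’s internals.

```python
# Sketch: discovering where a site lets users create ("enroll") a passkey,
# using the FIDO Alliance's well-known passkey-endpoints URL. Treating this as
# the mechanism behind the Pixel upgrade flow is an assumption, not a claim.
import json
import urllib.request

def get_passkey_endpoints(domain: str) -> dict:
    """Fetch https://<domain>/.well-known/passkey-endpoints and return its JSON."""
    url = f"https://{domain}/.well-known/passkey-endpoints"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

if __name__ == "__main__":
    endpoints = get_passkey_endpoints("example.com")
    # Expected shape per the spec: {"enroll": "...", "manage": "..."}
    print("Create a passkey at:", endpoints.get("enroll"))
    print("Manage passkeys at:", endpoints.get("manage"))
```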

ESET provided technical analysis, statistical information, known C&C servers and was able to get a glimpse of the victimology

In today’s digitally interconnected world, advanced cyber capabilities have become an exceptionally potent and versatile tool of tradecraft for nation-states and criminals alike

Blindly trusting your partners and suppliers on their security posture is not sustainable – it’s time to take control through effective supplier risk management

ESET researchers have discovered NSPX30, a sophisticated implant used by a new China-aligned APT group we have named Blackwood