Executive Summary:

The security of open source software has rightfully garnered the industry’s attention, but solutions require consensus about the challenges and cooperation in the execution. The problem is complex and there are many facets to cover: supply chain, dependency management, identity, and build pipelines. Solutions come faster when the problem is well-framed; we propose a framework (“Know, Prevent, Fix”) for how the industry can think about vulnerabilities in open source and concrete areas to address first, including:

  • Consensus on metadata and identity standards: We need consensus on fundamentals to tackle these complex problems as an industry. Agreements on metadata details and identities will enable automation, reduce the effort required to update software, and minimize the impact of vulnerabilities.
  • Increased transparency and review for critical software: For software that is critical to security, we need to agree on development processes that ensure sufficient review, avoid unilateral changes, and transparently lead to well-defined, verifiable official versions.

The following framework and goals are proposed with the intention of sparking industry-wide discussion and progress on the security of open source software.


Due to recent events, the software world has gained a deeper understanding of the real risk of supply-chain attacks. Open source software should be less risky on the security front, as all of the code and dependencies are in the open and available for inspection and verification. And while that is generally true, it assumes people are actually looking. With so many dependencies, it is impractical to monitor them all, and many open source packages are not well maintained.

It is common for a program to depend, directly or indirectly, on thousands of packages and libraries. For example, Kubernetes now depends on about 1,000 packages. Open source likely makes more use of dependencies than closed source, and from a wider range of suppliers; the number of distinct entities that need to be trusted can be very high. This makes it extremely difficult to understand how open source is used in products and what vulnerabilities might be relevant. There is also no assurance that what is built matches the source code.

Taking a step back, although supply-chain attacks are a risk, the vast majority of vulnerabilities are mundane and unintentional—honest errors made by well-intentioned developers. Furthermore, bad actors are more likely to exploit known vulnerabilities than to find their own: it’s just easier. As such, we must focus on making fundamental changes to address the majority of vulnerabilities, as doing so will move the entire industry far along in addressing the complex cases as well, including supply-chain attacks.

Few organizations can verify all of the packages they use, let alone all of the updates to those packages. In the current landscape, tracking these packages takes a non-trivial amount of infrastructure, and significant manual effort. At Google, we have those resources and go to extraordinary lengths to manage the open source packages we use—including keeping a private repo of all open source packages we use internally—and it is still challenging to track all of the updates. The sheer flow of updates is daunting. A core part of any solution will be more automation, and this will be a key theme for our open source security work in 2021 and beyond.

Because this is a complex problem that needs industry cooperation, our purpose here is to focus the conversation around concrete goals. Google co-founded the OpenSSF to be a focal point for this collaboration, but to make progress, we need participation across the industry, and agreement on what the problems are and how we might address them. To get the discussion started, we present one way to frame this problem, and a set of concrete goals that we hope will accelerate industry-wide solutions.

We suggest framing the challenge as three largely independent problem areas, each with concrete objectives:

  1. Know about the vulnerabilities in your software
  2. Prevent the addition of new vulnerabilities, and
  3. Fix or remove vulnerabilities.

A related but separate problem, which is critical to securing the supply chain, is improving the security of the development process. We’ve outlined the challenges of this problem and proposed goals in the fourth section, Prevention for Critical Software.

Know your Vulnerabilities

Knowing your vulnerabilities is harder than expected for many reasons. Although there are mechanisms for reporting vulnerabilities, it is hard to know if they actually affect the specific versions of software you are using.

Goal: Precise Vulnerability Data

First, it is crucial to capture precise vulnerability metadata from all available data sources. For example, knowing which version introduced a vulnerability helps determine if one’s software is affected, and knowing when it was fixed results in accurate and timely patching (and a reduced window for potential exploitation). Ideally, this triaging workflow should be automated.

Second, most vulnerabilities are in your dependencies, rather than the code you write or control directly. Thus, even when your code is not changing, there can be a constant churn in your vulnerabilities: some get fixed and others get added.[1]

Goal: Standard Schema for Vulnerability Databases

Infrastructure and industry standards are needed to track and maintain open source vulnerabilities, understand their consequences, and manage their mitigations. A standard vulnerability schema would allow common tools to work across multiple vulnerability databases and simplify the task of tracking, especially when vulnerabilities touch multiple languages or subsystems.
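
To make "standard schema" less abstract, here is a rough sketch of the kind of record such a schema might contain, expressed as a C++ struct. The field names and layout are hypothetical, loosely in the spirit of the precise introduced/fixed data discussed above, and are not a proposal for the actual format:

    #include <string>
    #include <vector>

    // Hypothetical vulnerability record: one entry per advisory, capturing the
    // precise version ranges needed to decide whether a given build is affected.
    struct AffectedRange {
        std::string introduced;  // first version containing the bug, e.g. "1.0.0"
        std::string fixed;       // first version containing the fix, e.g. "1.4.2"
    };

    struct VulnerabilityRecord {
        std::string id;                       // advisory identifier (hypothetical)
        std::string ecosystem;                // e.g. "npm", "PyPI", "Go"
        std::string package_name;             // package the advisory applies to
        std::vector<AffectedRange> affected;  // machine-readable affected ranges
        std::vector<std::string> references;  // advisory URLs, fix commits
    };

A common machine-readable format along these lines is what would let a single tool query many vulnerability databases at once.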

Goal: Accurate Tracking of Dependencies

Better tooling is needed to understand quickly what software is affected by a newly discovered vulnerability, a problem made harder by the scale and dynamic nature of large dependency trees. Current practices also often make it difficult to predict exactly what versions are used without actually doing an installation, as the software for version resolution is only available through the installer.
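
The query itself is simple once the resolved graph exists; the hard part, as noted above, is obtaining that graph accurately. The toy sketch below assumes the graph is already available and answers the narrow question "which of my direct dependencies transitively pull in the vulnerable package?":

    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Resolved dependency graph: package -> packages it depends on directly.
    using Graph = std::map<std::string, std::vector<std::string>>;

    // Returns true if `pkg` depends (directly or transitively) on `target`.
    bool DependsOn(const Graph& g, const std::string& pkg,
                   const std::string& target, std::set<std::string>& visited) {
        if (pkg == target) return true;
        if (!visited.insert(pkg).second) return false;  // already explored
        auto it = g.find(pkg);
        if (it == g.end()) return false;
        for (const auto& dep : it->second) {
            if (DependsOn(g, dep, target, visited)) return true;
        }
        return false;
    }

    // Which of your direct dependencies are affected by `vulnerable`?
    std::vector<std::string> AffectedRoots(const Graph& g,
                                           const std::vector<std::string>& roots,
                                           const std::string& vulnerable) {
        std::vector<std::string> affected;
        for (const auto& root : roots) {
            std::set<std::string> visited;
            if (DependsOn(g, root, vulnerable, visited)) affected.push_back(root);
        }
        return affected;
    }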

Prevent New Vulnerabilities

It would be ideal to prevent vulnerabilities from ever being created, and although testing and analysis tools can help, prevention will always be a hard problem. Here we focus on two specific aspects:

  • Understanding risks when deciding on a new dependency
  • Improving development processes for critical software

Goal: Understand the Risks for New Dependencies

The first category is essentially knowing about vulnerabilities at the time you decide to use a package. Taking on a new dependency has inherent risk and it needs to be an informed decision. Once you have a dependency, it generally becomes harder to remove over time.

Knowing about vulnerabilities is a great start, but there is more that we can do.

Many vulnerabilities arise from lack of adherence to security best practices in software development processes. Are all contributors using two-factor authentication (2FA)? Does the project have continuous integration set up and running tests? Is fuzzing integrated? These are the types of security checks that would help consumers understand the risks they’re taking on with new dependencies. Packages with a low “score” warrant closer review and a plan for remediation.

The recently announced Security Scorecards project from OpenSSF attempts to generate these data points in a fully automated way. Using scorecards can also help defend against prevalent typosquatting attacks (malevolent packages with names similar to popular packages), since they would score much lower and fail many security checks.
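
The actual Scorecards project defines its own checks and scoring; the sketch below only illustrates the general idea of turning automated checks into a rough risk signal. The check names and weights here are invented for illustration:

    #include <iostream>
    #include <string>
    #include <vector>

    // One automated check against a project, e.g. "maintainers use 2FA",
    // "CI runs tests on every change", "fuzzing is integrated".
    struct Check {
        std::string name;
        bool passed;
        int weight;  // hypothetical importance weight
    };

    // Collapse the checks into a 0-10 score; low scores warrant closer review.
    double Score(const std::vector<Check>& checks) {
        int total = 0, earned = 0;
        for (const auto& c : checks) {
            total += c.weight;
            if (c.passed) earned += c.weight;
        }
        return total == 0 ? 0.0 : 10.0 * earned / total;
    }

    int main() {
        std::vector<Check> checks = {
            {"contributors-use-2fa", true, 3},
            {"ci-runs-tests", true, 2},
            {"fuzzing-integrated", false, 2},
        };
        std::cout << "score: " << Score(checks) << " / 10\n";  // prints ~7.1
    }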

Improving the development processes for critical software is related to vulnerability prevention, but deserves its own discussion further down in our post.

Fix or Remove Vulnerabilities

The general problem of fixing vulnerabilities is beyond our scope, but there is much we can do for the specific problem of managing vulnerabilities in software dependencies. Today there is little help on this front, but as we improve precision it becomes worthwhile to invest in new processes and tooling.

One option of course is to fix the vulnerability directly. If you can do this in a backwards-compatible way, then the fix is available for everyone. But a challenge is that you are unlikely to have expertise in the problem or the direct ability to make changes. Fixing a vulnerability also assumes the software maintainers are aware of the issue, and have the knowledge and resources for vulnerability disclosure.

Conversely, if you simply remove the dependency that contains the vulnerability, then it is fixed for you and those that import or use your software, but not for anyone else. This is a change that is under your direct control.

These scenarios represent the two ends of the chain of dependencies between your software and the vulnerability, but in practice there can be many intervening packages. The general hope is that someone along that dependency chain will fix it. Unfortunately, fixing a link is not enough: Every link of the dependency chain between you and the vulnerability needs to be updated before your software will be fixed. Each link must include the fixed version of the thing below it to purge the vulnerability. Thus, the updates need to be done from the bottom up, unless you can eliminate the dependency altogether, which may require similar heroics and is rarely possible—but is the best solution when it is.
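
To make the bottom-up requirement concrete, here is a toy sketch that takes one known dependency path from your package down to the vulnerable one and lists the releases in the order they must happen. The package names are invented; "parser" stands in for the package containing the vulnerability:

    #include <iostream>
    #include <string>
    #include <vector>

    // One dependency path, ordered from your package down to the vulnerable
    // one, e.g. my-app -> web-framework -> http-lib -> parser. Each link can
    // only pick up the fix once the link below it has released a fixed
    // version, so the releases have to land in reverse (bottom-up) order.
    std::vector<std::string> UpdateOrder(const std::vector<std::string>& path) {
        return std::vector<std::string>(path.rbegin(), path.rend());
    }

    int main() {
        std::vector<std::string> path = {"my-app", "web-framework", "http-lib", "parser"};
        for (const auto& pkg : UpdateOrder(path)) {
            std::cout << "release a fixed or updated version of: " << pkg << "\n";
        }
    }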

Goal: Understand your Options to Remove Vulnerabilities

Today, we lack clarity on this process: what progress has been made by others and what upgrades should be applied at what level? And where is the process stuck? Who is responsible for fixing the vulnerability itself? Who is responsible for propagating the fix?

Goal: Notifications to Speed Repairs

Eventually, your dependencies will be fixed and you can locally upgrade to the new versions. Knowing when this happens is an important goal, as it shortens the window of exposure to vulnerabilities. We also need a notification system for the actual discovery of vulnerabilities; often new vulnerabilities represent latent problems that are newly discovered even though the actual code has not changed (such as the 10-year-old vulnerability recently found in the Unix utility sudo). For large projects, most such issues will arise in the indirect dependencies. Today, we lack the precision required to do notification well, but as we improve vulnerability precision and metadata (as above), we should also drive notification.

So far, we have only described the easy case: a sequence of upgrades that are all backwards compatible, implying that the behavior is the same except for the absence of the vulnerability.

In practice, an upgrade is often not backward compatible, or is blocked by restrictive version requirements. These issues mean that updating a package deep in the dependency tree must cause some churn, or at least requirement updates, in the things above it. The situation often arises when the fix is made to the latest version, say 1.3, but your software or intervening packages request 1.2. We see this situation often, and it remains a big challenge that is made even harder by the difficulty of getting owners to update intervening packages. Moreover, if you use a package in a thousand places, which is not crazy for a big enterprise, you might need to go through the update process a thousand times.
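
A toy sketch of why a fix released in 1.3 does not reach software that pins 1.2: with a "stay on 1.2.x" style constraint, the fixed version is simply never selected until someone updates the requirement. Real package managers use richer constraint languages; this only illustrates the mechanic:

    #include <iostream>
    #include <sstream>
    #include <string>

    struct Version { int major = 0, minor = 0, patch = 0; };

    Version Parse(const std::string& s) {
        Version v;
        char dot;
        std::istringstream in(s);
        in >> v.major >> dot >> v.minor >> dot >> v.patch;
        return v;
    }

    // Toy "~1.2.0" constraint: accept any 1.2.x at or above the pin, nothing newer.
    bool SatisfiesTilde(const Version& pinned, const Version& candidate) {
        return candidate.major == pinned.major &&
               candidate.minor == pinned.minor &&
               candidate.patch >= pinned.patch;
    }

    int main() {
        Version pinned = Parse("1.2.0");  // what an intervening package requests
        Version fixed  = Parse("1.3.0");  // where the security fix landed
        std::cout << std::boolalpha
                  << "fix reachable without changing the requirement: "
                  << SatisfiesTilde(pinned, fixed) << "\n";  // prints false
    }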

Goal: Fix the Widely Used Versions

It’s also important to fix the vulnerability in the older versions, especially those in heavy use. Such repair is common practice for the subset of software that has long-term support, but ideally all widely used versions should be fixed, especially for security risks.

Automation could help: given a fix for one version, perhaps we can generate good candidate fixes for other versions. This process is sometimes done by hand today, but if we can make it significantly easier, more versions will actually get patched, and there will be less work to do higher in the chain.

To summarize, we need ways to make fixing vulnerabilities, especially in dependencies, both easier and more timely. We need to increase the chance that there is a fix for widely used versions and not just for the latest version, which is often hard to adopt due to the other changes it includes.

Finally, there are many other options on the “fixing” front, including various kinds of mitigations, such as avoiding certain methods, or limiting risk through sandboxes or access controls. These are important practical options that need more discussion and support.

Prevention for Critical Software

The framing above applies broadly to vulnerabilities, regardless of whether they are due to bad actors or are merely innocent mistakes. Although the suggested goals cover most vulnerabilities, they are not sufficient to prevent malicious behavior. To have a meaningful impact on prevention for bad actors, including supply-chain attacks, we need to improve the processes used for development.

This is a big task, and currently unrealistic for the majority of open source. Part of the beauty of open source is its lack of constraints on the process, which encourages a wide range of contributors. However, that flexibility can hinder security considerations. We want contributors, but we cannot expect everyone to be equally focused on security. Instead, we must identify critical packages and protect them. Such critical packages must be held to a range of higher development standards, even though that might add developer friction.

Goal: Define Criteria for “Critical” Open Source Projects that Merit Higher Standards

It is important to identify the “critical” packages that we all depend upon and whose compromise would endanger critical infrastructure or user privacy. These packages need to be held to higher standards, some of which we outline below.

It is not obvious how to define “critical” and the definition will likely expand over time. Beyond obvious software, such as OpenSSL or key cryptographic libraries, there are widely used packages whose sheer reach makes them worth protecting. We started the Criticality Score project to brainstorm this problem with the community, as well as collaborating with Harvard on the Open Source Census efforts.

Goal: No Unilateral Changes to Critical Software

One principle that we follow across Google is that changes should not be unilateral—that is, every change involves at least an author and a reviewer or approver. The goal is to limit what an adversary can do on their own—we need to make sure someone is actually looking at the changes. To do this well for open source is actually quite a bit harder than just within a single company, which can have strong authentication and enforce code reviews and other checks.

Avoiding unilateral changes can be broken down into two sub-goals:

Goal: Require Code Review for Critical Software

Besides being a great process for improving code, reviews ensure that at least one person other than the author is looking at every change. Code reviews are a standard practice for all changes within Google.

Goal: Changes to Critical Software Require Approval by Two Independent Parties

To really achieve the “someone is looking” goal, we need the reviewer to be independent from the contributor. And for critical changes, we probably want more than one independent review. We need to sort out what counts as “independent” review, of course, but the idea of independence is fundamental to reviews in most industries.

Goal: Authentication for Participants in Critical Software

Any notion of independence also implies that you know the actors—an anonymous actor cannot be assumed to be independent or trustworthy. Today, we essentially have pseudonyms: the same person uses an identity repeatedly and thus can have a reputation, but we don’t always know the individual’s trustworthiness. This leads to a range of subgoals:

Goal: For Critical Software, Owners and Maintainers Cannot be Anonymous

Attackers like to have anonymity. There have been past supply-chain attacks where attackers capitalized on anonymity and worked their way through package communities to become maintainers, without anyone realizing the “new maintainer” had malicious intent (compromised source code was eventually injected upstream). To mitigate this risk, our view is that owners and maintainers of critical software must not be anonymous.

It is conceivable that contributors, unlike owners and maintainers, could be anonymous, but only if their code has passed multiple reviews by trusted parties.

It is also conceivable that we could have “verified” identities, in which a trusted entity knows the real identity, but for privacy reasons the public does not. This would enable decisions about independence as well as prosecution for illegal behavior.

Goal: Strong Authentication for Contributors of Critical Software

Malicious actors look for easy attack vectors, so phishing attacks and other forms of theft related to credentials are common. One obvious improvement would be the required use of two-factor authentication, especially for owners and maintainers.

Goal: A Federated Model for Identities

To continue the inclusive nature of open source, we need to be able to trust a wide range of identities, but still with verified integrity. This implies a federated model for identities, perhaps similar to how we support federated SSL certificates today—a range of groups can generate valid certificates, but with strong auditing and mutual oversight.

Discussions on this topic are starting to take place in the OpenSSF’s Digital Identity Attestation Working Group.

Goal: Notification for Changes in Risk

We should extend notifications to cover changes in risk. The most obvious is ownership changes, which can be a prelude to new attacks (such as the recent NPM event-stream compromise). Other examples include discovery of stolen credentials, collusion, or other bad actor behavior.

Goal: Transparency for Artifacts

It is common to use secure hashes to detect if an artifact has arrived intact, and digital signatures to prove authenticity. Adding “transparency” means that these attestations are logged publicly and thus document what was intended. In turn, external parties can monitor the logs for fake versions even if users are unaware. Going a step further, when credentials are stolen, we can know what artifacts were signed using those credentials and work to remove them. This kind of transparency, including the durable public logs and the third-party monitoring, has been used to great success for SSL certificates, and we have proposed one way to do this for package managers. Knowing you have the right package or binary is similar to knowing you are visiting the real version of a web site.
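
The "intact artifact" half of this is already routine; what transparency adds is that the expected digest comes from a signed, publicly logged attestation rather than from the same server that served the download. Here is a minimal sketch of the digest check only, using OpenSSL's SHA256 function (link with -lcrypto); the signature and log-inclusion checks are deliberately elided:

    #include <openssl/sha.h>

    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>

    // Hex-encode a raw digest so it can be compared with a published value.
    std::string ToHex(const unsigned char* data, size_t len) {
        std::string out;
        char buf[3];
        for (size_t i = 0; i < len; ++i) {
            std::snprintf(buf, sizeof(buf), "%02x", data[i]);
            out += buf;
        }
        return out;
    }

    // True if the artifact's SHA-256 digest matches the expected value, which
    // under a transparency scheme would come from a publicly logged attestation.
    bool ArtifactMatches(const std::string& path, const std::string& expected_hex) {
        std::ifstream in(path, std::ios::binary);
        std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(in)),
                                         std::istreambuf_iterator<char>());
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(bytes.data(), bytes.size(), digest);
        return ToHex(digest, SHA256_DIGEST_LENGTH) == expected_hex;
    }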

Goal: Trust the Build Process

Ken Thompson’s Turing Award lecture famously demonstrated in 1984 that authentic source code alone is not enough, and recent events have shown this attack is a real threat. How do you trust your build system? All of its components must be trusted and verified through a continuous process of building trust.

Reproducible builds help—there is a deterministic outcome for the build and we can thus verify that we got it right—but are harder to achieve due to ephemeral data (such as timestamps) ending up in the release artifact. And safe reproducible builds require verification tools, which in turn must be built verifiably and reproducibly, and so on. We must construct a network of trusted tools and build products.
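
One widely used convention for taming a specific source of irreproducibility is SOURCE_DATE_EPOCH, from the Reproducible Builds effort: when the environment variable is set, tools embed that timestamp instead of the current time, so two builds of the same source can produce identical bytes. A minimal sketch of a build tool honoring it:

    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    #include <string>

    // The timestamp a build tool should embed in its output: the value of
    // SOURCE_DATE_EPOCH if set (reproducible), otherwise the current time.
    std::time_t BuildTimestamp() {
        if (const char* epoch = std::getenv("SOURCE_DATE_EPOCH")) {
            return static_cast<std::time_t>(std::stoll(epoch));
        }
        return std::time(nullptr);  // non-reproducible fallback
    }

    int main() {
        std::cout << "embedding build timestamp: " << BuildTimestamp() << "\n";
    }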

Trust in both the artifacts and the tools can be established via “delegation”, through a variant of the transparency process described above called binary authorization. Internally, the Google build system signs all artifacts and produces a manifest that ties each artifact to the source code it was built from. For open source, one or more trusted agents could run the build as a service, signing the artifact to prove that they are accountable for its integrity. This kind of ecosystem should exist and mostly needs awareness and some agreements on the format of attestations, so that we can automate the processes securely.
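
The exact format of those attestations is precisely what still needs agreement; the struct below is only a hypothetical sketch of the kind of statement a trusted builder might sign and publish, tying an output artifact back to the source it was built from:

    #include <string>
    #include <vector>

    // Hypothetical build attestation, signed by the builder and logged publicly.
    struct BuildAttestation {
        std::string artifact_sha256;           // digest of the produced artifact
        std::string source_repo;               // repository that was built
        std::string source_commit;             // exact revision that was built
        std::string builder_identity;          // the trusted build service
        std::vector<std::string> build_flags;  // enough detail to reproduce the build
        std::string signature;                 // builder's signature over the above
    };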

The actions in this section are great for software in general, and are essentially in use today within Google, but they are heavier weight than usual for open source. Our hope is that by focusing on the subset of software that is critical, we can achieve these goals at least for that set. As the tooling and automation get better, these goals will become easier to adopt more widely.

Summary

The nature of open source requires that we solve problems through consensus and collaboration. For complex topics such as vulnerabilities, this implies focused discussion around the key issues. We presented one way to frame this discussion, and defined a set of goals that we hope will accelerate industry-wide discourse and the ultimate solutions. The first set of goals applies broadly to vulnerabilities and is really about enabling automation and reducing risk and toil.

However, these goals are not enough in the presence of adversaries or to prevent “supply chain” attacks. Thus we propose a second set of goals for critical software. The second set is more onerous and therefore will meet some resistance, but we believe the extra constraints are fundamental for security. The intention is to define collectively the set of “critical” software packages, and apply these higher standards only to this set.

Although we have various opinions on how to meet both sets of goals, we are but one voice in a space where consensus and sustainable solutions matter most of all. We look forward to this discussion, to promoting the best ideas, and eventually to solutions that both strengthen and streamline the security of open source that we all depend on.

Notes


  1. Ideally, depended-upon versions should be stable absent an explicit upgrade, but behavior varies depending on the packaging system. Two that aim for stability rather than fast upgrades are Go Modules and NuGet, both of which by default install upgrades only when the requirements are updated; the dependencies might be wrong, but they only change with explicit updates. 




The Android platform team is committed to securing Android for every user across every device. In addition to monthly security updates to patch vulnerabilities reported to us through our Vulnerability Rewards Program (VRP), we also proactively architect Android to protect against undiscovered vulnerabilities through hardening measures such as applying compiler-based mitigations and improving sandboxing. This post focuses on the decision-making process that goes into these proactive measures: in particular, how we choose which hardening techniques to deploy and where they are deployed. As device capabilities vary widely within the Android ecosystem, these decisions must be made carefully, guided by data available to us to maximize the value to the ecosystem as a whole.

The overall approach to Android Security is multi-pronged and leverages several principles and techniques to arrive at data-guided solutions to make future exploitation more difficult. In particular, when it comes to hardening the platform, we try to answer the following questions:

  • What data are available and how can they guide security decisions?
  • What mitigations are available, how can they be improved, and where should they be enabled?
  • What are the deployment challenges of particular mitigations and what tradeoffs are there to consider?

By shedding some light on the process we use to choose security features for Android, we hope to provide a better understanding of Android’s overall approach to protecting our users.

Data-driven security decision-making

We use a variety of sources to determine what areas of the platform would benefit the most from different types of security mitigations. The Android Vulnerability Rewards Program (VRP) is one very informative source: all vulnerabilities submitted through this program are analyzed by our security engineers to determine the root cause of each vulnerability and its overall severity (based on our severity assessment guidelines). Other sources are internal and external bug reports, which identify vulnerable components and reveal coding practices that commonly lead to errors. Knowledge of problematic code patterns combined with the prevalence and severity of the vulnerabilities they cause can help inform decisions about which mitigations are likely to be the most beneficial.


Types of Critical and High severity vulnerabilities fixed in Android Security Bulletins in 2019

Relying purely on vulnerability reports is not sufficient as the data are inherently biased: often, security researchers flock to “hot” areas, where other researchers have already found vulnerabilities (e.g. Stagefright). Or they may focus on areas where readily available tools make it easier to find bugs (for instance, if a security research tool is posted to GitHub, other researchers commonly use that tool to explore deeper).

To ensure that mitigation efforts are not biased only toward areas where bugs and vulnerabilities have been reported, internal Red Teams analyze less scrutinized or more complex parts of the platform. In addition, continuous automated fuzzers run at scale on both Android virtual machines and physical devices, which ensures that bugs can be found and fixed early in the development lifecycle. Any vulnerabilities uncovered through this process are also analyzed for root cause and severity, which inform mitigation deployment decisions.

The Android VRP rewards submissions of full exploit-chains that demonstrate a full end-to-end attack. These exploit-chains, which generally utilize multiple vulnerabilities, are very informative in demonstrating techniques that attackers use to chain vulnerabilities together to accomplish their goals. Whenever a researcher submits a full exploit chain, a team of security engineers analyzes and documents the overall approach, each link in the chain, and any innovative attack strategies used. This analysis informs which exploit mitigation strategies could be employed to prevent pivoting directly from one vulnerability to another (some examples include Address Space Layout Randomization and Control-Flow Integrity) and whether the process’s attack surface could be reduced if it has unnecessary access to resources.

There are often multiple different ways to use a collection of vulnerabilities to create an exploit chain. Therefore a defense-in-depth approach is beneficial, with the goal of reducing the usefulness of some vulnerabilities and lengthening exploit chains so that successful exploitation requires more vulnerabilities. This increases the cost for an attacker to develop a full exploit chain.

Keeping up with developments in the wider security community helps us understand the current threat landscape, what techniques are currently used for exploitation, and what future trends look like. This involves but is not limited to:

  • Close collaboration with the external security research community
  • Reading journals and attending conferences
  • Monitoring techniques used by malware
  • Following security research trends in security communities
  • Participating in external efforts and projects such as KSPP, syzbot, LLVM, Rust, and more

All of these data sources provide feedback for the overall security hardening strategy, where new mitigations should be deployed, and what existing security mitigations should be improved.

Reasoning About Security Hardening

Hardening and Mitigations

Analyzing the data reveals areas where broader mitigations can eliminate entire classes of vulnerabilities. For instance, if parts of the platform show a large number of vulnerabilities due to integer overflow bugs, they are good candidates to enable Undefined Behavior Sanitizer (UBSan) mitigations such as the Integer Overflow Sanitizer. When common patterns in memory access vulnerabilities appear, they inform efforts to build hardened memory allocators (enabled by default in Android 11) and implement mitigations (such as CFI) against exploitation techniques that provide better resilience against memory overflows or Use-After-Free vulnerabilities.
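
For readers less familiar with the bug class, here is a simplified example of the kind of pattern the Integer Overflow Sanitizer targets. Compiled with Clang's -fsanitize=integer, the 32-bit multiplication below aborts the process instead of silently wrapping and producing an undersized allocation (the function is purely illustrative, not taken from Android code):

    #include <cstdint>
    #include <cstdlib>

    // If an attacker controls `count`, the 32-bit multiply can wrap around,
    // making `size` far smaller than the loop below assumes, so the writes
    // run past the end of `dst`. With -fsanitize=integer, the wrap aborts.
    void CopyRecords(const uint32_t* src, uint32_t count) {
        uint32_t size = count * static_cast<uint32_t>(sizeof(uint32_t));
        uint32_t* dst = static_cast<uint32_t*>(std::malloc(size));
        if (dst == nullptr) return;
        for (uint32_t i = 0; i < count; ++i) {
            dst[i] = src[i];  // out-of-bounds write when `size` wrapped
        }
        std::free(dst);
    }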

Before discussing how the data can be used, it is important to understand how we classify our overall efforts in hardening the platform. There are a few broadly defined buckets that hardening techniques and mitigations fit into (though sometimes a particular mitigation may not fit cleanly into any single one):

  • Exploit mitigations
    • Deterministic runtime prevention of vulnerabilities detects undefined or unexpected behavior and aborts execution when the behavior is detected. This turns potential memory corruption vulnerabilities into less harmful crashes. Often these mitigations can be enabled selectively and still be effective because they impact individual bugs. Examples include Integer Sanitizer and Bounds Sanitizer.
    • Exploitation technique mitigations target the techniques used to pivot from one vulnerability to another or to gain code execution. These mitigations theoretically may render some vulnerabilities useless, but more often serve to constrain the actions available to attackers seeking to exploit vulnerabilities. This increases the difficulty of exploit development in terms of time and resources. These mitigations may need to be enabled across an entire process’s memory space to be effective. Examples include Address Space Layout Randomization, Control Flow Integrity (CFI), Stack Canaries and Memory Tagging.
    • Compiler transformations that change undefined behavior to defined behavior at compile-time. This prevents attackers from taking advantage of undefined behavior such as uninitialized memory. An example of this is stack initialization.
  • Architectural decomposition
    • Splits larger, more privileged components into smaller pieces, each of which has fewer privileges than the original. After this decomposition, a vulnerability in one of the smaller components will have reduced severity by providing less access to the system, lengthening exploit chains, and making it harder for an attacker to gain access to sensitive data or additional privilege escalation paths.
  • Sandboxing/isolation
    • Related to architectural decomposition, enforces a minimal set of permissions/capabilities that a process needs to correctly function, often through mandatory and/or discretionary access control. Like architectural decomposition, this makes vulnerabilities in these processes less valuable as there are fewer things attackers can do in that execution context, by applying the principle of least privilege. Some examples are Android Permissions, Unix Permissions, Linux Capabilities, SELinux, and Seccomp (see the seccomp sketch after this list).
  • Migrating to memory-safe languages
    • C and C++ do not provide memory safety the way that languages like Java, Kotlin, and Rust do. Given that the majority of security vulnerabilities reported to Android are memory safety issues, a two-pronged approach is applied: improving the safety of C/C++ while also encouraging the use of memory safe languages.
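
As a deliberately tiny illustration of the sandboxing bucket above, here is a minimal seccomp allowlist using libseccomp (link with -lseccomp). Android builds its seccomp policies differently and at much larger scale; this sketch only shows the mechanism of restricting which system calls a process may make:

    #include <seccomp.h>

    #include <cerrno>

    // Minimal seccomp-bpf allowlist: the process may read, write, and exit;
    // every other system call fails with EPERM. Real policies are far larger
    // and are typically generated rather than written by hand.
    int InstallFilter() {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
        if (ctx == nullptr) return -1;
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
        int rc = seccomp_load(ctx);  // the filter applies from this point on
        seccomp_release(ctx);
        return rc;
    }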

Enabling these mitigations

With the broad arsenal of mitigation techniques available, which of these to employ and where to apply them depends on the type of problem being solved. For instance, a monolithic process that handles a lot of untrusted data and does complex parsing would be a good candidate for all of these. The media frameworks provide an excellent historical example where an architectural decomposition enabled incrementally turning on more exploit mitigations and deprivileging.

Architectural decomposition and isolation of the Media Frameworks over time

Remotely reachable attack surfaces such as NFC, Bluetooth, WiFi, and media components have historically housed the most severe vulnerabilities, and as such these components are also prioritized for hardening. These components often contain some of the most common vulnerability root causes that are reported in the VRP, and we have recently enabled sanitizers in all of them.

Libraries and processes that enforce or sit at security boundaries, such as libbinder, and widely-used core libraries such as libui, libcore, and libcutils are good targets for exploit mitigations since these are not process-specific. However, due to performance and stability sensitivities around these core libraries, mitigations need to be supported by strong evidence of their security impact.

Finally, the kernel’s high level of privilege makes it an important target for hardening as well. Because different codebases have different characteristics and functionality, susceptibility to and prevalence of certain kinds of vulnerabilities will differ. Stability and performance of mitigations here are exceptionally important to avoid negatively impacting the user experience, and some mitigations that make sense to deploy in user space may not be applicable or effective. Therefore our considerations for which hardening strategies to employ in the kernel are based on a separate analysis of the available kernel-specific data.

This data-driven approach has led to tangible and measurable results. Starting in 2015 with Stagefright, a large number of Critical severity vulnerabilities were reported in Android’s media framework. These were especially sensitive because many of these vulnerabilities were remotely reachable. This led to a large architectural decomposition effort in Android Nougat, followed by additional efforts to improve our ability to patch media vulnerabilities quickly. Thanks to these changes, in 2020 we had no internet-reachable Critical severity vulnerabilities reported to us in the media frameworks.

Deployment Considerations

Some of these mitigations provide more value than others, so it is important to focus engineering resources where they are most effective. This involves weighing the performance cost of each mitigation as well as how much work is required to deploy it and support it without negatively affecting device stability or user experience.

Performance

Understanding the performance impact of a mitigation is a critical step toward enabling it. Adding too much overhead to some components or the entire system can negatively impact user experience by reducing battery life and making the device less responsive. This is especially true for entry-level devices, which should benefit from hardening as well. We thus want to prioritize engineering efforts on impactful mitigations with acceptable overheads.

When investigating performance, important factors include not just CPU time but also memory increase, code size, battery life, and UI jank. These factors are especially important to consider for more constrained entry-level devices, to ensure that the mitigations perform well across the entire Android ecosystem.

The system-wide performance impact of a mitigation also depends on where that mitigation is enabled, as certain components are more performance-sensitive than others. For example, binder is one of the most heavily used paths for interprocess communication, so even a small additional overhead could significantly impact the user experience on a device. On the other hand, a video player only needs to render frames at the source framerate; if frames can be decoded much faster than they are displayed, there is headroom, and additional overhead may be more acceptable.

Benchmarks, if available, can be extremely useful to evaluate the performance impact of a mitigation. If there are no benchmarks for a certain component, new ones should be created, for instance by calling impacted codec code to decode a media file. If this testing reveals unacceptable overhead, there are often a few options to address it:

  • Selectively disable the mitigation in performance-sensitive functions identified during benchmarking. A small number of functions are often responsible for a large share of the runtime overhead, so disabling the mitigation in just those functions can maximize the security benefit while minimizing the performance cost. Here is an example of this in one of the media codecs, and a sketch of the mechanism appears after this list. These exempted functions must be manually reviewed for bugs to reduce the risk of disabling the mitigation there.
  • Optimize the implementation of the mitigation to improve its performance. This often involves modifying the compiler. For example, our team has upstreamed optimizations to the Integer Overflow Sanitizer and the Bounds Sanitizer.
  • Certain mitigations, such as the Scudo allocator’s built-in robustness against heap-based vulnerabilities, have tunable parameters that can be tweaked to improve performance.
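
As a hedged sketch of the first option (function names here are illustrative, not taken from an actual codec), Clang's no_sanitize attribute can exempt a hot inner loop from the integer sanitizers while the rest of the file, built with -fsanitize=integer, stays instrumented:

    #include <cstdint>
    #include <cstddef>

    // Hypothetical hot path. The attribute tells Clang not to instrument this one
    // function, so wraparound here is unchecked at runtime; the function must be
    // manually reviewed for overflow bugs instead.
    __attribute__((no_sanitize("signed-integer-overflow", "unsigned-integer-overflow")))
    uint32_t checksum_hot_loop(const uint8_t* data, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; ++i) {
            sum = sum * 31u + data[i];  // intentional unsigned wraparound
        }
        return sum;
    }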

Most of these improvements involve changes or contributions to the LLVM project. Because we work with upstream LLVM, these improvements have impact and benefit beyond Android; in turn, Android benefits from upstream improvements contributed by others in the LLVM community.

Deployment and Support

Enabling a mitigation involves more than weighing its security benefit against its performance cost; the short-term deployment effort and the long-term support burden also matter.

Deployment Stability Considerations

One important issue is whether a mitigation can produce false positives. For example, if the Bounds Sanitizer reports an error, there is definitely an out-of-bounds access (although it might not be exploitable). But the Integer Overflow Sanitizer can produce false positives, as many integer overflows are harmless or even perfectly expected and correct.
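
As a contrived illustration (not code from Android), both functions below overflow and would be flagged when built with -fsanitize=integer, yet only the second report points at a real bug:

    #include <cstdint>
    #include <cstddef>
    #include <cstdlib>

    // Intentional, benign wraparound: FNV-1a hashing relies on unsigned overflow
    // by design, so a sanitizer report here is a false positive in practice.
    uint32_t fnv1a(const uint8_t* data, size_t len) {
        uint32_t hash = 2166136261u;
        for (size_t i = 0; i < len; ++i) {
            hash = (hash ^ data[i]) * 16777619u;  // wraps on purpose
        }
        return hash;
    }

    // Harmful overflow: if count * size wraps, the allocation is far too small and
    // later writes become a heap overflow, so the same report flags a genuine bug.
    void* alloc_table(size_t count, size_t size) {
        size_t bytes = count * size;  // may wrap for attacker-controlled values
        return malloc(bytes);
    }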

It is thus important to consider the impact of a mitigation on the stability of the system. Whether a crash is due to a false positive or a legitimate security issue, it still disrupts the user experience and so is undesirable. This is another reason to carefully consider which components should have which mitigations, as crashes in some components are worse than others. If a mitigation causes a crash in a media codec, the user’s video playback will be stopped, but if netd crashes during an update, the phone could be bricked. For a mitigation like Bounds Sanitizer, where false positives are not an issue, we still need to perform extensive testing to ensure the device remains stable. Off-by-one errors, for example, may not crash during normal operation, but Bounds Sanitizer would abort execution and result in instability.
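
A contrived sketch of such an off-by-one (not taken from Android code): the loop below reads one element past the end of the array, which typically goes unnoticed in normal operation, but when built with -fsanitize=bounds the access is detected and execution aborts:

    #include <cstdio>

    int sum_readings() {
        int readings[4] = {1, 2, 3, 4};
        int total = 0;
        // Off-by-one: i <= 4 also reads readings[4], one element past the end.
        // Without instrumentation this usually just reads adjacent stack memory
        // and appears to work; Bounds Sanitizer aborts instead.
        for (int i = 0; i <= 4; ++i) {
            total += readings[i];
        }
        return total;
    }

    int main() {
        printf("%d\n", sum_readings());
        return 0;
    }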

Another consideration is whether it is possible to enumerate everything a mitigation might break. For example, it is not easy to contain the risk of the Integer Overflow Sanitizer without extensive testing, as it is difficult to determine which overflows are intentional/benign (and thus should be allowed) and which could lead to vulnerabilities.

Support

We must consider not just issues caused by deploying mitigations but also how to support them long-term. This includes the developer time to integrate a mitigation into existing systems, enable and debug it, deploy it onto devices, and support it after launch. SELinux is a good example of this; it takes a significant amount of effort to write the policy for a new device, and even once enforcing mode is enabled, the policy must be supported for years as code changes and functionality is added or removed.

We try to make mitigations less disruptive and to spread awareness of how they affect developers. We do this by making documentation available on source.android.com and by improving existing algorithms to reduce false positives. Making it easier to debug mitigations when something goes wrong also reduces the maintenance burden that can accompany them. For example, when developers found it difficult to identify UBSan errors, we enabled support for the UBSan Minimal Runtime by default in the Android build system; the minimal runtime itself was first upstreamed by others at Google specifically for this purpose. When the Integer Overflow Sanitizer crashes a program, the minimal runtime adds the following hint to the generic SIGABRT crash message:

    Abort message: 'ubsan: sub-overflow'

Developers who see this message then know to enable diagnostics mode, which prints out details about the crash:

    frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp:2188:32: runtime error: unsigned integer overflow: 0 - 1 cannot be represented in type 'size_t' (aka 'unsigned long')
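
As a hedged illustration (not the actual SurfaceFlinger code), the pattern behind a report like this is usually an unsigned value decremented past zero. Built with -fsanitize=unsigned-integer-overflow, the following produces a similar message when called with an empty vector:

    #include <cstddef>
    #include <vector>

    // If layers is empty, size() is 0 and subtracting 1 wraps to SIZE_MAX, which
    // the sanitizer reports as "unsigned integer overflow: 0 - 1 cannot be
    // represented in type 'size_t'".
    size_t last_layer_index(const std::vector<int>& layers) {
        return layers.size() - 1;
    }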

Similarly, upstream SELinux provides a tool called audit2allow that can be used to suggest rules to allow blocked behaviors:

    adb logcat -d | audit2allow -p policy

    #============= rmt ==============
    allow rmt kmem_device:chr_file { read write };

A debugging tool does not need to be perfect to be helpful; audit2allow does not always suggest the correct options, but for developers without detailed knowledge of SELinux it provides a strong starting point.

Conclusion

With every Android release, our team works hard to balance security improvements that benefit the entire ecosystem with performance and stability, drawing heavily from the data that are available to us. We hope that this sheds some light on the particular challenges involved and the overall process that leads to mitigations introduced in each Android release.

Thank you to Jeff Vander Stoep for contributions to this blog post.

Law enforcement disrupts Emotet – Wormable Android malware spreading via WhatsApp – Three iOS zero-day bugs squashed

The post Week in security with Tony Anscombe appeared first on WeLiveSecurity

The law enforcement action is one of the most significant operations against cybercriminal enterprises ever

The post Emotet botnet disrupted in global operation appeared first on WeLiveSecurity
