
State Permissive Behaviours and Commercial Offensive-Cyber Proliferation

Gareth Mott, et al. | 2024.10.18

This paper seeks to identify how state “permissive” behaviours can contribute to the proliferation of offensive-cyber tools and services.

Commercial cyber tools and services have many legitimate applications, from corporate penetration testing (an authorised simulated cyber attack on an IT system) to law enforcement and national security operations. But they are also subject to misuse and abuse, when they are used in ways that are contrary to national or international law, violate the human rights of their targets, or pose risks to international security. Some states are currently grappling with this policy challenge. Meanwhile, collective international initiatives for action are underway.

Examples include the US’s 2023 Joint Statement on Efforts to Counter the Proliferation and Misuse of Commercial Spyware and the UK- and France-led Pall Mall Process, launched in 2024. Ultimately, one aim of these initiatives is to enable states to harmonise their policy interventions where possible.

To inform principles and policies for intervention at national and international levels, it is necessary to understand the dynamics that encourage or facilitate offensive-cyber proliferation. This paper identifies a range of “non-state proliferating factors” (NPFs) and “state permissive behaviours” (SPBs), and its findings draw on desk-based research on the international commercial offensive-cyber market. These findings were supplemented by a data validation and consultative workshop with industry stakeholders held in person at Chatham House in March 2024. This half-day validation workshop drew on the expertise and insights of 44 participants predominantly based in the UK, the US and Western Europe. To facilitate candid discussion, remarks made at the workshop are not attributable, and the identities of participants are not referenced here.

In this paper, NPFs and SPBs are categorised into five areas:

  1. Regulation of corporate structure and governance.

  2. Legal frameworks for product development, sale and transfer.

  3. Diplomatic support and engagement.

  4. Development of cyber-security ecosystem and workforce.

  5. Integration with defence and security industrial base.

Using these categories, this research analyses the roles of both state and non-state actors. It identifies critical inter-relationships between different SPBs and NPFs that serve to facilitate or enable potentially irresponsible offensive-cyber proliferation.

Introduction

Commercial cyber tools enable a variety of capabilities, including gaining access to technical systems, moving through technical ecosystems, providing visibility of user activity, and exfiltrating data. The goals of users of these tools vary greatly. In the mass market, cyber-security professionals use them to determine where an organisation or individual may be susceptible to attack so that necessary steps can be taken to mitigate this risk. In the more limited law enforcement market, governments may use these kinds of tools to monitor criminal activity and capture evidence for prosecution, or to gain access to technical assets to investigate or prevent crime. There is also a variety of commercial services that offer offensive capabilities along similar lines, for example penetration testers, who intentionally emulate the behaviour of attackers in order to report to clients the opportunities for, and impact of, an intrusion into their systems.

The demand for commercially accessible offensive-cyber tools and services has expanded markedly in recent years, with at least 80 states having purchased offensive tools. Unfortunately, these commercial cyber products and services are also subject to misuse and abuse. There are broad reports of these tools and services being used in ways that are contrary to national or international law, violating the human rights of their targets, and posing risks to international security. The proliferation of these products and services presents an expanding set of risks to states and, in some cases, challenges commitments to protecting openness, security and stability in cyberspace. Across this market – from selling products to conducting full operations – companies and paid attackers are routinely hired on behalf of governments or other customers.

“Offensive-cyber proliferation” refers to the increasing access that a wider range of actors has to increasingly advanced cyber capabilities. Proliferation may occur purposely and legally as a market process: for example, a law enforcement agency may purchase a licence for a phone-cracking tool, subject to local laws and import/export controls. Other proliferation may be legal but unethical or abusive: for example a law enforcement agency using licensed offensive-cyber tools in a way that either directly breaches human rights or facilitates later human rights violations. Proliferation may also occur unintentionally and/or illegally, including instances of software piracy.

Like-minded states have engaged in international discourse on offensive-cyber proliferation in a range of forums. In March 2023, a group of 11 states made a joint statement on the proliferation and misuse of spyware, with a further six signing in 2024. In February 2024, the UK- and France-led Pall Mall Process kicked off with a declaration on commercial cyber-intrusion capabilities, signed by a larger collection of states and regional bodies. As part of an ongoing and evolutionary process, further discussions on offensive-cyber proliferation are anticipated at future events, including in Paris in 2025.

This paper seeks to inform these ongoing deliberations by identifying how state “permissive” behaviours can contribute to the proliferation of offensive-cyber tools and services. “State permissive behaviours” refers to state action (or inaction) that directly or indirectly shapes offensive-cyber market conditions. For example, in this context, “behaviour” may include active diplomacy on behalf of firms, but it may also include inaction, such as insufficient guidance or regulation. The identification of these permissive behaviours is intended to facilitate constructive analysis and discussion about the scope for targeted interventions and recalibration of the market.

This is one of two papers on this topic. The other paper, authored by the researchers and published by Chatham House in October 2024, draws on the findings in this paper and identifies a range of “principles” that could be used to build a code of conduct that governments could use to counter irresponsible offensive-cyber proliferation.

Methodology

The findings of this paper draw on desk-based research on the international commercial offensive-cyber market. This research included non-academic sources (for example, news reports) and interdisciplinary academic literature. Given the dynamic nature of the commercial offensive-cyber market, relatively contemporary literature (sources published within the past five years) has been favoured. However, historical sources have been included where useful: for example, where contemporary equivalents are unavailable or need contextualisation. The desk-based research phase ran from November 2023 to early March 2024 and mainly covered literature from 2019 to March 2024. The findings for this research were validated by a half-day data-validation and consultative workshop with commercial offensive-cyber industry stakeholders held in person at Chatham House in March 2024. The workshop drew on the expertise and insights of 44 participants, predominantly based in the UK, the US and Western Europe. Attendees represented a broad spectrum of the commercial offensive-cyber industry, including developers, brokers, contractors and government entities. To facilitate candid discussion, remarks made at the workshop are not attributable, and the identities of participants are not referenced here.

Limitations

Some limitations of this research should be noted. First, the diffuse and dynamic nature of the commercial offensive-cyber market means that the endeavour to succinctly map and analyse a wide range of indicative behaviours may lack depth and nuance. A single paper could focus on any one of the behaviours detailed here. Second, although the research team significantly revised the paper and its framing(s) following the validation phase, the dynamic nature of the market – as well as ongoing national and international efforts to calibrate market behaviours – means that some findings may become outdated. Third, the paper focuses on analysing the state of play in market behaviours that influence proliferation. The paper does not identify possible interventions. “Principles” of possible interventions are proposed at length in the companion paper published by Chatham House. However, further interdisciplinary research from academics, researchers, policy stakeholders and industry could assess and substantiate this paper’s findings, and identify opportunities for calibration or intervention in the market.

The paper has two chapters. Chapter I offers an overview of the offensive-cyber industry. Chapter II draws on examples of state permissive behaviours that interact with proliferating market practices. The paper concludes with reflections on the challenges and opportunities that these permissive behaviours present for policy interventions.

I. Overview of the Offensive-Cyber Industry

The commercial offensive-cyber industry comprises an overlapping set of supply chains that can be divided into different operational elements. For example, research from 2021 distinguished between “vulnerability research and exploit development”, “malware payload generation”, “technical command and control”, “operational management” and “training and support”. These capabilities are common across both legitimate (security testing and law enforcement) and illegitimate or malicious activity, and it is not possible to draw a distinction between these activities at a purely technical level. In addition, these elements are linked, rather than existing in a vacuum, with resources and knowledge shared or exchanged within and between market actors. A single “zero-day” chain may draw components from multiple sources and may itself operate as a component of a product comprising a broader set of capabilities. The supply chain is highly complex and interdependent. Figure 1 presents a simplified representation of the commercial hacking market. Table 1 complements this figure, highlighting contextual characteristics of different tools and services. It is also a simplified representation of a complex market. For example, although the target market for surveillance capabilities is likely to be government entities, there are secondary markets, such as the mass and criminal markets.

image01 ▲ Figure 1: Complexity and Interdependence of Commercial Hacking Markets

image02 ▲ Table 1: Contextual Characteristics of Goods and Services of the Commercial Hacking Market

Additionally, it should be recognised that there is significant scope for supply chain and/or operational overlap between market actors (government agencies, commercial firms and criminal entities). Ultimately, this interdependency, combined with the inherently international nature of the market and the intangible nature of software products, arguably provides fertile ground for proliferation.

The purpose of this paper is to identify state permissive behaviours (SPBs) and non-state proliferation factors (NPFs). To do this, it is necessary to outline a clear and accessible understanding of which market phenomena are in scope. This is achieved by: defining cyber intrusion capabilities; narrowing the focus to “commercial” practices; and drawing a dichotomy between “authorised” and “unauthorised” cyber intrusion.

This research project was an evolutionary process. The description of the markets that appears in this paper is the final version that was decided on by the research team following both the data-validation workshop in March 2024 and follow-up consultative discussions with industry stakeholders. The research team recognises that providing an overview of such a highly diverse, dynamic and inter-/intra-layered industry in the broad-brush strokes necessary for a research paper is, in part, a process of simplification that may not capture fine-grained nuance. Nonetheless, doing so offers a common framing that is intended to be accessible to a wide audience and serves as an important foundation for subsequent analysis. This framing is also applied in the companion research paper, published by Chatham House, which outlines principles for international intervention.

The three stages of framing are:

  1. Scoping cyber-intrusion capabilities to the varied components that contribute to the ability of an actor to gain remote access to a target host or network. These components include: vulnerability discovery and exploitation; development of a functional exploit for a vulnerability; the technical infrastructure for command and control of the resulting malware; and broader training and support.

  2. Focusing on commercial activity where at least one of the components is obtained through a financial transaction. This could include, for example, the sale of a product or the provision of licensed services. Commercial cyber-intrusion capabilities may be traded either on the open market or to exclusive clientele. A range of actors are involved in the commercial offensive-cyber ecosystem, including developers, vendors, service providers, brokers, resellers and system integrators. All of these actors are in scope in this paper. Their activities take place in an interlinked nexus, which may be mapped from the point of “initial research”, where raw novel exploits are identified, to implementation, and finally, in some cases, to a product that a client can acquire and use. This nexus is both nuanced and complex.

  3. Distinguishing between authorised and unauthorised cyber intrusion. In cyber security, tools developed and sold primarily for defensive testing purposes can often be abused for offensive purposes. Likewise, activities undertaken for security research or good-faith hacking services are often the same as those undertaken for more malicious purposes. As such, a distinction drawn solely on how tools or services operate is of little use, and will not help in understanding SPBs and NPFs. Rather, use cases must be examined to better understand behaviours in the market, which leads to a distinction between authorised and unauthorised cyber intrusion.

Authorised intrusion takes place with the permission of the owner/lessee, operator or manufacturer of a device, system or network. Examples include corporate penetration testing, participation in a bug bounty program, and the use of employee workstation-surveillance tools. If the use of a cyber-intrusion capability is not approved by the owner/lessee, operator or manufacturer, it is categorised as unauthorised. There are exceptions, particularly with regard to national security and law enforcement operations. For example, in the UK, a minister and a judge may authorise offensive-cyber operations.
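To make the distinction concrete, the sketch below encodes the authorised/unauthorised categorisation described above as a simple classification routine. It is a minimal, hypothetical illustration: the `Engagement` structure, its field names and the lawful-exception handling are assumptions made for the example, not a description of any real tool or legal test.

```python
# Illustrative only: a minimal encoding of the authorised/unauthorised distinction.
# The Engagement structure and its fields are hypothetical, not drawn from the paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Engagement:
    target: str                   # device, system or network in scope
    authorised_by: Optional[str]  # owner/lessee, operator or manufacturer, if any
    legal_warrant: Optional[str]  # e.g. a ministerial/judicial authorisation reference


def classify(engagement: Engagement) -> str:
    """Classify an intrusion engagement as authorised or unauthorised."""
    if engagement.authorised_by is not None:
        # Permission granted by the owner/lessee, operator or manufacturer.
        return "authorised"
    if engagement.legal_warrant is not None:
        # Lawful exception, e.g. national security or law enforcement operations.
        return "authorised (lawful exception)"
    return "unauthorised"


print(classify(Engagement("corporate web app", authorised_by="system owner", legal_warrant=None)))
print(classify(Engagement("target handset", authorised_by=None, legal_warrant=None)))
```

In practice, of course, the lawful-exception branch would rest on the relevant national legal framework (for example, ministerial and judicial authorisation in the UK), rather than on a single recorded field.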

Although this paper categorises the landscape into authorised and unauthorised markets, in truth there is a vast array of products and services included in the ecosystem, and a range of attitudes, approaches and priorities among vendors. Tools and services offered specifically for unauthorised intrusion are most often the ones that are discussed in research disclosures, news stories or lawsuits relating to malicious campaigns against journalists, activists or governments. Tools and services in this market are typically designed to avoid detection. For this reason, and due to the go-to-market dynamics discussed in Chapter II, vendors in this market tend to operate in relative secrecy. As a result of all these factors, many public sources on proliferation and abuses of commercial cyber tools focus on just a few vendors and tools. This is reflected in the references quoted in this paper, and while the researchers believe the points highlighted are also true beyond the limited examples cited, the hidden nature of this market makes it challenging to present broader public examples.

Understanding the diversity of the two markets is critical to examining the dynamics at play, and potentially developing mitigation strategies or policy proposals. While the examples and public analysis in this paper often focus on the more extreme and secretive vendors and offerings, it is important to recognise that they are only one part of the picture. These markets also include well-established defence contractors, and cyber-security vendors with broad market portfolios. These organisations are themselves highly varied, generally operate in legitimate markets and are answerable to investors, customers and employees. Such organisations are more likely to adopt new, or adapt existing, behaviours in response to calls for more responsible approaches to the sale and development of commercial cyber tools.

II. The Role of Permissive Behaviours in Commercial Offensive-Cyber Proliferation

The proliferation of offensive-cyber tools is fed by supply and demand. Given the relative novelty of offensive-cyber activities and their relationship with states, the proliferation is also dependent on a range of “permissive behaviours”. These are behaviours that encourage or stimulate other actors to enter or expand the market. The research for this paper scopes these permissive behaviours to states, focusing on the interaction of a range of SPBs in relation to NPFs. The paper also addresses behaviours of other stakeholders, for example vendors, but only inasmuch as they reflect the permissive behaviours of states.

Some caveats and clarifications are needed. First, both supply and demand factors are in scope. Second, the term “permissive behaviour” is not used as a synonym for “negative behaviour”, nor is there an implication that behaviours will always lead to harmful or unwarranted proliferation. Third, permissive behaviours can refer to both action and inaction. Fourth, it should be noted that while the behaviours described have been observed and documented, this does not imply that they are widespread across all stakeholders in each category. In some cases, behaviours will likely be associated with only specific types of stakeholder; nevertheless, their impact is significant enough to be worth noting. Last, it should be acknowledged that a degree of permissive behaviour is required for the offensive-cyber market to be viable. As states seek to identify ways to scrutinise and/or regulate the proliferation of offensive-cyber tools, it may be assumed that “fully permissive” or “fully restrictive” regimes would not be desirable.

Context is important. It is necessary to understand the crossover and functional overlap between behaviours and market actors. Additionally, while behaviours may be individually influential, it is likely that the cumulative effect of multiple permissive behaviours can markedly increase scope for unchecked proliferation and irresponsible behaviour. NPFs and permissive behaviours are categorised into five distinct areas:

  1. Regulation of corporate structure and governance.

  2. Legal frameworks for product development, sale and transfer.

  3. Diplomatic support and engagement.

  4. Development of cyber-security ecosystem and workforce.

  5. Integration with defence and security industrial base.

These areas are not comprehensive, but in this paper they capture areas of potential proliferating interaction between market actors. For each area, the research examines whether states play roles as customers, investors, detectors or regulators, for both authorised and unauthorised markets. The research highlights examples of NPFs and SPBs. Again, these are not comprehensive, as they are used as examples of proliferating conditions or activity (see Table 2). The objective is to demonstrate the link between an NPF and an SPB, rather than to list all possible NPFs and SPBs. Additionally, the paper identifies pre-existing global “indicators” (for example, indexes) that may be applied in future research to provide substantive granular comparisons between state and market practices across a range of states.

image03 ▲ Table 2: Types of NPFs and SPBs

Area 1: Regulation of Corporate Structure and Governance

NPF1: Offensive-cyber firms may be operating with limited internal checks or balances on sales.

SPB1: Inadequate regulation or enforcement of corporate ethics and corporate social responsibility procedures.

Notwithstanding limited and relatively recent export controls, the desire of some buyers for continued opacity has enabled an environment in which there is a persistent lack of supply chain transparency among offensive-cyber developers, vendors and markets. In a context of generally poor transparency across the market, firms are disincentivised from increasing oversight, as this could reduce sales and/or disrupt supply chains.

Across an indicative sample of eight offensive-cyber firms with a public presence, there is wide variation in transparency regarding oversight of sales, and of the use of services or tools that may be used for unauthorised intrusion. Some firms have no ethics statements, while others have ethics statements that concern issues such as the environment and/or modern slavery in their supply chain, but do not have public-facing policies on their products or services. Some firms claim to have oversight of clients in some instances, while claiming to have no oversight in other cases.

More transparent firms may publish example end-user licence agreements on their webpages, which can include remedies such as cessation of service or refusal to renew a contract. Other firms produce transparency and human rights reporting or statements, including indicative data on refused sales, the composition of their external ethics committee, or mention of internal review processes on the use of their products. However, such reports contrast with allegations of use of offensive-cyber products in human rights abuses. Experts have argued that such documentation does not constitute “a transparency report in any meaningful way”.

It is not possible to conclude that there is an inevitable ethical race to the bottom. Firms – particularly those operating relatively openly – will operate degrees of self-regulating oversight either because they believe it is the right thing to do, or as a means of avoiding negative consequences. These could include bad press, loss of investor or customer confidence, and damaged staff morale. However, beyond overt export control embargo lists and sanctions regimes, firms do not have clarity on how they should sell, or on whom they legally can, but arguably should not, sell to or purchase from. Grey areas exist beyond the usual suspects – such as Iran, North Korea and Russia – for example where purchase requests come from law enforcement agencies in strategically allied states where there may be recorded human rights breaches. In these instances, exporting states may be reluctant to provide clarity in writing, and there may be a perceived risk of destabilising higher-priority diplomatic engagements. This links to a behaviour discussed later, SPB5.

NPF2: Complex corporate structure crossing multiple jurisdictions.

SPB2: Lack of transparency on corporate ownership and transnational subsidiaries.

Like many industries, the offensive-cyber industry has benefited from multi-jurisdictional spread, for instance drawing on tax and oversight regime divergence to increase potential for corporate secrecy. However, there is also scope for additional complexity in corporate structure for the purpose of obfuscating activities, ownership or clients, or sidestepping legal boundaries. For example, firms may operate a range of different company names in different jurisdictions. Additionally, personnel, senior leadership or investors may be involved in more than one commercial offensive-cyber firm, and may operate in fluid locations. As one regime tightens oversight or restrictions on a particular individual, that individual may move to a less rigorous jurisdiction. Feasibly, this fluidity could engender a transfer of intellectual property between firms – including sensitive and strategically significant intellectual property relating to advanced unauthorised intrusion capabilities – and the firms themselves may be distinctly transitory. Senior leadership in firms that are blacklisted may create new – ostensibly different – firms to continue product or service development and gain new sales. There is, arguably, significant scope for increased alignment of sanctions regimes with respect to commercial offensive cyber, as well as more robust enforcement of existing multilateral regimes, such as EU regulations. Beneficial ownership requirements could also be enforced to counter shell corporation structures.

Two suggested global indicators may be used for monitoring the NPFs and SPBs associated with Area 1:

  1. The level of compliance with the UN’s Guiding Principles on Business and Human Rights.

  2. Scores on the World Bank Governance Index – Regulatory Quality.

These could be used by stakeholders to benchmark and assess the degree to which states and offensive-cyber firms design and implement successful human rights oversight mechanisms.

Area 2: Legal Frameworks for Product Development, Sale and Transfer

NPF3: Offensive-cyber firms may sell to high-risk countries or inappropriate domestic actors.

SPB3: Inadequate export controls, and/or insufficient internal training or guidance.

As noted in NPF/SPB1, there is a lack of meaningful clarity internationally regarding the “responsible” sale of offensive hacking tools and services. Meanwhile, there is a wide-ranging catalogue of allegations regarding the sale of offensive-cyber tools to state entities for uses that may be in breach of international human rights obligations. It is possible that there is nuance regarding cyber proliferation that is legal but irresponsible. For instance, a state may design a legal oversight regime for the export of offensive-cyber tools – such as software enabling unauthorised surveillance – for the purposes of counterterrorism. However, the difficulty of agreeing an international definition of “terrorism” has become a cliché of international politics. In some jurisdictions, “terrorism” or “national-security risk” may be interpreted sufficiently broadly to include non-violent political opposition and civil society groups.

Additionally, given the absence of guardrails and the failure of existing mechanisms such as the Wassenaar Arrangement, there is a risk of intentional or unintentional seepage of capabilities from both private and state actors. For example, there are instances where hacker-for-hire firms, including those offering services to private investigators working on behalf of individual clients, have reportedly gained access to the NSO Group’s Pegasus software, and have claimed that they can spin up their own control centre to covertly snoop on a target’s digital presence.

It is also possible that state entities may gain legitimate access to offensive-cyber tools but leak these tools to other agents for illegitimate or illegal use. It is of course also of note that the use of a zero-day exploit against a target can also expose the exploit – for instance, through forensic investigation – to the target entity and/or others. Use, therefore, can pose a proliferation risk.

Combined, these issues highlight two distinct but linked regulatory or oversight gaps: there are arguably inadequate access control and oversight measures across the market; and there is a lack of cohesion in tools or processes that could be used to restrict irresponsible proliferation.

NPF4: Vulnerability researchers may choose to sell to black or grey markets.

SPB4: Lack of effective vulnerabilities equities process, notification or disclosure processes, and/or insufficient legal protection for researchers.

The activities undertaken by vulnerability researchers are often indistinguishable from those undertaken for malicious purposes. Both sets of actors look for vulnerabilities or configuration issues in third-party computing systems that can provide opportunities for access. Outside bug bounty programs, independent researchers will typically conduct their investigations without the knowledge or authorisation of the system owners or end-users. As a result, many researcher activities may be unauthorised and possibly illegal under a jurisdiction’s anti-hacking laws.

Few states have legal carve-outs or exemptions that protect or support security research activities. In some jurisdictions, anti-hacking laws not only expose researchers to risk of criminal prosecution, but also contain civil causes of action that enable technology manufacturers to bring lawsuits against researchers. This legal environment imperils researchers, and, in some cases, incentivises them to go underground, selling their findings to brokers who will not ask questions, or governments that will not prosecute them, rather than disclosing them to the relevant technology manufacturer or operator.

In addition, the dynamics regarding exploit sale and/or notification may be inadvertently incentivising unmoderated proliferation of offensive-cyber tools. Vulnerability researchers who discover new vulnerabilities and develop new exploits may choose to sell their knowledge to private firms, vulnerability brokers or government entities. Researchers may be motivated to provide their knowledge to particular actors on the basis of a range of diverse factors, including reputation, alignment and pricing.

Buyers and platforms may offer widely varying payment values and structures. Receivers of exploits may not perform any form of vetting of the sources of exploits, beyond determining that the exploit itself is valid. This lack of oversight could be in response to market pressures – the preference of sellers to remain anonymous – or due to the receiver’s own preference or resource constraints. Additionally, as an emerging trend, exploit developers may gain greater income by maintaining their monopoly on their novel exploit and selling their access “as a service”, reducing scope for the exploit to be traded, identified and rectified.

Exploits may be framed simultaneously as high-value private commodities and as public goods. Similarly, national security agencies may be incentivised to hold and deploy novel exploits for offensive operations, rather than share their knowledge with the owner(s) of impacted systems. Some commentators have advocated for an international vulnerabilities equities process (VEP) that holds states accountable to specific standards, or sets boundaries for keeping or disclosing vulnerabilities. Even at a national level, existing VEPs are few, and hindered by contention between the interests of offensive-cyber stakeholders and defensive or privacy cyber stakeholders. Nonetheless, the absence of binding reporting, disclosure or bug-purchase mechanisms leaves the market in a relative free-for-all, enabling opportunities for proliferation to occur unchecked.
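As a purely illustrative aid, the sketch below models the kind of retain-or-disclose record that a vulnerabilities equities process might maintain. The criteria, threshold and review cycle are hypothetical assumptions made for the example and do not reflect any existing national VEP; the vulnerability identifier is a placeholder.

```python
# Illustrative only: a toy record for a vulnerabilities equities process (VEP).
# The criteria, threshold and review cycle are hypothetical, not any state's actual VEP.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class VepDecision:
    vulnerability_id: str     # placeholder identifier
    widely_deployed: bool     # affects widely used consumer/enterprise products?
    actively_exploited: bool  # evidence of in-the-wild exploitation?
    operational_value: int    # 0-10, claimed national security/law enforcement value
    decided_on: date

    def recommendation(self) -> str:
        # Bias towards disclosure when defensive equities dominate.
        if self.actively_exploited or (self.widely_deployed and self.operational_value < 7):
            return "disclose to vendor"
        return "retain, subject to periodic review"

    def next_review(self) -> date:
        # Retained vulnerabilities are re-examined on a fixed cycle (here, roughly 6 months).
        return self.decided_on + timedelta(days=182)


decision = VepDecision("VULN-PLACEHOLDER", widely_deployed=True, actively_exploited=False,
                       operational_value=4, decided_on=date(2024, 3, 1))
print(decision.recommendation(), "| next review:", decision.next_review())
```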

Four suggested global indicators that may be used for monitoring the NPFs and SPBs associated with Area 2 are:

  • Transparency International’s Corruption Perceptions Index.

  • World Justice Project Rule of Law Index.

  • Freedom House Index (Rule of Law).

  • Bertelsmann Transformation Index.

Although these global indicators are an imperfect solution, stakeholders could draw on them to contextualise proliferation across a range of jurisdictions. They are notably less direct and/or cohesive than those proposed for Area 1 and some other Areas. An overarching finding from this research is that existing proxy indexes have severe limitations. Put simply, it is not possible to use existing indexes or other open sources to meaningfully quantify the degree to which permissive behaviours have causal or correlative links to commercial cyber proliferation. This is an important dilemma. Stakeholders should consider how the Pall Mall Process and other initiatives can not only urge action against irresponsible offensive-cyber proliferation, but also form new processes through which activity can be monitored and assessed.

Area 3: Diplomatic Support and Engagement

NPF3: Offensive-cyber firms may sell to high-risk countries or inappropriate domestic actors.

SPB5: State actors may deploy offensive-cyber firms as a means of establishing or strengthening diplomatic relationships.

States with an economically and strategically significant presence of offensive-cyber firms may feel an incentive to promote proliferation in certain circumstances, where this feeds into broader diplomatic goals: for instance, maintaining strategic alignment or improving bilateral trade. It is notable that following the blacklisting by the US of offensive-cyber firms NSO Group and Candiru, both the Israeli government and the firms themselves lobbied Washington to reverse the decision.

Furthermore, given the strategic exclusivity of some offensive-cyber tools, including advanced spyware, firms can possibly be used as bargaining chips in a broader diplomatic context. There have been alleged instances where offensive-cyber firms have halted trading with some states, for example following negative press, but have later resumed trading after intervention from their own government. In this way, cyber proliferation may be used as a tool of soft-power projection.

It is also worth noting that, in some contexts, states may use diplomatic or economic levers to entice existing offensive-cyber firms to establish a presence in their territory. For instance, some states have attracted companies through tax incentives and/or fast-tracked citizenship applications. As demonstrated by the case of the Israeli firm QuaDream and its Cypriot partner InReach, such practices could be a tacit encouragement of SPB2 and could feed or stimulate NPF2.

NPF5: Hackers-for-hire may undertake unauthorised intrusion against third parties, on behalf of public relations firms, law firms or private investigative ecosystems.

SPB6: State entities may cover up, downplay or inadvertently encourage the use of hackers-for-hire.

Clients from the private and public sectors may commission the support of public relations and law firms, which in turn contract a hacking-for-hire entity for an offensive-cyber operation for which the victim has not provided consent.

Hacker-for-hire organisations may use an array of techniques, ranging from basic social engineering combined with open source hacking tools, to hacking with illicit copies of Pegasus. Targets of such unauthorised activity have reportedly included judges, investors, NGO figures and politicians. It is important to emphasise, however, that victims do not need to be public or high-profile figures. There are indications of hackers-for-hire conducting commissioned non-consensual hacking against romantic partners, landlords and competitor firms. Reporting indicates that this is a global issue, with mixed success for international prosecutions.

Although the commissioning of hacking is illegal in many jurisdictions, there may be instances where a client or private investigator wishes to commission hacking because they perceive a window of opportunity for hacked material to be used in court, for example in a divorce case. This contradiction may encourage pseudo-legal or illegal proliferation of hacking activity. Despite previous UK government interest in regulating this sector, and the role of private investigators in facilitating historical hacking scandals on behalf of tabloids, UK private investigator activity remains largely self-regulated.

Additionally, investigations have identified hacker-for-hire operators who offer intrusive and unobtrusive services to disrupt rival political campaigns; these include a firm promoted on an Israeli Ministry of Defense website. Amid the potential controversy and embarrassment associated with such links, government entities may refuse to comment. Entities in the offensive-cyber industry that are subject to negative reporting may use anti-libel law firms and courts to prohibit the production and dissemination of such reporting. Combined, these behaviours indicate possible ways in which state apparatuses can be used overtly or tacitly as vehicles to obfuscate proliferating practices.

Three suggested global indicators may be used for monitoring the NPFs and SPBs associated with Area 3:

  • National Cyber Security Index (diplomatic engagements).

  • UN open-ended working group positions on sharing of technologies and tools as part of capacity-building programmes.

  • UN Ad Hoc Committee on Cybercrime positions on criminalisation carve-outs for security researchers.

These indicators may be used by stakeholders to analyse the degree to which different states are overtly or tacitly facilitating commercial cyber proliferation through promotion and/or enticement of market actors through diplomatic and/or economic channels.

Area 4: Development of Cyber-Security Ecosystem and Workforce

NPF6: Unrewarding legal avenues for individuals to use hacking skills.

SPB7: Lack of engagement or regulation to promote adoption of bug bounty programs, hackathons, “capture the flag” contests, or other forms of paid vulnerability research.

Vulnerability researchers have a range of ethical avenues for disseminating the vulnerabilities or exploits that can enable – although do not constitute – offensive capabilities. These avenues may include actively or retrospectively permissioned vulnerability disclosure programs, some of which may be fee-based bug bounty programs or commissions from software or service providers. However, there is a lack of government guidance on or regulation of how buyers should operate these types of programs, particularly to create greater security and awareness for technology users. This is an unresolved regulatory gap that feeds ambiguity and creates space for crossover between defensive and offensive research.

Payment agreements may include non-disclosure agreement (NDA) clauses; in essence, the security researcher takes a payment in exchange for their silence after they share their knowledge of an identified vulnerability. Acceptance of the NDA can be a prerequisite for both payment and “safe harbour”: in other words, an agreement by the organisation not to seek prosecution for the hacking. This may be in addition to other vetting procedures, such as ID and banking checks. The rationale for an NDA is understandable; the “buying” organisation is placing a monetary value on exclusive knowledge and will not want the vulnerability to be broadcast to other actors until it is rectified. In some cases, vendors may decide not to disclose the vulnerability at all, for example to avoid reputational impact. It is also possible that the vulnerability may not be fixed at all – “buying” the details could be viewed as a partial solution to the vulnerability itself. This lack of transparency may frustrate vulnerability researchers and provides an opportunity for them to break the NDA – unbeknownst to the technology manufacturer – to further monetise their findings by secretly selling the information to a vulnerability broker.

At present, where formal and legal buyers develop a reputation for “sitting on” disclosures or making incomplete payments to researchers, the researchers may be motivated to sell to grey-market entities or otherwise publicly broadcast the vulnerability. As such, regulation that encourages public disclosure may be useful as a form of expectation management for both parties. Public disclosure also encourages mitigation or patching of vulnerabilities, reduces risk for users of the technology, and negates the value of the vulnerabilities in the grey or black markets.
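As a simple illustration of disclosure-based expectation management, the sketch below computes a public-disclosure deadline from the date a vulnerability is reported. The 90-day standard window and the shorter window for actively exploited bugs are illustrative assumptions loosely modelled on common coordinated-disclosure practice, not requirements drawn from this paper or from any specific regulation.

```python
# Illustrative only: a coordinated-disclosure timeline helper. The window lengths
# are assumptions loosely modelled on common industry practice, not a mandate.
from datetime import date, timedelta


def disclosure_deadline(reported_on: date,
                        actively_exploited: bool = False,
                        standard_window_days: int = 90,
                        exploited_window_days: int = 7) -> date:
    """Return the date on which public disclosure is expected if no fix ships."""
    window = exploited_window_days if actively_exploited else standard_window_days
    return reported_on + timedelta(days=window)


print(disclosure_deadline(date(2024, 3, 1)))                           # 2024-05-30
print(disclosure_deadline(date(2024, 3, 1), actively_exploited=True))  # 2024-03-08
```

Publishing the expected timeline in advance gives both the researcher and the buying organisation a shared, verifiable expectation, which is the expectation-management function described above.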

On the other hand, government intervention in disclosure processes should be carefully calibrated to avoid unintended consequences. There are suggestions, for example, that disclosure laws in some jurisdictions, including recent developments in China, may reduce public disclosure. This may create possible security dilemmas where government entities hold vulnerabilities that remain unpatched.

NPF7: Computer science students may not receive sufficient training in ethics or law.

SPB8: Possible gaps in cyber-security and STEM education policy.

An increase in the number of computer science graduates across major economies has been widely reported as necessary for both economic prosperity and cyber security. However, the increased number of skilled computer science graduates has implications for offensive-cyber proliferation. Notwithstanding the emergence of cyber-security degrees, which may include content on ethics, commentators have emphasised that there may be opportunities to improve ethics training in computer and data science programmes. Additionally, the offensive skills that a student may learn on an “ethical hacking” course for defensive purposes are transferable to offensive markets, including unauthorised activity.

Given a potential pay disparity between defensive and offensive markets, graduates may be incentivised to offer their skills to offensive operations, even in cases where these may be illegal or pseudo-illegal. This may be exacerbated in regions with high unemployment, or where technical roles may be more limited in pay or scope. In these cases, technical employees may work a primary job and supplement their incomes with hacking or exploit research on the side.

Ethical hackers may acquire validation through Certified Ethical Hacker, OffSec Certified Professional or other certifications, and clients may use these to filter providers. However, without guidance and guardrails, the voluntary nature of certification gives hackers space to use the adjective “ethical” even if they offer services or use methods that may be in breach of computer misuse laws. Given the international nature of the market, it is also important to note that “legal” and “ethical” are distinct, and can also vary significantly across jurisdictions. Practices that are legal in one country may be illegal and/or unethical in another.

Four suggested global indicators may be used for monitoring the NPFs and SPBs associated with Area 4:

  1. International Telecommunication Union (ITU) Global Cybersecurity Index.

  2. ISC2 Cybersecurity Workforce Study 2023.

  3. Global Cyber Security Capacity Centre, University of Oxford, Cybersecurity Capacity Maturity Model (Section 3).

  4. HackerOne top participant countries.

These indicators may be used by stakeholders to appraise the status of and possible issues within the cyber-security workforce, and education and training systems in a range of jurisdictional contexts.

Area 5: Integration with Defence and Security Industrial Base

NPF8: Competitive pressures on existing defence companies to develop offensive-cyber tools.

SPB9: Application of unwieldy or inappropriate export and security policies to new technologies.

As noted regarding NPF3/SPB3, export licence regimes can be sufficiently malleable to enable relatively unchecked offensive-cyber proliferation. Additionally, it must be acknowledged that, due to their nature, offensive-cyber tools themselves hold a complicated regulatory position. A single cyber tool may include multiple components for propagation, exploit and payload, each composed of computer code that can be readily disseminated globally and reclassified. Where components are open source code, these may be protected under the First Amendment of the US Constitution. Complicating matters, there are suggestions that states may circumvent export restrictions by encouraging cross-hiring between international and domestic cyber-security vendors.

In this light, commentators have suggested that, given the unique nature of cyber tools and the highly diffuse nature of the offensive-cyber ecosystem, “know your supplier” due diligence may be a pragmatic means by which suppliers and contractors can be regulated with the aim of reducing unchecked proliferation of unauthorised capabilities. Such oversight must account for the multilayered nature of supply chains, as it may be relatively common for arms suppliers to subcontract production of offensive-cyber tools. This increases the opportunity for parts of the supply chain to draw on grey-space exploit markets.

NPF9: Revolving door between private companies and military/security positions.

SPB10: Uncompetitive remuneration and lack of controls on career trajectories, post-deployment travel or intellectual property restrictions.

Observers have noted that there is a revolving door between defence and the private offensive-cyber industry, although it should be recognised that the door operates across the whole of the defence sector.

Experienced personnel can draw on their advanced training, tactical skillsets, security clearances and networks to gain more lucrative employment in the private sector. It has been noted that in some national contexts, former defence personnel constitute a majority of founders of cyber-security startups, and research teams may be formed almost exclusively of former military or intelligence personnel. The porosity between private and defence activity should also be set against the backdrop of the challenge of pay gaps between the public and private sectors, with public entities often unable to match the higher salaries offered by the private sector. It is in this context that some states are seeking to loosen controls on career trajectories, giving space for personnel to move more seamlessly between defence and private sector positions.

The revolving door also has ramifications for cyber proliferation. Private firms, motivated to grow and generate revenues, will be incentivised to offer their services and products to a diverse array of clients across intrusion markets. This creates potential space for advanced techniques or products to filter to lower market tiers (for example private hacker-for-hire firms) and less restrained state actors. Concerted thought should be applied to how the benefits and risks of inter-sector movement can be managed through guardrails, restrictions and observation.

Two suggested global indicators may be used for monitoring the NPFs and SPBs associated with Area 5:

  • Stockholm International Peace Research Institute military spending databases (expenditure, company value and export value).

  • Global Organised Crime Index (law enforcement score).

Stakeholders may use these indicators to benchmark and contextualise the size and scope of national security and law enforcement markets in different jurisdictions. Although an imperfect and partial solution, these indicators can be used to provide insight into commercial offensive cyber against the backdrop of military–commercial cultures.

Conclusions and Implications for Interventions

This paper has identified that offensive-cyber proliferation is facilitated by a range of SPBs and NPFs. While those identified in this research are non-exhaustive, it is notable that there are clear inter-relationships and reinforcement between some of them. National and international endeavours to promote responsible use of offensive-cyber tools and services will need to be multifaceted and nuanced. Commercial offensive-cyber tools and services perform a vital role in authorised activities, for example, by enhancing cyber-security standards through penetration testing. Similarly, commercial offensive-cyber tools and services are intrinsically necessary to empower responsible state activity, particularly with respect to national security and law enforcement operations.

As states collectively seek to mitigate irresponsible cyber proliferation and maximise responsible behaviours, it is important to acknowledge and accommodate the multifaceted activities within the offensive-cyber ecosystem. In instances of concerted irresponsible behaviour, sanctions and blacklisting may be useful tools. However, as identified in this paper, firms, investors and personnel may be motivated to evade such regimes, enabled by the fluid nature of cyber products and services, and incentivised by the possible complicity of some state actors. The building of international consensus is thus important to increase the costs of business for the lowest-common-denominator actors.

More broadly, the building of international consensus is also vital to feed into softer intervention approaches that can draw on the goodwill and societally beneficial intentions of responsible commercial actors. The lowest-common-denominator actors are impactful and significant, but they are not representative of the industry at large. Industry actors, from researchers and brokers through to sellers and consumers, engage in this arena because they want to improve cyber security and shut down illegal activity (among many other positive motivations). In these contexts, such actors may welcome rationally designed and carefully calibrated guidance from the community of state jurisdictions in which they are based, to which they sell, and from whom they source.

Additionally, the research for this paper reinforces the need to view (ir)responsible cyber proliferation through a “whole-of-society” lens. Although the inclusion of publicly accessible hacker-for-hire services and advanced state-oriented offensive capabilities in one research paper necessitates potentially problematic broad-brush strokes, this has fed into an important finding: offensive-cyber proliferation is porous. A cyber capability developed for state actor use may, in time, find its way into private hacking commissioned by everyday citizens. There is a clear and tangible risk that this democratises irresponsible and criminal activity. This reality raises both the stakes and the necessity for action by states that are willing to encourage, enforce and demonstrate responsible behaviours.

Opportunities for impactful policy interventions at national and international levels may be seen as a two-stage process.

  1. Contributions such as this paper help to develop the breadth and depth of understanding of the scale of offensive-cyber proliferation. In the absence of clear toolkits for further understanding, the paper has proposed a range of possible open source indicators across the five areas of NPFs and SPBs. In principle, these could be adapted or combined with other data points for use in future monitoring. Stakeholders could produce heatmaps of proliferation risk across differing jurisdictions and markets (an illustrative sketch follows this list). As previously noted, however, these open sources are imperfect. As like-minded states continue to identify and consolidate positive practices while mitigating irresponsible offensive-cyber proliferation, it will also be necessary to consider how multiple stakeholders can implement trusted measures of assessment and monitoring.

  2. Building on these monitoring efforts could lead to assessment of the feasibility and efficacy of national and international interventions. Some key examples could include:

    2.1. International government coordination and collaboration on norms and regulation. Elements of the offensive-cyber ecosystem have shown that they are geographically and structurally flexible. Existing controls, such as the Wassenaar Arrangement, are insufficient to keep offensive-cyber proliferation in check. Additionally, future internationally-agreed control mechanisms are likely to be achieved on a long-term, rather than short-term, basis. Combined, these factors mean that states should collectively continue to explore immediate unilateral mechanisms. International stakeholders from government, industry and the third sector should monitor and observe existing unilateral controls, such as the US sanctions against NSO Group and Candiru. Looking ahead, broader mechanisms could include export restrictions, limitations on lobbying and oversight of contracting. The recently started Pall Mall Process is an ideal forum through which national governments and regional bodies can push for greater inter-governmental alignment and collaboration on responsible behaviour regarding offensive-cyber proliferation. While it is promising that like-minded stakeholders are convening to discuss issues relating directly to offensive-cyber proliferation, it is notable that many states and actors have not necessarily (yet) been represented. There may be a challenge in the endeavour to widen participation while maintaining meaningful principles. Nonetheless, where consensus is achieved, a limited coalition of influential states may still be able to influence the wider market where universalism is unobtainable.

    2.2. Requirements for greater vendor transparency, governance and accountability. The opacity of many offensive-cyber firms and marketplaces – encompassing obfuscated or confusing corporate structures, legal liabilities and public presences – may be helpful in certain environments, such as core national security-oriented activity and criminal enterprise. This nebulous composition is highly problematic from the standpoint of countering or restricting unchecked offensive-cyber proliferation. Given the diffuse and inter-related nature of the ecosystem, it is unlikely to be possible to counter proliferation meaningfully without responsibly increasing transparency and oversight across commercial intrusion markets.

    2.3. Adoption of robust vulnerabilities equities and counter-proliferation processes. The potential for seepage of advanced offensive-cyber capabilities should be a serious concern, in addition to indications that lower-tier hackers-for-hire may have been able to gain access to illicit copies of Pegasus. These cases should be seen as tangible, real-world examples of what happens when cyber proliferation goes wrong. Policymakers and stakeholders must assume that exploits developed for legitimate national security purposes can, and occasionally will, leak into the wrong hands. This is a rallying call for counter-proliferation efforts.

    2.4. Increased support and protections for vulnerability research. The vulnerability market ecosystem is vital. Currently, a range of implicit incentives may be feeding proliferation to the grey market – where exploits may be acquired by a range of actors for offensive purposes – rather than prompting cyber-security improvements. Serious concerted thought should be given to how the balance can be meaningfully tipped in favour of defensive-first cyber-security research. This could include greater legal recognition of the need for researchers to disclose vulnerabilities, as well as engagement with and support for the development of responsible market behaviours, including adherence to, and transparency around, VEPs.
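As flagged in point 1 above, the sketch below illustrates how normalised open-source indicator scores could be combined into a simple jurisdiction-by-area heatmap. The jurisdictions, scores and risk bands are invented for the illustration; real use would draw on the indexes listed under each Area and on any new monitoring mechanisms that emerge.

```python
# Illustrative only: combining normalised (0-1) indicator scores into a simple
# jurisdiction-by-area text heatmap. Jurisdictions and scores are invented; real
# use would draw on the indexes listed under each Area.
AREAS = ["Corporate governance", "Legal frameworks", "Diplomacy",
         "Ecosystem/workforce", "Defence integration"]

# Hypothetical normalised indicator scores per jurisdiction (higher = more permissive).
SCORES = {
    "Jurisdiction A": [0.2, 0.4, 0.3, 0.5, 0.4],
    "Jurisdiction B": [0.7, 0.6, 0.8, 0.4, 0.9],
    "Jurisdiction C": [0.5, 0.9, 0.6, 0.7, 0.5],
}


def band(score: float) -> str:
    """Map a normalised score to a coarse risk band for the text heatmap."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "med"
    return "low"


header = " " * 16 + "".join(f"{area[:12]:>14}" for area in AREAS)
print(header)
for jurisdiction, scores in SCORES.items():
    row = "".join(f"{band(score):>14}" for score in scores)
    print(f"{jurisdiction:<16}{row}")
```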

These suggested interventions are merely a starting point. As is often the case in discussions relating to cyber security, there are no simple solutions, or solve-all mitigations. While there are unlikely to be easy wins to mitigate unchecked offensive-cyber proliferation, it is vital for national, societal and economic security that serious efforts continue. As the research for this paper shows, some governments are now leading on international initiatives. Their success depends on the involvement of various groups from wider society in the design and implementation of interventions to tackle this complex market.


Gareth Mott is a Research Fellow in the Cyber research team at RUSI. His research interests include governance and cyberspace, the challenges (and promises) of peer-to-peer technologies, developments in the cyber risk landscape, and the evolution of cyber-security strategies at micro and macro levels.

James Shires is the Co-Director of both the European Cyber Conflict Research Incubator (ECCRI CIC) and the European Cyber Conflict Research Initiative (ECCRI).

Jen Ellis works to reduce cyber risk for society. Partnering with security experts, technology providers and operators, civil society and governments, she creates greater understanding of cyber-security realities and promotes collaboration to advance adoption of security strategies and practices. Jen serves on the UK Cabinet Office’s Government Cyber Advisory Board and the Department for Science, Innovation and Technology’s Cyber Resilience Expert Advisory Group.

James Sullivan is the Director of the Cyber research team at RUSI. He founded and has grown a research group at RUSI that explores topics such as the role of national cyber strategies, the cyber threat landscape, cyber security and risk management, commercial cyber proliferation, offensive cyber, cyber statecraft and diplomacy, and ransomware.

Jamie MacColl is a Research Fellow in the Cyber research team at RUSI. His current research interests include ransomware, the UK’s approach to offensive cyber operations, cyber insurance and the role of private companies in global cyber governance. He has led a range of public and private projects for RUSI, with a particular focus on UK cyber policy.
