Cybersecurity Platformization

Uncovering Articulation Work in Bug Bounty Platforms
Jean-Christophe Plantin and Louise Marie Hurel | 2025.05.27
The article investigates the articulation work, i.e., the coordination and adjustment needed for collaboration, that the platform model generates.
Introduction
Critical scholarship has long revealed the fallacy of platform companies’ position as a “neutral intermediary” between two or more parties. The pioneering work of Tarleton Gillespie showed instead how this stance acted as a rhetorical device to entice the various parties to join the platform, while serving the interests of the platform owners. Ample scholarship has since revealed the many interventions of platforms across sectors, e.g., by actively curating online content, creating new forms of dependency within several cultural industries, and acting as de facto employers in many industries.
This scholarship uncovers the agency of digital platforms by focusing on the constraints they impose on people and industries; conversely, it shows how platform users resist, mitigate, or subvert these constraints. However, equally important is the power that platform companies derive from their absence of constraints: Huber and Pierce characterize platforms as a structured but intentionally flexible framework, which they call “an empty shell”. In their study, they show how such a loose structure can be as powerful as rigid rules in shaping the activity of platform users: the health workers they analyze develop various forms of articulation work to compensate for the absence of clear rules from platforms.
We apply the concept of the “empty shell” to platforms and similarly reveal the articulation work they generate. We specifically look at bug bounty platforms, i.e., companies selling bug bounty programs (henceforth, “vendors”). In such programs, independent security researchers (or “hackers”) report security vulnerabilities (or “bugs”) to organizations for a reward (or “bounty”). While bug disclosure has a much longer history, it is increasingly organized by a few large vendors (such as HackerOne or Bugcrowd) that apply the platform model to cybersecurity, connecting independent cybersecurity researchers with clients and monetizing this intermediary position.
Existing research on bug bounty platforms focuses on the vendors’ constraints on the work of hackers: they shape their labor conditions and effectively turn them into “gig workers”. We complement this scholarship by focusing, first, on the other side of the platform, i.e., not the labor supply side (the hackers), but the currently under-researched labor demand side (the client of the program). We focus specifically on the employees of the clients managing the bug bounty program (henceforth, the “manager”). Second, following Huber and Pierce, we reveal how these workers react to and perceive the absence of platform constraints: looking at bug bounty platforms as an “empty shell” reveals the plurality of articulation work that managers provide to ensure their program is successful. In the remainder of the introduction, we describe how we engage with scholarship on bug bounty, how we leverage the theoretical framework of articulation work, our research question, and our methodology.
Literature review and theoretical framework
Bug bounty “as a platform”
Large vendors of bug bounty programs, such as Bugcrowd and HackerOne in the United States, and YesWeHack and Intigriti in Europe, all emerged during the 2010s. The history of bug bounty has already shown how this vendor-driven model emerged following a decades-long shift in the professional and legal status of cybersecurity researchers. A few organizations (such as Netscape and Mozilla) experimented with this form of vulnerability reporting throughout the 1990s and 2000s, before Meta and Google put bug bounty into the spotlight in the early 2010s by releasing their highly publicized programs. Throughout the 2010s, the model reached the mainstream when Etsy, GitHub, and Microsoft also launched their own programs. Bug bounty gained respectability in 2016 with the U.S. Department of Defense’s widely reported “Hack the Pentagon” event.
Large bug bounty vendors apply the platform model to bounty hunting by monetizing their brokering function between the two parties. In this context, scholars have extensively revealed the detrimental implications this model has for hackers’ labor. Large vendors have effectively commodified vulnerabilities by taking control over the disclosure schedule and reward. Moreover, they have created a bug market that is highly stratified and power-law distributed: a few researchers report frequently and for high rewards, while the vast majority report rarely and for little to no compensation. As a result of this market structure, “skilled bug bounty hunters rarely make a good living by Western standards”. In other words, bug bounty vendors transform security researchers into “gig workers,” a form of labor characterized by casualization, informalization, and precarity.
Scholars of bug bounty have so far overwhelmingly focused on the labor supply side, that is, on hackers. What is still missing is an equally rich analysis of how the platform model shapes the work that clients must do to manage their programs. Moreover, the existing focus is on the multiple constraints that such platforms create. Following Huber and Pierce, we show that the absence of constraints in bug bounty platforms is equally important in shaping how program managers work, which we analyze here using the concept of articulation work.
Articulation work and computer-supported cooperative work
Articulation work originates from the sociological analysis of professional practices and work situations, and is central to the field of computer-supported cooperative work (CSCW). The concept reveals the plurality of forms of work, the involvement of workers, and the various levels of visibility attached to each. Strauss first defined it as the work involved in organizing both tasks and the relationships between workers.
The strength of the concept is to link the general goal of a project and the many tasks required to complete it. Strauss names the former the “arc of work”, i.e., the complete set of tasks organized both in sequence and in parallel throughout the trajectory of a project. In this view, completing such a project requires the integration of “tasks, clusters of tasks, and segments of the total arc”. The person ensuring that all the sub-projects eventually align toward the completion of the project is said to provide articulation work.
Subsequent scholarship has extended the concept by emphasizing the plurality of elements constituting articulation work. Lucy Suchman described the ongoing effort to integrate fragmented elements – such as organizations, professional practices, and technologies – into working configurations. Similarly, she noted that articulation work requires “diverse discursive and material, human and artifactual elements” that must be integrated to build “stable organizations and artifacts”. The articulation workers align all these elements toward a coherent end.
While there is a paucity of scholarship in platform studies using the concept, the work of Huber and Pierce is enlightening as it reveals the forms of articulation work that a platform architecture generates. For them, the looser the rules of a platform, the more articulation work it requires from workers to conduct their activity, e.g., a health practitioner must compensate for the missing “professional norms, regulations, and managerial expectations” on teletherapy platforms. In this article, we similarly apply the concept of articulation work to bug bounty platforms, and ask: How do bug bounty platforms shape the work of managers on the client’s side?
Methodology
We conducted 13 semi-structured interviews with employees in charge of managing the bug bounty program at their current or past company. Twelve respondents worked at private companies and one at a non-profit organization. All respondents except one had contracted with one of the four major bug bounty vendors (Bugcrowd, HackerOne, YesWeHack, and Intigriti). Most respondents contracted with one vendor only, and one contracted with two vendors. Our respondents managed public programs, private programs, or a mix of both.
While we use the generic terms “program manager” or simply “manager” in this article, our respondents identified with several different titles. Some managers worked full time on managing the bug bounty program, while others did so alongside other responsibilities. They worked at companies located in Europe, North America, Australasia, and South America. This coverage is limited but representative of the global geography of the bug bounty market, with the major vendors exclusively headquartered in the Global North and with only a limited customer base in the Global South. Despite our efforts to recruit non-male respondents, we ended up with only two female respondents, echoing the strong male dominance of the cybersecurity sector.
We recruited respondents in three phases. First, we relied on personal connections in the field to recruit two respondents to pilot the question guide. After revising the question guide, we published a call for respondents in a private bug bounty forum, and several other managers volunteered to be part of the study. We extended the pool by looking up the authors of customer testimonials published on vendors’ websites and inviting them to participate in the study via LinkedIn messages. Finally, we contacted additional program managers whom we found on LinkedIn using variations of the query “bug bounty program managers.” We stopped recruitment upon reaching data saturation, after 13 interviews with managers working at 10 different companies.
We conducted all the interviews via Zoom between March 2022 and March 2023, after piloting the question guide twice. We designed our topic guide around key themes based on the existing literature, our past research on the topic, and our research objectives for this article. The 13 interviews were conducted with one respondent each and lasted between 35:38 and 71:48 minutes. Respondents were not compensated. Interviews were conducted in English or French. Prior to the interview, we sent an information sheet and consent form via email and collected respondents’ signed consent. Two respondents requested oral consent, which was recorded prior to starting the interview. One respondent requested that the interview not be recorded but agreed to have the interview notes used in the analysis. Audio files were transcribed via a paid service, and all transcripts were anonymized. We then conducted a thematic analysis of the transcripts, proceeding both deductively (coding for themes we had already identified in the literature and our past research) and inductively (coding for new themes that emerged from the transcripts). This thematic analysis took the form of a codebook, created by one of us in NVivo, containing the deductive and inductive codes.
Findings
To reveal the articulation work of managers, we first need to define the services that platform vendors provide to their clients, ordered here chronologically. Before the program starts, vendors advise clients on how to design and manage their program, e.g., whether they want a public or private program. Other key parameters are the types of vulnerabilities clients will accept (the “scope”) and the type and level of rewards. To start the program, vendors leverage their database of independent researchers to connect them to clients and their programs. While the vendors’ outreach to hackers is minimal in a public program, they directly invite researchers to specific private programs. Particularly appreciated by our respondents are background checks (basic identity checks, presence on sanctions lists, etc.) and payments (especially dealing with different tax systems and foreign currencies). Optionally, the vendors can also review hackers’ submissions before passing the relevant ones to the clients (“triage”). This list reveals that platform vendors take care of the logistics of a program (program design, inviting researchers, payment, triage, etc.).
However, all the work necessary to animate the program – i.e., getting actionable reports of relevant vulnerabilities – is the responsibility of the program managers. Our study reveals, first, that this labor is complex and multifaceted; specifically, it requires four types of articulation work, which we describe in the first findings section:
- To obtain valid reports from researchers, managers must attract quality hackers and convince them to participate in the program;
- They must manage the high number of low-quality submissions (the “noise”), usually by using the triage function of the vendor;
- Prior to remediating the vulnerability, the manager must identify the owner of the compromised asset and convince them to fix it;
- The public disclosure of the bug usually requires a negotiation with the hacker around its timing.
Figure 1. Articulation between main and sub-tasks in the bug bounty pipeline.
Each of these tasks constitutes a sub-project, and it is the manager’s role to align them with the arc of the project. We describe what each of these four types of articulation work entails and what is at stake. In the second findings section, we show that while this articulation work is additional labor for managers, they consider it a positive aspect of their work. Managers mostly enjoy the opportunity to interact with hackers, even though this can involve some adversarial behaviors. Managers are mostly satisfied with the “empty shell” that platform vendors provide and are not looking for vendors to impose more constraints or to weigh in on their relationship with hackers.
Four types of articulation work
Finding and retaining quality hackers
It was not hard for the managers we interviewed to obtain bug reports via their programs, but it was harder to obtain relevant ones. Several of them designated the high volume of irrelevant bug reports as “noise.” Bug reports are classified as noise if they are outside the scope of accepted bugs, if they are duplicates, or if they are merely the output of automated scanning tools. Respondent 9 described this noise as “the worst problem for [them].” Managers attribute this noise to the high number of low-qualified hackers, as eloquently put by R12: “On the platform, you have many script kiddies that are just launching a vulnerability scan and providing a report.” Moreover, programs compete to attract the best hackers: “If you are a big beefy company,” as R3 says, “there will be people lining up at the door to get into your program.” Smaller companies, however, will have a harder time finding quality researchers for their programs. A first difficulty for managers is therefore to find competent hackers to join their programs.
While triage (described in “Managing the ‘noise’ via triage” below) allows managers to deal with the noise quantitatively, equally crucial is how managers deal with this issue qualitatively and preemptively. They create and maintain a network of hackers to attract the best ones to their program. To start with, program managers multiply venues for recruitment. They use their own channels and do not rely solely on the pool of researchers that the vendors provide, for example in a private program. They use various social media as recruitment channels, such as WhatsApp or Signal groups, X, or bug bounty forum Slack channels – all of which are key “to create awareness about [their] programs,” according to R13.
Finding quality hackers sometimes requires clients to train them. For example, R7’s team visits universities “to find that next generation of talent.” They also give students “books and media posts and all the bug bounty community research” to get them started with bug hunting. Simply spreading the word might not be enough for some specific products that are complex to understand and therefore to test. Some managers like R6 must literally “train [researchers] to speed up the preconditions to start to report vulnerabilities” by providing “technical deep dives on design work and the componentry involved in software design,” while being aware that such preliminary and additional work might be a deterrent for some researchers.
After scouting quality hackers, managers must keep them engaged with their program. Program managers with both public and private programs can shift researchers from one pool to the other as a reward: “If we see a hacker from the public program […] being really active and sending us valuable reports, we immediately invite him in private, as a sign of appreciation” (R11). Private programs are invite-only, and rewards there are typically better. Managers can also contact researchers with whom they have already worked in past programs.
Another key factor in increasing engagement is the reward. While the reward structure is defined up front and communicated to the researcher via a severity scale, managers can be flexible to keep the researcher participating in the program. For example, since a frequent difficulty for triagers and managers is understanding the report (typically requiring a back-and-forth with the researcher), some managers will offer an additional reward for a high-quality report providing clear steps for quick replication. Similarly, some will pay for out-of-scope bugs if they are of strong interest to the security team. Others, like R6, can add a monetary bonus “just to reward the effort,” e.g., a detailed report, even for a duplicate vulnerability.
Non-monetary rewards are another common way for managers to keep researchers engaged. Since there is strong competition between programs, managers like R5 need to innovate beyond financial rewards “to make [researchers] feel that they’re part of a big thing that’s happening.” Swag (e.g., a piece of clothing with the name of the client’s company) is an important tool for reputation building, especially at public events. To some extent, swag can be more valued by researchers than monetary rewards, as R6 ponders: “Swag is one of the recognitions that we do, and I find that extremely useful. It’s really interesting to me that if you send someone a $50 gift card, or you send them a $20 hoodie, they’ll like the hoodie more than the gift card.”
Managing the “noise” via triage
A common way of dealing with the high volume of reports is to hire the services of a triage team. Vendors provide access to such a team as an additional paid service. Triagers are the first to receive the report and, crucially, they assess the severity level of the reported bug, which corresponds to a reward structure (typically, the higher the severity, the higher the reward).
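As a rough illustration of how such a severity-to-reward mapping can work, consider the following sketch. The bands, thresholds, and amounts are hypothetical (loosely modeled on a CVSS-style scale) and are not drawn from our interviews or from any specific vendor:

```python
# Illustrative sketch only: the severity bands and bounty ranges below are
# hypothetical examples, not taken from any specific vendor or program.

REWARD_TABLE = {  # severity band -> (min, max) bounty in USD
    "critical": (5000, 20000),
    "high": (1500, 5000),
    "medium": (500, 1500),
    "low": (0, 500),
}

def severity_band(score: float) -> str:
    """Map a CVSS-style score (0.0-10.0) to a severity band."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

def reward_range(score: float) -> tuple:
    """Return the (min, max) bounty a triaged report could command."""
    return REWARD_TABLE[severity_band(score)]

# Example: a report triaged at CVSS 8.1 falls in the "high" band.
print(severity_band(8.1), reward_range(8.1))  # -> high (1500, 5000)
```

As the previous section showed, managers treat such a table as a starting point rather than a hard rule, topping it up with bonuses for report quality or out-of-scope findings.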
Since clients outsource this service to the vendor, one could assume that their relations with triagers are straightforward. Managers, however, must constantly monitor triagers’ work. First, since the triage team is the first point of contact with the researchers’ community, they are the “front” of the company, and some managers want the triagers to represent their company positively in the hackers’ community. R4 mentioned how they had to “teach” the vendor’s triage team to treat researchers more respectfully. A researcher who has had a poor triage experience will not be inclined to look for more bugs for this company. Some clients have exposed complex products to hackers, requiring their team to train triagers to understand the product and better identify relevant bugs. Finally, the specific needs of the clients must also be clearly and constantly communicated to the triage team to customize their treatment of bugs – for example, to receive any “customer questions and complaints” as well as bugs – which for R4 usually came after “a long process training” of the triage team.
Finding the internal owner of the compromised asset
Once the bug has passed through triage and has been identified as a valid report, fixing it requires the manager to create another sub-project: they must coordinate with the internal owner of the product affected by the bug, who must fix it. In small companies, the security team might handle both the program and the remediation of the bug, but there is frequently a division of labor between the security team and the developer/production team. The former allocates the report and monitors its remediation, while the latter fixes the vulnerability. Once again, this process is not always straightforward.
Upon reception, some product owners might deny the severity of the report altogether. R12 recollects how they once reported to a staff member that an internal document was not protected (hence accessible by uncredentialed staff). The answer they received was that it did not matter, since “people don’t have the link” – leading the manager to reflect that “sometimes, internal communication is more complex than communication with a researcher.” Others will try to dodge the responsibility of fixing the bug, as R3 reports: “If you’re a software engineer and the bug is assigned to you, you’ll be like: ‘No, no, no, no, that’s not me. But I know who that person is!’”
Product owners might also feel accused of doing their job wrong, a feeling the program managers have to defuse: “It’s a very awkward conversation,” recollects R2. A vulnerability is usually not introduced on purpose. “Nobody wakes up one day and says, I’m going to write terrible code,” as R3 put it; more plausible causes are that “they might not know how to write secure code, or they might be in a hot rush because some product manager might be pushing them.” R2 reveals that some product owners develop a strong emotional attachment to their product, which will result in pushback after the manager has “just told them they have an ugly baby”, i.e., that their product is faulty:
They can spend an entire career on one set of products. They’re very attached to that. […] Then you have somebody from the outside who comes in and says: “Oh, by the way, your product is all messed up and how dare you put this product out there in the wild.” So, there is a natural conflict that happens there. (R2)
After the negotiation with the product owner to accept the bug, managers must monitor their progress in fixing it. The product team might respond reluctantly to such requests, as R3 put it: “Sometimes they ignore it. Sometimes they don’t look at it. Sometimes they don’t have time.” Managers use Service Level Agreements (SLAs) and their specific timelines to pressure the development team by “holding any SLAs over their head, making sure that they’re moving forward.” Managers can also escalate to increase pressure on the internal team, as R3 sometimes does: “Break it to the board. Here’s the risk that is coming from older bugs that are not being fixed. Create pressure from the top and create responsibility.”
Some companies spend considerable resources to inform and train their staff to fix bugs early and quickly, as R7 recollects: “They do now understand what we are talking about, they do understand the SLA and that they need to fix that.” Other companies have been less successful. The best R1 could get was a compromise over a 90-day response time: “Even that had to be a bit of a battle with our developers internally. Seriously? […] 90 days is a long time.” The team has since agreed to “shorten it down but even then, once in a while, a vulnerability stretches a bit beyond for whatever reason” – an “unfortunate” result according to the manager.
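To make these mechanics concrete, the following sketch shows how a manager might track remediation SLAs and flag overdue bugs for escalation. The per-severity windows are hypothetical examples; only the 90-day figure echoes R1’s negotiated response time:

```python
# Illustrative sketch: tracking remediation SLAs. The per-severity windows
# are hypothetical examples; only the 90-day figure echoes the interviews.
from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 60, "low": 90}

def sla_deadline(assigned: date, severity: str) -> date:
    """Date by which the product team is expected to fix the bug."""
    return assigned + timedelta(days=SLA_DAYS[severity])

def needs_escalation(assigned: date, severity: str, today: date) -> bool:
    """True if the fix is overdue, i.e., a candidate for escalation."""
    return today > sla_deadline(assigned, severity)

# Example: a high-severity bug assigned on 1 March is overdue by 15 April.
print(needs_escalation(date(2022, 3, 1), "high", date(2022, 4, 15)))  # True
```

Escalation then takes the organizational forms described above, from SLA reminders to reporting the risk to the board.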
Negotiating the disclosure schedule
After fixing the bug, there can still be tensions around the timing of the disclosure of the bug. Researchers are usually independent workers in a reputation-based market: they want to quickly promote their skills to the community by describing how they found a specific bug (online or at key industry events). This desire to publicize their finding clashes with the desire of companies to keep control over the communication and timing of public disclosure.
Companies try to work out a timeline with researchers to make sure they have time to fix the bug internally, then issue a patch to their customers and leave them time to implement it, before they (or the researcher) publicly disclose the bug, as R2 put it:
If we disclose to the public, we’re actually zero-dating our own customers, which is not a good thing to do. We give them pre-notice and say, okay, this is going to come out in 90 days, here is our suggested security implementation […] They then will do whatever work they need to. If they need to hold for whatever reason then we’ll go back to the security researcher and say, this is what’s going on, we’re going to be delayed for two weeks. At that point, once the 90 days are up, we disclose the vulnerability publicly.
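The timeline R2 describes is simple arithmetic, but it has to be tracked and renegotiated explicitly. A minimal sketch of such a calculation, where the extension mechanism is our hypothetical rendering of the delays R2 mentions:

```python
# Illustrative sketch: computing a coordinated disclosure date. The 90-day
# window and two-week delay echo R2's account; the function is hypothetical.
from datetime import date, timedelta

def disclosure_date(report_received: date, extension_days: int = 0) -> date:
    """Planned public disclosure: 90 days after the report, plus any
    delay negotiated with the researcher."""
    return report_received + timedelta(days=90 + extension_days)

# Example: report received 1 March 2022, one two-week delay negotiated.
print(disclosure_date(date(2022, 3, 1), extension_days=14))  # 2022-06-13
```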
Managers mention that hackers frequently use such threats of disclosure: R4 recollects that “many times we receive a submission that starts with: ‘I intend to disclose this within 90 days, here’s the issue.’” Hackers might have experienced poor treatment in the past from a client or the vendor’s triagers and might be keen on getting their attention via a preliminary show of force. It can also be a way of guaranteeing speedy treatment of their report.
In practice, the disclosure of a bug usually happens via negotiation between the client and the hacker. Program managers have experience and procedures to deal with such a threat and will usually engage with researchers to control the narrative around the bug. R4 mentions:
If they are going to disclose, we have playbooks for responsible disclosure where we ask them: “Hey, can you share the blog with us? Can we have a say in the blog? Can we change this? Can you work with us here?”
Managers’ perception of articulation work
Scholarship on platform labor tends to emphasize how labor management via platforms generates hardship for workers. Adopting a symmetrical point of view, we examine here how the managers of the programs perceive the way the platform shapes their work. Managers appreciate the fact that the minimal involvement of the platform allows them to interact directly with the hacker community, which is, for many, an important reason to have a bug bounty program. They acknowledge, nonetheless, that the model generates adversarial interactions with hackers due to competing interests and timing. They do not, however, want more involvement from the platform, except in rare cases where mediation is needed.
For many managers, attending events, hosting receptions, talking to highly skilled hackers, and learning about new techniques are all aspects of bounty hunting that they value. R11 remembers the positive outcome of attending a hackers’ event: “it was awesome, to meet the hackers in person, to have a chat with them, see the real person behind the username, shake hands, have some fun together, and collaborate during the live hacking event.” R6 mentions the importance of “putting names to faces” via informal meetups at conferences:
we’re going to a conference, let’s take everyone out to dinner or happy hour or something to put names to faces. And [to] recognize people as humans rather than just a name on the other side of the screen somewhere around the world.
Others, like R7, have deeply integrated the organization of hacking events into their bug bounty management:
From two years ago to now, we spent a lot of money going to different countries and organizing hacking events. And getting there with them to try to help them to their vulnerability or try to answer their questions in a close way. That’s very important to be close with them. Not just in terms of playing, but in terms of people skills.
Managers acknowledge, however, that not all interactions with hackers go well. The platform brings together two actors with opposed goals: hackers want money and timely processing for their reports, and managers must enforce a scope of accepted bugs, a reward scale, and a timeline for remediation. These two interests can clash. As R2 put it: “Sometimes hackers behave badly. We had an instance where somebody […] was unhappy with the results of their submission, and they started basically harassing and stalking [the employee].” In such cases, the vendor can act as a mediator in the conflict. This is important to managers such as R2 to “make sure that there’s a code of conduct, make sure that there’s an escalation policy.” The effect is usually positive: “Sometimes having that objective third party is enough to calm the situation down.”
Going one step further, dealing with hackers can have strong implications for managers’ mental health. As R2 carries on: “There is a high burnout rate because everyone’s mad at you all the time no matter what you do.” Their company has implemented a surprising measure against this:
There’s a company out there called [ – ] and they do conflict resolution training. It’s funny because the person who created it was an FBI hostage negotiator. All of our team goes through that training. And it’s basically to help you figure out how you do not internalize when people are yelling at you all day long.
This minimal engagement of vendors, leaving managers the work of animating the program, directly echoes the model of the platform as an empty shell. However, despite the tensions this model can generate, managers do not ask for greater platform involvement. As R5 put it, the direct link to the researchers is the lasting bond they want to create, and it is stronger without the mediation of the platform:
I think for our organization, we want to have that direct line with the researchers. We want our brand to be associated to it, not necessarily the vendor […] Because vendor relationships change. And we want that loyalty towards us, not other places.
While managers acknowledge the difficulties of coordinating with several actors and tasks, they do not want the vendor to be more present in their relationship with hackers.
Conclusion and future research
Imagine if the user of a ride-sharing application, to obtain a ride, had first to get in touch with the drivers’ community, convince a driver to come to their location, and proactively maintain good rapport to keep getting rides. While this sounds inconceivable in many sectors, it is exactly the work bug bounty vendors ask their clients to perform to get hackers reporting bugs to their programs. While our sample of respondents is relatively small, our analysis of the manager’s side of such programs complements existing scholarship on bug bounty, which overwhelmingly focuses on hackers. Drawing upon the concept of platforms as an “empty shell”, we reveal that managers must engage in four types of articulation work to make their program a success. Each of the important stages of a typical bug bounty pipeline (1. obtaining bug reports, 2. reviewing the report, 3. fixing the bug, and 4. disclosing it) generates a sub-project that is necessary for the success of the program (respectively: 1. attracting quality hackers, 2. managing the noise, 3. finding the asset owner, and 4. negotiating the disclosure schedule with hackers), cf. Figure 1. Managers coordinate all these sub-projects and align them toward the arc of the project. Furthermore, while this study included respondents with public and/or private programs, further research uncovering the reasons why managers choose one over the other (or both), and what effects this choice has on the articulation work with the other actors involved, could extend the results of this article.
The specific case of bug bounty platforms enriches the analysis of digital platforms by highlighting a set of new tensions, summarized in Table 1.
Table 1. Summary of current and future research.
First, the article shows that focusing only on the labor supply side (in this case, the hackers) misses half of the consequences this model has on platform work, and it makes a case for looking symmetrically at the consequences on the demand side. Second, it expands existing calls to analyze how the platform model shapes labor via its absence of constraints, which can be as powerful as its constraints in shaping labor. Third, while resistance to platforms’ constraints can be important and generative, some actors endorse the “empty shell” that the platform provides. Fourth, while platform scholars aim to design a governance model adapted to the agency of platforms, some actors are satisfied with the minimal involvement of platform owners. Finally, the concept of articulation work and CSCW writ large are useful perspectives from which to critically study the consequences of the platform model on labor, and we hope more research will adopt this interdisciplinary perspective.
Jean-Christophe Plantin is an Associate Professor at the Department of Media and Communications at the London School of Economics and Political Science. His research investigates the increasingly infrastructural role that digital platforms play in society.
Louise Marie Hurel is a Research Fellow in the Cyber and Tech team at RUSI. Her research interests include incident response, cyber capacity building, cyber diplomacy and non-governmental actors’ engagement in cyber security.