METADATA
last updated: 2026-03-03 RT
file_name: _context-commentary_various-xprize.md
category: various
subcategory: xprize
words: 1470
tokens: 2025

CONTENT

## Context

The XPRIZE Rapid Covid Testing competition was a $5 million, seven-month global competition launched on July 28, 2020. Organized by the XPRIZE Foundation, it aimed to incentivize the development of fast, frequent, affordable, and scalable COVID-19 testing solutions. The competition grew out of the broader Open COVID Screen initiative, which sought to accelerate diagnostic testing availability during the pandemic.

The competition followed a staged process. Teams registered and submitted qualifying applications by the August 31, 2020 deadline. FloodLAMP Biotechnologies was among the 219 semi-finalist teams selected from 35 countries. Semi-finalists were sent blinded proficiency test kits containing contrived, non-infectious samples (synthetic SARS-CoV-2 RNA and inactivated viral particles in various matrices including saliva, nasal swab material, PBS, and water). Kits were shipped between September 24 and October 2, 2020, and teams had one week from receipt to analyze the samples and upload results, which were scored on sensitivity, specificity, and limit of detection. Finalists were originally scheduled to be announced October 23, 2020, with clinical validation at two independent laboratories to follow. An Open Innovation Track ran in parallel, with finalist announcements in December 2020 and winners in February 2021.

The competition offered $1 million to each of up to five winning teams, divided into installments tied to competition performance and successful deployment at test sites. OpenTrons partnered with XPRIZE to provide liquid handling robots to support teams during the pilot phase. However, significant delays in the preparation and shipment of the proficiency test panels pushed the competition's overall timeline roughly three to six months behind its original schedule. These delays affected all participants and compounded the broader operational challenges of conducting laboratory work during the pandemic.

FloodLAMP's XPRIZE submission was built around the Rabe and Cepko RT-LAMP assay protocol from Harvard Medical School, which used chemical inactivation (TCEP/EDTA), nucleic acid purification with ultra-cheap bulk silica ("glass milk"), and isothermal LAMP amplification with colorimetric readout. The approach targeted the ORF1a and N genes using saliva and nasal swab samples, and was designed for pooled screening of large populations at very low per-sample cost (estimated $0.46 per sample with pooling at 10; an illustrative sketch of the pooling arithmetic appears below). FloodLAMP's submission emphasized open-source distribution and enabling other basic labs to adopt the screening protocol, rather than building proprietary closed systems. The complete qualifying submission, with parts covering contact/design, results, capacity/scalability, innovation, targets, reagent costs, equipment, and presentation slides, is documented in the archive files.

FloodLAMP submitted proficiency test results but did not advance to the finalist round. Twenty teams were selected as finalists in December 2020, and five grand prize winners were announced in March 2021: Reliable-LFC (antigen testing), ChromaCode, Mirimus, La Jolla Institute for Immunology, and Alveo Technologies (all RNA testing).
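As a rough illustration of the pooled-screening arithmetic behind per-sample cost figures like the one cited above, the sketch below applies the standard two-stage (Dorfman) pooling model: one reaction per pool, with individual retests for samples in positive pools. The per-reaction reagent cost and the assumed prevalence are hypothetical placeholders, not values taken from the FloodLAMP submission; the point is only to show how a pool size of 10 amortizes a single reaction across many samples and drives the per-sample cost into the sub-dollar range.

```python
# Illustrative Dorfman-pooling cost sketch (hypothetical inputs, not FloodLAMP's cost model).
def pooled_cost_per_sample(cost_per_reaction, pool_size, prevalence):
    """Expected assay cost per sample under simple two-stage (Dorfman) pooling.

    Stage 1: one reaction per pool of `pool_size` samples.
    Stage 2: every sample in a positive pool is retested individually.
    """
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    expected_reactions_per_sample = 1 / pool_size + p_pool_positive
    return cost_per_reaction * expected_reactions_per_sample

# Hypothetical inputs: ~$4 per LAMP reaction, pools of 10, 0.5% prevalence.
print(round(pooled_cost_per_sample(4.00, 10, 0.005), 2))  # ~0.60 per sample
```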
FloodLAMP's proficiency test results (`XPrize FloodLAMP Proficiency Test Results.md`) show reasonable performance on the Zepto particle rack (51 of 69 correct, zero false positives, with false negatives concentrated near the limit of detection) but poor sensitivity on the Twist Synthetic RNA rack (13 of 56 SARS-CoV-2 positives detected), where the water-based sample matrix was incompatible with FloodLAMP's TCEP/NaI silica purification protocol. This buffer incompatibility was a concern FloodLAMP had flagged in its original submission. A short sketch showing how these counts map onto the scoring metrics appears further below.

This subcategory contains files documenting the XPRIZE competition from FloodLAMP's perspective: the multi-part qualifying submission, presentation slides, proficiency test results, legal agreements (competitor agreement, team member release/waiver, registration fee certificate), competition communications (guidelines, PR toolkit, field notes toolkit), proficiency test kit documentation (FAQ, handling instructions), the list of 219 semi-finalist teams, and a separate AI-generated analysis exploring the pre-competitive vs. open-source question (`_AI_gLAMP and XPrize - pre-competitive vs open-source.md`). The gLAMP subcategory under various/ contains related material on the Global LAMP Consortium that provides additional context for the open collaboration themes discussed in the commentary below.

## Commentary

FloodLAMP entered the XPRIZE Rapid Covid Testing competition motivated by the same goals that drove the company's founding: scaling affordable, accessible molecular testing during a pandemic. In hindsight, however, the prize competition model may not have been the most effective use of the resources and goodwill available at the time.

The XPRIZE grew out of the Open COVID Screen initiative, which initially attracted participants with a vision of collaborative, open development of diagnostic tools. Many groups, including FloodLAMP, joined because of that open ethos. As the competition formalized, the dynamic shifted toward a more closed and competitive model. This was reflected in the competition's legal framework: early versions of the participation requirements included open protocol disclosure, but that requirement was later removed. Some teams did publish protocols through platforms like protocols.io, but the overall trajectory moved away from open collaboration and toward proprietary competition. The analysis in `_AI_gLAMP and XPrize - pre-competitive vs open-source.md` in this subcategory explores this tension in detail, comparing the pre-competitive framework used by the gLAMP consortium with what a fully open-source alternative could have looked like.

We believe the prize money and the broader resources channeled through initiatives like the XPRIZE could have been more productively directed toward a coordinated open-source effort. The private market incentives for diagnostic development during a pandemic were already powerful; adding a prize competition on top may not have accelerated progress, and the competitive framing may have been counterproductive: it attracted groups that would otherwise have contributed to an open commons and instead channeled their energy into closed, competitive tracks. The subsequent collapse of the diagnostic testing market, with many innovative companies going bankrupt once pandemic demand evaporated, provides empirical evidence that the private market alone was not a sustainable vehicle for pandemic testing infrastructure.
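Returning briefly to the proficiency numbers summarized in the Context section above, here is a minimal sketch of how those counts translate into the competition's scoring metrics. It computes only quantities derivable from the figures reported in this file; the full per-sample breakdown is in `XPrize FloodLAMP Proficiency Test Results.md`.

```python
# Scoring sketch using only the counts reported in this commentary file.
# The per-rack true-positive/true-negative split beyond these counts is not
# given here, so only the directly derivable quantities are computed.

def sensitivity(true_pos, false_neg):
    """Fraction of positive samples correctly called positive."""
    return true_pos / (true_pos + false_neg)

# Twist Synthetic RNA rack: 13 of 56 SARS-CoV-2 positives detected.
print(f"Twist rack sensitivity: {sensitivity(13, 56 - 13):.0%}")  # ~23%

# Zepto particle rack: 51 of 69 calls correct overall with zero false positives,
# so the 18 misses were all false negatives near the limit of detection.
print(f"Zepto rack overall accuracy: {51 / 69:.0%}")  # ~74%
```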
FloodLAMP's own XPRIZE submission used a silica-based purification protocol (the "glass milk" approach from the Rabe and Cepko assay) that, while functional, was not the protocol the company would ultimately prioritize. The silica protocol required a multi-channel pipette and a plate centrifuge, making it finicky and poorly suited for the low-resource, instrument-minimal settings FloodLAMP was targeting. Labs that already had this equipment could generally afford commercial purification columns, limiting the practical value of the ultra-cheap silica approach for its intended audience. FloodLAMP later moved toward a direct LAMP approach (no RNA extraction) that better aligned with its mission of ultra-accessible screening. The early glass milk protocol is not included in the archive for this reason.

FloodLAMP submitted proficiency test results after running the plates in a last-minute, final-night effort but did not advance to the finalist round. The results, now included in the archive with the answer key, show the assay performed reasonably on the rack containing inactivated viral particles in biological matrices (PBS, saliva, nasal) stored at 4°C: 51 of 69 correct calls, zero false positives, and an effective limit of detection around 2-5 copies/µL. However, the assay failed almost entirely on the rack containing Twist Synthetic RNA in water stored at -80°C, detecting only 13 of 56 positive samples. This was precisely the buffer incompatibility concern FloodLAMP had raised in its submission: the silica-based nucleic acid binding protocol depended on the TCEP inactivation chemistry and did not work with water-based samples that bypassed that step. Specificity was excellent across both racks: zero false positives and zero cross-reactivity calls against 15 other respiratory viruses.

These results represent the early glass milk purification version of the FloodLAMP assay. Shortly after the XPRIZE, at the end of 2020, FloodLAMP switched to a direct LAMP protocol (no RNA extraction) with dry swab collection and a combined elution/inactivation step. The analytical performance of that version is best understood from the March 2021 FDA submissions (see `regulatory/fl-fda-submissions/2021-03-22_EUA Submission - FloodLAMP QuickColor COVID-19 Test v1.0.md`), and the real-world performance is documented in the pilots data, which showed significantly higher sensitivity than rapid antigen tests, particularly for early and asymptomatic cases (see "Comparison with Antigen Tests" in `pilots/pilot-data/_context-commentary_pilots-pilot-data.md`).

The competition experienced substantial delays. Preparation and shipment of the blinded proficiency test panels put the timeline approximately three to six months behind the original schedule. This was not a failing unique to any single participant or to the XPRIZE organizers; it reflected the broader reality of operating during the pandemic: supply chain disruptions, personnel illness, and the personal toll of lockdowns and family care responsibilities made timelines unreliable across the board.

The deeper question raised by the XPRIZE experience is whether prize competitions are an appropriate mechanism for pandemic diagnostics. The structural economics of pandemic testing (spiky, unpredictable demand at price points incompatible with profit margins, on timelines misaligned with regulatory and product development cycles) suggest that testing infrastructure may be better treated as a public good.
The resources spent on incentive prizes and fragmented competitive efforts could have funded a coordinated open-source portfolio of testing protocols, maintained collaboratively and available to deploy immediately at the onset of the next outbreak. This was the argument FloodLAMP made to the gLAMP community during the pandemic, and it remains FloodLAMP's assessment in retrospect. The related materials in the gLAMP subcategory (under various/) provide additional context for this perspective.