METADATA
last updated: 2026-03-10_105744
file_name: _archive-combined-files_papers_14k.md
category: various
subcategory: papers
gfile_url: **FLAGGED - TBD user-facing Google-hosted public file URL**
words:
tokens:

CONTENT

# _archive-combined-files_papers_14k (4 files, 14,115 tokens)

# 9,056 _AI_Literature Review - Pandemic Testing and Screening Strategy.md

METADATA
last updated: 2026-02-23 AI
file_name: _AI_Literature Review - Pandemic Testing and Screening Strategy.md
file_date: 2026-02-23
title: FloodLAMP Literature Review - Pandemic Testing and Screening Strategy
category: various
subcategory: papers
tags:
source_file_type: md
xfile_type: NA
gfile_url: https://docs.google.com/document/d/13sYNGv-OnEOllnFn9B8TTwMrM7wjvXS91sCjWtqUv1w
xfile_github_download_url: https://raw.githubusercontent.com/FocusOnFoundationsNonprofit/floodlamp-archive/main/various/papers/Literature%20Review%20-%20Pandemic%20Testing%20and%20Screening%20Strategy.md
pdf_gdrive_url: NA
pdf_github_url: NA
conversion_input_file_type: NA
conversion: NA
license: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/
tokens: 9056
words: 5795
notes: Created by ChatGPT 5.2 Pro during archive preparation. **NOT HUMAN VERIFIED - MAY CONTAIN ERRORS** AI-generated annotated literature review of the 20 most important papers on pandemic testing and screening as a public-health intervention (surveillance testing, asymptomatic screening, test-and-trace timing, population programs, reporting systems, and preparedness). No source files were included in context; the response was generated from the model's training data.
summary_short: Literature Review - Pandemic Testing and Screening Strategy provides an annotated survey of 20 foundational papers on COVID-19 testing and screening as a public-health intervention, organized by theme (frequency/turnaround strategy, contact tracing timing, asymptomatic screening rationale, population-level programs, pooled testing, self-testing, wastewater surveillance, and preparedness infrastructure), with cross-cutting takeaways and a supplementary reading list.

CONTENT

## 1. Prompt (Verbatim)

Okay, your task is to do a comprehensive and general literature review and source the 20 most Important scientific papers for the field of COVID testing. And I especially want you to include any kind of retrospective, or, I mean, really just up to date analyses that have happened since the pandemic. It's important to specify that I'm not interested in papers on important tests. I'm interested in papers on pandemic testing and screening kind of in general and using testing for pandemic preparedness and response.

**Files included in context window:**
- None (generated from model training data in an external ChatGPT session)

## 1. Prompt (Cleaned)

Provide an annotated literature review of the most important papers on pandemic testing and screening as a public-health intervention -- covering surveillance testing, asymptomatic screening, test-and-trace timing, population programs, reporting systems, and preparedness -- rather than papers focused on which assay is best.

**Files included in context window:**
- None (generated from model training data in an external ChatGPT session)

## 1. AI Response (ChatGPT 5.2 Pro)

Below is a field-oriented literature review focused on pandemic testing and screening as a public-health intervention (surveillance testing, asymptomatic screening, test-and-trace timing, population programs, reporting systems, and preparedness) -- not "which assay is best" papers.
I'm treating "most important" as papers that (a) introduced widely used conceptual frameworks, (b) shaped real-world testing policies/programs, (c) provided generalizable empirical evaluations, and/or (d) are major post-pandemic syntheses on how to build sustainable testing capacity for future emergencies.

### How the Pandemic-Testing Literature Clusters

1) Testing is an intervention, not just a measurement
A key shift in 2020-2021 was moving from "highest analytical sensitivity wins" to "how fast and how often can you test, and what happens after the result?" This produced a strategy literature that treated testing + isolation/quarantine behavior as the core mechanism of transmission reduction.

2) Time-to-result and frequency dominate in screening settings
For screening (especially asymptomatic), slow turnaround can erase the benefit of a more sensitive test. This underpins most campus/workplace/school screening designs.

3) Testing + action + adherence is the real system
Nearly every successful program couples testing with: fast reporting, supportive isolation, clear incentives, and operational logistics (collection, lab throughput, staffing, compliance).

4) Scale is a first-class scientific problem
During surges, the binding constraint is capacity: throughput, supply chains, staffing, and sustainable manufacturing/distribution. That spawned:
- pooling/group testing methods when prevalence is low,
- program "case studies" for large populations,
- and post-pandemic analyses on how to avoid repeating 2020's bottlenecks.

5) Surveillance expands beyond clinical testing
Wastewater and other population signals became crucial when individual testing was limited or underreported (especially once at-home testing became common).

### The 20 Most Important Papers for Pandemic Testing and Screening (Annotated)

#### A. Foundational Strategy: Frequency, Turnaround, and Screening as Control

1. Mina MJ, Parker R, Larremore DB (2020).
"Rethinking Covid-19 Test Sensitivity -- A Strategy for Containment." New England Journal of Medicine.
A seminal argument that for public-health screening, speed and frequency can matter more than analytical sensitivity, reframing rapid tests as tools to reduce infectious contacts rather than to make definitive diagnoses.

2. Larremore DB, Wilder B, Lester E, et al. (2021).
"Test sensitivity is secondary to frequency and turnaround time for COVID-19 screening." Science Advances.
A highly influential modeling paper formalizing the "frequency + turnaround > sensitivity" principle, widely cited in debates over antigen screening and surveillance program design.

3. Paltiel AD, Zheng A, Walensky RP (2020).
"Assessment of SARS-CoV-2 Screening Strategies to Permit the Safe Reopening of College Campuses in the United States." JAMA Network Open.
One of the most policy-relevant early models of routine screening programs, translating assumptions (test cadence, delays, compliance) into reopening feasibility and costs.

4. Paltiel AD, Walensky RP (2021).
"Clinical and Economic Effects of Widespread Rapid Testing to Decrease SARS-CoV-2 Transmission." Annals of Internal Medicine.
A national-scale model for home-based rapid antigen screening, explicitly linking testing frequency, behavioral adherence, and cost-effectiveness -- helpful for thinking about "testing as a standing capability" for future waves.

#### B. Testing + Tracing + Timing: Why Delays Break Containment

5. Kretzschmar ME, Rozhnova G, Bootsma MCJ, et al. (2020).
"Impact of delays on effectiveness of contact tracing strategies for COVID-19: a modelling study." The Lancet Public Health.
A core result for operational preparedness: delays in testing and tracing sharply reduce impact, motivating investments in fast turnaround and streamlined tracing workflows.

6. Hellewell J, Abbott S, Gimma A, et al. (2020).
"Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts." The Lancet Global Health.
Early, widely cited work on when outbreak control is feasible with isolation/contact tracing -- implicitly setting requirements for case detection speed and completeness.

#### C. Why Asymptomatic Screening Matters: Silent Infection + Viral Dynamics

7. Sah P, Fitzpatrick MC, Zimmer CF, et al. (2021).
"Asymptomatic SARS-CoV-2 infection: A systematic review and meta-analysis." PNAS.
A major synthesis establishing that a substantial fraction of infections can be asymptomatic, strengthening the rationale for screening beyond symptom-based testing in many settings.

8. Kissler SM, Fauver JR, Mack C, et al. (2021).
"Viral dynamics of acute SARS-CoV-2 infection and applications to diagnostic and public health strategies." PLOS Biology.
Connects within-person viral trajectories to testing strategy: informs optimal screening intervals and why rapid detection around peak infectiousness is critical.

#### D. Quarantine "Test-to-Release": Testing as a Way to Reduce Societal Costs

9. Wells CR, Townsend JP, Pandey A, et al. (2021).
"Optimal COVID-19 quarantine and testing strategies." Nature Communications.
A general framework showing how testing on exit (and/or entry+exit) can shorten quarantine while maintaining risk control -- key for travel, critical infrastructure staffing, and future pandemic playbooks.

#### E. Population-Level Surveillance Beyond Clinics: Wastewater as "Community Testing"

10. Peccia J, Zulli A, Brackney DE, et al. (2020).
"Measurement of SARS-CoV-2 RNA in wastewater tracks community infection dynamics." Nature Biotechnology.
One of the most influential demonstrations that wastewater can track community dynamics -- foundational for early warning systems when clinical testing is limited or biased.

#### F. Real-World Population Screening Interventions: What Happened at Scale

11. Pavelka M, Van-Zandvoort K, Abbott S, et al. (2021).
"The effectiveness of population-wide, rapid antigen test based screening in reducing SARS-CoV-2 infection prevalence in Slovakia."
Science.
A landmark national case: population-wide rapid testing paired with isolation/quarantine policies, widely discussed as evidence (and a stress test) for mass screening feasibility and impact.

12. Zhang X, et al. (2022).
"Impact of community asymptomatic rapid antigen testing on COVID-19 related hospital admissions: synthetic control study." BMJ.
A major quasi-experimental evaluation (synthetic control) of a city-wide program (Liverpool) connecting asymptomatic testing with downstream outcomes (hospital admissions).

#### G. Institutional Screening Programs: Universities as "Mini Public-Health Systems"

13. Ranoa DRE, Holland RL, Alnaji FG, et al. (2022).
"Mitigation of SARS-CoV-2 transmission at a large public university." Nature Communications.
A detailed "stack" case study (testing + modeling + digital tools + NPIs) showing how fast, frequent surveillance testing can be operationalized at population scale with measurable transmission mitigation.

#### H. Schools and "Test-to-Stay": Replacing Blanket Quarantine with Serial Testing

14. Young BC, Eyre DW, Kendrick S, et al. (2021).
"Daily testing for contacts of individuals with SARS-CoV-2 infection and attendance and SARS-CoV-2 transmission in English secondary schools and colleges: an open-label, cluster-randomised trial." The Lancet.
A pivotal trial supporting daily rapid testing as an alternative to blanket isolation for contacts -- central to keeping schools open while managing transmission risk.

#### I. Scaling Capacity During Shortages: Pooling/Group-Testing as Pandemic Infrastructure

15. Shental N, Levy S, Wuvshet V, et al. (2020).
"Efficient high-throughput SARS-CoV-2 testing to detect asymptomatic carriers." Science Advances.
One of the best-known pooled testing frameworks (P-BEST), showing large efficiency gains -- important for low-prevalence screening and surge resilience.

16. Barak N, Ben-Ami R, Sido T, et al. (2021).
"Lessons from applied large-scale pooling of 133,816 SARS-CoV-2 RT-PCR tests."
Science Translational Medicine.
A rare and valuable "operational science" paper: what happens when pooling is deployed at scale -- error modes, logistics, and when pooling pays off.

#### J. The Post-2021 Reality: At-Home/Self-Testing and the Surveillance Data Gap

17. Herbert C, et al. (2022).
"Use of a Digital Assistant to Report COVID-19 Rapid Antigen Self-test Results to Health Departments." JAMA Network Open.
Addresses a central preparedness problem in the self-testing era: how (and whether) at-home results get reported to public health systems, and how incentives affect reporting.

18. Anand A, et al. (2024).
"Performance and operational characteristics of COVID-19 self-tests: a living systematic review." PLOS Global Public Health.
A key post-pandemic evidence synthesis focused specifically on self-testing, including operational issues (not just accuracy), which is crucial for future "distributed testing" strategies.

#### K. Retrospective and Preparedness-Focused Analyses: Sustaining Capacity Between Crises

19. Santos S, et al. (2025).
"The SARS-CoV-2 test scale-up in the USA: an analysis of the number of tests produced and used over time and their modelled impact on the COVID-19 pandemic." The Lancet Public Health.
A retrospective systems analysis of production, use, and modeled impact of the US testing scale-up -- directly relevant to preparedness planning (manufacturing, distribution, and timing).

20. Krylov SN, Krylova SM, Le ATH, et al. (2025).
"A policy roadmap for sustainable mass-testing." Health Affairs Scholar.
A forward-looking blueprint arguing that mass testing needs durable manufacturing/distribution capacity that persists between surges -- explicitly framed as pandemic preparedness infrastructure, not an ad hoc emergency tool.

### Cross-Cutting Takeaways

- **Turnaround time is a control parameter.** The benefit of a test is bounded by how quickly results trigger isolation/quarantine actions (Larremore; Kretzschmar; Wells).
- **Screening programs are socio-technical systems.** Adherence, accessibility, and reporting pathways can dominate theoretical performance (Paltiel 2021; Herbert 2022; Krylov 2025).
- **Mass testing can work, but it's not "just test kits."** Population programs depend on rapid isolation support, communication, and governance (Slovakia; Liverpool; UIUC).
- **Sustainable preparedness means keeping capacity warm.** The retrospective US scale-up analysis and the mass-testing roadmap converge on the same point: you can't improvise manufacturing + logistics during week 3 of a novel outbreak (Santos; Krylov).
- **Surveillance must diversify.** Wastewater proved useful as a population signal when clinical testing was constrained or behaviorally biased (Peccia).

### Optional "Next Layer" Reading (Highly Relevant but Outside the Top 20 List)

If you want to expand beyond 20, these fill common gaps:

- US diagnostic scale-up program design (early pandemic): Tromberg et al. on the NIH RADx initiative.
- Regulatory/implementation lessons for molecular diagnostics: Shuren & Stenzel (NEJM perspective).
- Real-world pooled testing at even larger scale (blueprint framing): Zismanov et al. (2024) on clinical pooled testing deployment.
- Effectiveness of rapid antigen screening for asymptomatic surveillance (review): Walsh et al. (2022).
- Travel/border testing strategy modeling: Kiang et al. (routine asymptomatic testing for airline travel).

### Setting-Specific Note

If you tell me the setting(s) you care about most (e.g., schools, workplaces, hospitals, travel, community distribution, wastewater/sentinel surveillance), I can reorganize the same evidence into a tighter, setting-specific review framework (still centered on pandemic testing/screening strategy rather than assay comparisons).

## 2. Prompt (Verbatim)

Okay, what I want you to do next is to take the last five bullet points for the optional next reading, next layer reading, and summarize those.
Well, for each of them, give a summary, a short bullet point summary of the key points of the paper, and then comment on that paper with respect to the questions around testing and pandemic preparedness and response. And what I'm getting at here is I want to know how, you know, these authors or the these papers answer the questions of how effective or how important is testing and screening as a tool for pandemic, you know, initial stage response to suppress the beginning and spread of a pandemic. And then how prepared and ready are the major stakeholders in implementing that if another pandemic started. And related to this is how they answer the question of how effectively did we use testing for the COVID pandemic and what were the problems with that.

## 2. AI Response (ChatGPT 5.2 Pro) - 5 Optional Next-Layer Papers

Below are the five "optional next-layer" papers you asked about -- each with (1) a short bullet summary of the paper's key points and (2) comments specifically tied to (a) how important/effective testing & screening are for early suppression, (b) how ready stakeholders are to implement those capabilities next time, and (c) what COVID exposed about gaps and failure modes.

### 1) Tromberg et al. (2020) -- Rapid Scaling Up of Covid-19 Diagnostic Testing in the United States -- The NIH RADx Initiative (N Engl J Med)

Key points from the paper

• Problem framing: As of July 2020, U.S. daily diagnostic testing capacity was roughly 520k–823k tests/day, while some models projected needs from ~900k/day up to 5 million/day (June) and even 20 million/day (July) -- highlighting a major mismatch between demand and capacity.
• RADx program launch & mandate: After Congress appropriated $1.5B to NIH for SARS-CoV-2 testing, NIH launched RADx within 5 days to support development, scale-up, and deployment of accurate, rapid tests.
• Quantified scale goal: One explicit aim was to expand capacity so that by December 2020 ~2% of the U.S.
population (~6 million people) could be tested per day.
• Program structure: RADx is described as four linked components:
  • RADx-tech (innovation funnel + validation support),
  • RADx-ATP (scale-up of more mature platforms, including "mega-labs"),
  • RADx-rad (nontraditional approaches, incl. home testing ideas / alternative modalities), and
  • RADx-UP (community-engaged implementation to improve access in underserved populations).
• Key bottlenecks named explicitly:
  • access to clinical samples for rapid validation,
  • digital connectivity/interoperability between results, EHRs, and public health reporting,
  • manufacturing/supply chain constraints (swabs, reagents, equipment, staffing), and
  • distribution/implementation into the right venues/regions with adequate uptake.

How it speaks to preparedness and response questions

• Early-stage suppression / importance of testing: The paper treats rapid, large-scale testing capacity as core enabling infrastructure (for individual diagnosis and population-level surveillance), and it's written from the premise that without scale, speed, and access you can't execute early containment strategies effectively.
• Stakeholder readiness for "next time": RADx is basically an "emergency industrial policy" for diagnostics -- suggesting readiness was insufficient without a crash program. The authors point to readiness dependencies that have to be in place before the next event: specimen access for validation, interoperable reporting, and manufacturing/distribution capacity.
• How well COVID testing was used / what went wrong: The article is an implicit postmortem of a system that needed massive, late scale-up: capacity in mid-2020 was far below what many models suggested; major operational constraints (supplies, manufacturing, reporting) were central problems.
• Equity as preparedness: A notable piece of "preparedness" here is that the program bakes in underserved-populations implementation and the social/behavioral drivers of testing uptake -- explicitly framing access and acceptance as key determinants of whether testing actually works in the real world.

### 2) Shuren & Stenzel (2020) -- Covid-19 Molecular Diagnostic Testing -- Lessons Learned (N Engl J Med, Perspective)

Key points from the paper

• EUA enabled speed, but with tradeoffs: After the U.S. emergency declaration in early Feb 2020, FDA's EUA pathway allowed tests to be deployed rapidly under a "may be effective" standard; FDA also created templates to streamline submissions and, by late July 2020, had authorized 163 COVID-19 diagnostic tests.
• Early validation approach was fast but imperfect: Initially, performance could be supported by in silico analyses and contrived ("spiked") samples, which sped development but was less likely to fully characterize performance versus real patient specimens; later FDA shifted to requiring validation using known-status samples and harmonized approaches.
• Comparability infrastructure: FDA developed and distributed reference panels (different levels of inactivated virus + blinded unknown-status samples) to allow comparative assessment of limits of detection and to anchor performance once standards exist.
• Quality problems + confusion were real: They describe misunderstandings around EUA, pressures from unmet need, and that some laboratory-developed tests and commercial submissions had design/validation problems -- highlighting the tension between speed and reliability.
• Clinical/public health interpretation matters: The authors emphasize predictive values and how false results scale in mass testing; they argue clinicians need clearer standardized performance information and training to interpret tests appropriately.
• Future-focused recommendations: They call for (i) pre-established international agreements and plans for rapid specimen sharing, (ii) prioritizing a small number of well-designed tests on common high-throughput platforms produced at scale (rather than "scores" of tests), (iii) proactive validation protocols before outbreaks, (iv) a common legislative framework to ensure clinical test accuracy, and (v) standardized performance communication + clinician education.

How it speaks to preparedness and response questions

• Early-stage suppression / importance of testing: The perspective supports the idea that testing can be a decisive early tool, but only if performance is trustworthy and results are interpretable at scale. Their focus is: speed is necessary, but not sufficient -- poorly characterized tests can undermine response.
• Stakeholder readiness: Their recommendations read like a checklist of what was missing: pre-negotiated specimen access, pre-built validation protocols, and a coherent plan to scale a smaller set of standardized assays/platforms. That implies readiness gaps were structural -- not just "we needed more tests."
• How well COVID testing was used / what problems emerged: They explicitly describe confusion about EUA, variable familiarity among developers/clinicians, and the risk of false results and test misuse -- suggesting that part of COVID's failure mode was interpretation + governance, not just technology.
• Key preparedness insight: The "small number of scalable, validated tests on common platforms" recommendation is basically a critique of fragmented, heterogeneous test proliferation that can strain supply chains, validation resources, and clinical understanding.

### 3) Zismanov et al.
(2024) -- High capacity clinical SARS-CoV-2 molecular testing using combinatorial pooling (Communications Medicine)

Key points from the paper

• Why this matters: Traditional two-stage pooled testing (Dorfman pooling) is typically practical only when positivity is low (the paper notes traditional pooling was used in practice mainly when positivity was <2%).
• Single-stage combinatorial pooling (P-BEST): P-BEST runs in one testing round by placing each sample into multiple pools + using a decoding algorithm; pooling designs are adapted as positivity changes.
• Clinical validation: They report validation with 3636 side-by-side tests; results include strong detection performance under their regulatory criteria and the ability to estimate Ct values (reported correlation r = 0.95 between measured and estimated Ct for positives).
• Real-world scale deployment: After Israeli Ministry of Health approval, P-BEST was used to test 837,138 samples with 270,095 PCR reactions (about 3.1× efficiency / 68% reduction in tests), including during Alpha/Delta periods where positivity exceeded 10%, when traditional pooling becomes less practical.
• Cost & feasibility: They quantify reagent savings of > $10M and also describe a tablet-based manual pooling workflow for settings without liquid-handling robots (including pilots conducted in India).
• Preparedness claim: They position single-stage pooling as a proof-of-concept for continuous surveillance of multiple pathogens with reduced cost, and as a tool to increase efficiency during outbreaks.

How it speaks to preparedness and response questions

• Early-stage suppression / importance of testing: This paper argues (with clinical deployment evidence) that testing efficiency is itself a suppression enabler: when demand surges or reagents are scarce, capacity multipliers like single-stage pooling can maintain surveillance and reduce turnaround delays -- both critical early when containment is still possible.
• Stakeholder readiness: Their "worked in practice" story depended on: regulatory approval, lab operations capable of changing pooling designs with positivity, software tooling (Pooldi), and QA criteria (e.g., Ct thresholds). That suggests real readiness is not just "know pooling exists," but having pre-approved protocols + trained lab workflows + software + governance already in place.
• How well COVID testing was used / problems: Their discussion is blunt that many places experienced slow PCR turnaround and shortages, and that pooling (especially advanced pooling) was not widely used to increase capacity and reduce costs -- implying a missed operational opportunity in many jurisdictions.
• Practical preparedness angle: The tablet/manual option is a reminder that "preparedness" needs pathways for low-resource labs, not only automated mega-labs -- otherwise capacity strategies don't generalize globally.

### 4) Walsh et al. (2022) -- Effectiveness of rapid antigen testing for screening of asymptomatic individuals to limit the transmission of SARS-CoV-2: a rapid review (Rev Med Virol)

Key points from the paper

• Scope: Rapid review of empirical evidence (not just diagnostic accuracy) on whether rapid antigen detection tests (RADTs) reduce transmission when used for asymptomatic screening / serial testing; search through 19 July 2021, yielding 16 studies.
• Overall conclusion: The review finds substantial uncertainty about effectiveness in limiting transmission, driven by inconsistent results, a small number of studies, mostly observational/uncontrolled designs, and methodological concerns; notably, no included study was rated "good" quality.
• Surveillance gap: They report no studies on RADTs for surveillance of asymptomatic individuals (as of their search date).
• Causal attribution problem: For mass/population screening, ecological study designs make it hard to separate the effect of testing from concurrent restrictions; reported effects ranged from minimal change in one study to large reductions in another, underscoring uncertainty.
• Implementation is hard: They highlight that real-world programs can be resource-intensive and may need repeated testing to sustain effects; adherence and feasibility problems are salient (e.g., care home settings with poor adherence and implementation barriers).
• Equity risk: Uptake in some pilots was lower among certain groups (deprived areas, some ethnic minorities, young people, etc.), raising concern that poorly designed programs can exacerbate inequities.
• Policy caution: They explicitly say RADT screening should be an additional measure, not a replacement for known mitigation measures, given uncertainty.

How it speaks to preparedness and response questions

• Early-stage suppression / importance of testing: This review does not argue "screening doesn't work"; it argues that (as of mid-2021 evidence) we can't quantify effectiveness well in many real settings because evaluation designs were weak and confounded. For preparedness, that's a major point: screening can't be assumed to suppress spread without strong implementation + evidence generation.
• Stakeholder readiness: They show that readiness hinges on operational capacity (training, space, staffing, adherence systems) and public uptake -- and these were not reliably present even in high-risk settings like care homes.
• How effectively COVID used screening / what problems: Two "COVID lessons" they surface are:
  1. many programs rolled out without designs that could convincingly isolate the effect of testing from other policies, limiting learning; and
  2. implementation barriers + uneven uptake can blunt impact and worsen inequalities.
• Preparedness implication: A next-pandemic strategy that relies on mass screening needs a built-in rapid evaluation apparatus (good study designs, economic evaluation, equity monitoring) so you can quickly learn what works rather than guessing.

### 5) Kiang et al. (2021) -- Routine asymptomatic testing strategies for airline travel during the COVID-19 pandemic: a simulation study (Lancet Infect Dis)

Key points from the paper

• Model & outcomes: Microsimulation of 100,000 airline travelers, spanning 3 days pre-travel through 2 weeks post-arrival; the main outcome is cumulative infectious days (population-level transmission risk), plus infectious travelers detected on travel day (passenger risk proxy).
• Baseline risk: With no testing/screening, the model estimated 8357 infectious days and 649 infectious passengers on day of travel.
• Pre-travel PCR (within 3 days): Reduced infectious days by 36% (to ~5401) and identified ~88% of actively infectious travelers on travel day.
• Same-day rapid antigen (modeled at 90% of PCR sensitivity during active infection): Reduced infectious days by 32% and identified ~86% of actively infectious travelers -- similar to pre-travel PCR under their assumptions.
• Layering matters a lot: Adding a post-travel PCR + 5-day quarantine produced much larger reductions:
  • pre-travel PCR + post-travel PCR/quarantine → 82% reduction in infectious days,
  • same-day antigen + post-travel PCR/quarantine → 70% reduction.
• Implementation assumptions & sensitivity: Effects depend heavily on adherence/participation; they assume high participation and note this would require airline/government enforcement; low adherence reduces benefits.
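The layering intuition above can be sketched with a toy multiplicative model. To be clear, this is not the Kiang et al. microsimulation: the assumption that each screening layer independently removes a fixed fraction of the remaining infectious-day burden is a simplification of mine, and the second layer's fraction below is illustrative, not a number from the paper.

```python
# Toy model: each screening layer independently removes a fraction of the
# remaining infectious-day burden. Illustrative only -- not the Kiang et al.
# microsimulation, which models per-traveler viral dynamics and quarantine.

def residual_infectious_days(baseline_days, layer_reductions):
    """Infectious days remaining after applying each layer's fractional
    reduction multiplicatively (assumes layers act independently)."""
    remaining = float(baseline_days)
    for reduction in layer_reductions:
        remaining *= 1.0 - reduction
    return remaining

BASELINE = 8357  # baseline infectious days in the paper's 100,000-traveler cohort

# One layer: a pre-travel test that removes ~36% of the burden.
one_layer = residual_infectious_days(BASELINE, [0.36])

# Two layers: if a post-arrival test + short quarantine independently removed
# ~72% of what remains, the combined reduction would be 1 - 0.64 * 0.28,
# i.e. about 82% -- in the ballpark of the paper's layered-strategy result.
two_layers = residual_infectious_days(BASELINE, [0.36, 0.72])

print(round(one_layer), round(two_layers))
```

The independence assumption is the weak point of this sketch: in the real model the same highly infectious travelers are the ones most likely to be caught by either layer, so layers are correlated and the arithmetic is only a rough approximation of why layering helps.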
How it speaks to preparedness and response questions

• Early-stage suppression / importance of testing: This is one of the clearest "testing as control" arguments in the set: if you care about importation risk early in a pandemic, test-and-travel strategies can materially reduce infectious exposure associated with travel -- especially when testing is paired with short quarantine and post-arrival testing.
• Stakeholder readiness: Their results implicitly say that readiness is about systems: verification, enforcement, capacity to test quickly, and support for quarantine adherence. Without those, "having tests" won't deliver modeled benefits.
• How well COVID used testing / what problems show up: The paper highlights a classic COVID tension: pre-travel testing can reduce infectious travelers, but population-level transmission risk reduction is much stronger with layered strategies -- and that requires coordination across agencies and private actors, plus compliance.

### Cross-paper synthesis on your three questions

1) How important/effective is testing & screening for early suppression?

• These papers collectively support a "testing is necessary, but only works when operationalized" view: capacity scale-up (RADx), reliable validation and interpretability (FDA lessons), and deployment designs that block/interrupt transmission chains (travel strategies, pooling) are positioned as decisive early tools.
• At the same time, the Walsh review is a caution: real-world evidence for asymptomatic screening effectiveness (as of mid-2021) was uncertain, largely because of weak study designs and confounding, and because implementation is difficult.

2) How ready are major stakeholders to implement this next time?

• The readiness story is mostly about infrastructure and governance: specimen-sharing agreements, standardized validation protocols, interoperable reporting, supply chains/manufacturing surge capacity, and enforceable operational plans (e.g., travel verification).
• Several papers imply that in COVID, many of these pieces were built during the crisis (RADx; evolving EUA approach), rather than existing as standing capability.

3) How effectively did we use testing in COVID, and what were the problems?
• Common “failure modes” highlighted include: capacity shortfalls and slow turnaround, validation/specimen bottlenecks, fragmented test ecosystem and inconsistent performance understanding, reporting/data interoperability gaps, and implementation/uptake inequities and adherence problems.
• A key lesson from Zismanov et al. is that even when a capacity-multiplier approach existed (advanced pooling), it wasn’t widely adopted in many places—suggesting a preparedness gap in translating methods into regulated clinical workflows.

⸻

If you want, I can take the same five papers and produce a one-page “preparedness design memo” that extracts: (i) the capabilities each paper implies you need, (ii) which stakeholders own each capability (public health, regulators, labs, manufacturers, schools/employers, airports/airlines), and (iii) what “day 0–day 30” actions look like in a new outbreak.

## 3. Prompt (Verbatim)

OK go ahead and do the following: take the same five papers and produce a one-page “preparedness design memo” that extracts: (i) the capabilities each paper implies you need, (ii) which stakeholders own each capability (public health, regulators, labs, manufacturers, schools/employers, airports/airlines), and (iii) what “day 0–day 30” actions look like in a new outbreak.

## 3. AI Response (ChatGPT 5.2 Pro) - Pandemic Testing & Screening Preparedness Design Memo

### Scope

Day 0–Day 30 actions and standing capabilities for a new outbreak (respiratory-pathogen default) based on five cornerstone papers:
• Tromberg et al. (NEJM, 2020) RADx scale-up program design
• Shuren & Stenzel (NEJM, 2020) EUA / regulatory lessons for molecular diagnostics
• Zismanov et al.
(Communications Medicine, 2024) combinatorial pooling at national scale
• Walsh et al. (Rev Med Virol, 2022) effectiveness evidence + implementation limits of asymptomatic rapid screening
• Kiang et al. (Lancet ID, 2021) test-and-travel modeling + layering logic

### Strategic premise

Testing and screening suppress early spread only when they are executed as a system: fast turnaround, clear interpretation, high uptake, reliable reporting, and immediate action pathways (isolation, contact management, travel controls). The first 30 days are dominated by speed-to-scale, not test perfection.

⸻

### (i) Capabilities implied by each paper

1) Tromberg/RADx → “Surge innovation + manufacturing + deployment engine”

Capabilities you need
• A standing “RADx-like” accelerator that can rapidly evaluate, validate, and scale tests (not built ad hoc mid-crisis).
• Manufacturing surge capacity for tests and critical consumables (swabs, reagents, plastics, extraction, PPE, etc.) with pre-negotiated contracts.
• A national deployment and distribution playbook (where tests go first; how allocation adjusts with incidence).
• Digital connectivity: test results flow into EHR/public health with minimal friction and consistent metadata.
• Built-in equity implementation (community distribution, access barriers, and trust).

2) Shuren & Stenzel → “Regulatory readiness + performance clarity”

Capabilities you need
• Pre-written EUA templates + validation protocols for multiple likely test modalities.
• Rapid access to clinical specimens and shared panels for validation (domestic + international agreements).
• Reference standards/panels to compare tests and stabilize quality across many manufacturers.
• Clear public communication of what a test result means (and does not mean) by use-case (diagnosis vs screening vs travel).
• Governance that favors a small number of scalable, well-characterized platforms early—then expands.
3) Zismanov → “Capacity multipliers when supply chains and labs are stressed”

Capabilities you need
• Pre-approved pooling protocols, including combinatorial/one-stage pooling options that remain useful beyond very low positivity.
• “Turnkey” software + QA (decoding algorithms, audit trails, Ct/quantification policies, error handling).
• Multiple operational modes: high-automation labs and low-resource/manual pooling workflows.
• Trigger-based logic for when to switch pooling designs as prevalence/positivity shifts.
• Procurement and training plans so pooling can be turned on immediately, not as a research project.

4) Walsh → “Implementation science + evaluation built into operations”

Capabilities you need
• Screening programs designed with adherence, behavior, and feasibility as first-class constraints.
• An embedded rapid evaluation apparatus (good study designs, causal inference plans, outcome tracking, cost/equity monitoring).
• Equity safeguards: uptake monitoring, targeted outreach, and mitigation of access gaps.
• Guidance that screening is typically additive, not a substitute for other controls, unless evidence supports substitution.
• A learning system that improves fast: “deploy → measure → adapt” in weeks, not years.

5) Kiang → “Layered border/travel testing as risk management”

Capabilities you need
• Standardized test-to-travel protocols (pre-travel, same-day, post-travel) with clear rules for different risk levels.
• Verification infrastructure (digital or paper) that airlines/airports can enforce consistently.
• Layering playbooks: testing + short quarantine + post-arrival testing when importation risk matters.
• Contingency for limited capacity: prioritize routes/regions; integrate with surveillance signals.
• Coordination mechanisms between public health and private operators to ensure compliance and data flow.
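The capacity-multiplier arithmetic behind item 3 is easy to see even with classical two-stage (Dorfman) pooling, a simpler scheme than the combinatorial designs Zismanov et al. describe. A minimal sketch, with illustrative pool sizes and prevalence values:

```python
def dorfman_expected_tests_per_sample(pool_size, prevalence):
    """Expected tests per sample under two-stage (Dorfman) pooling:
    one pool test, plus individual retests if the pool comes back positive."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

def best_pool_size(prevalence, max_size=32):
    """Pool size minimizing expected tests per sample at a given prevalence."""
    return min(range(2, max_size + 1),
               key=lambda k: dorfman_expected_tests_per_sample(k, prevalence))
```

At 1% prevalence the best pool size here is 11, needing roughly 0.2 tests per sample (about a 5x capacity multiplier); at 10% prevalence a 4-sample pool already needs ~0.6 tests per sample. That is the trigger-based logic above in miniature: pooling designs must change, or be abandoned, as positivity rises.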
⸻

### (ii) Ownership model: who “owns” what (primary + key partners)

| Capability | Primary owner | Key partners (from your list) |
| --- | --- | --- |
| EUA-ready validation protocols, reference panels, performance communication | Regulators | Public health (standards), Labs (validation execution), Manufacturers (documentation) |
| Specimen access agreements + sample logistics | Public health | Regulators (requirements), Labs (biobanking), Manufacturers (early test development) |
| Surge manufacturing & consumables supply chain (warm-base contracts) | Manufacturers | Public health (demand signals), Regulators (quality), Labs (forecasting), Employers/schools (program demand) |
| Lab capacity scaling (throughput + staffing + TAT targets) | Labs | Public health (prioritization), Regulators (flexibilities), Manufacturers (platform supply) |
| Pooling playbooks + operationalization (including software + QA) | Labs | Regulators (pre-authorization), Public health (when/where to use), Manufacturers (compatible platforms) |
| Reporting/interop pipeline (results to PH, consistent fields) | Public health | Labs (feeds), Regulators (requirements), Employers/schools (program reporting) |
| Population screening programs (workplaces/schools) with adherence supports | Schools/Employers | Public health (guidance + evaluation), Labs (service), Regulators (test authorization), Manufacturers (supply) |
| Travel testing + verification + enforcement | Airports/Airlines | Public health (rules), Regulators (test eligibility), Labs (testing delivery), Manufacturers (supply) |
| Rapid-cycle evaluation (effectiveness, equity, unintended effects) | Public health | Schools/Employers, Airports/Airlines, Labs (data), Regulators (labeling/claims) |

Design note: The “owner” must have budget authority and operational control; partners supply standards, tools, capacity, and data.
⸻

### (iii) Day 0–Day 30 actions in a new outbreak (a practical playbook)

Day 0–3: Activate governance + lock the “rules of the road”
• Public health: declare incident management structure; publish initial testing objectives by use-case:
  1. clinical diagnosis for severe cases,
  2. sentinel surveillance for situational awareness,
  3. targeted screening in outbreak hotspots,
  4. travel/border measures if importation is a key concern.
• Regulators: release pre-built EUA templates + minimum validation requirements; publish a “platform shortlist” for rapid scale.
• Labs: set initial turnaround targets (e.g., 24–48h maximum for priority clinical tests); begin staffing surge schedules.
• Manufacturers: trigger warm-base contracts and “minimum viable” scale-up; start consumables surge.
• Airports/Airlines: stand up verification pathways (even if interim/manual) and contingency comms.

Day 4–7: Build early capacity while evidence is still uncertain
• Stand up a RADx-like pipeline: fast triage of candidates + validation access (specimens/panels) + manufacturing feasibility check.
• Data plumbing: enforce a minimum dataset for every test result (date/time collected, test type, setting, geography) into public health feeds.
• Pooling readiness: authorize and pilot pooling in selected labs to multiply capacity if constraints appear (especially for surveillance/screening streams).
• Pilot screening programs where they matter most: high-risk congregate settings and essential workforce nodes—with adherence supports.
• Launch rapid evaluation: define outcomes now (uptake, turnaround, positivity, onward transmission proxies, equity metrics).

Day 8–14: Shift from “testing availability” to “testing impact”
• Public health: publish operational guidance for screening frequency/targeting (who, how often, what action follows a positive).
• Labs: scale throughput; separate streams (clinical vs screening) to protect turnaround for clinical care.
• Regulators: distribute reference panels; tighten claims as evidence accrues; harmonize performance labeling for the public.
• Employers/Schools: expand screening only where the action chain is real (isolation support, paid leave, staffing backfill).
• Airports/Airlines: implement layered protocols if warranted (pre-travel + post-travel; quarantine alternatives with testing).

Day 15–30: Stabilize a sustainable operating model
• Consolidate onto scalable platforms (Shuren & Stenzel logic): reduce fragmentation that strains supply chains and confuses users.
• Scale capacity multipliers: broaden pooling or other throughput strategies if positivity/volume make them beneficial.
• Convert pilots to policy: keep what shows measurable benefit and feasible uptake; stop what fails on adherence/equity/impact (Walsh logic).
• Institutionalize reporting: make self-/rapid test reporting pathways usable and incentivized where needed (to preserve surveillance fidelity).
• Publish a “30-day learning report”: what worked, what didn’t, where bottlenecks occurred, and what contracts/authorizations need adjustment.

⸻

Readiness indicators (quick “are we actually prepared?” checks)
• Turnaround: % of priority results returned within target time (e.g., <24–48h).
• Scale: tests/day capacity by stream (clinical vs screening vs travel).
• Supply chain: days-of-supply for critical consumables; ability to surge manufacturing within 2–4 weeks.
• Data: % of results reported with complete minimum dataset; lag from result to public health receipt.
• Uptake/equity: participation rates by geography/demographics; barriers documented and mitigated.
• Actionability: % of positives with successful isolation support linkage (or analogous action).
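The “minimum dataset” action in the Day 4–7 list above can be made concrete with a small validator sketch; the field names and allowed values here are hypothetical illustrations, not a proposed standard:

```python
# Hypothetical minimum dataset for a reported test result, per the
# "data plumbing" Day 4-7 action. Field names are illustrative only.
REQUIRED_FIELDS = {"collected_at", "test_type", "setting", "geography", "result"}

def validate_result(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is reportable."""
    problems = [f"missing: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("result") not in {"positive", "negative", "invalid", None}:
        problems.append(f"bad result value: {record.get('result')!r}")
    return problems
```

A feed that rejects (or flags) incomplete records at intake is what keeps the “% of results reported with complete minimum dataset” readiness indicator measurable at all.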
If you want, I can compress this into a single RACI chart plus a one-graphic timeline (still text-only, but “board-ready”), or tailor the Day 0–30 playbook to a specific scenario (e.g., novel influenza vs SARS-like; high severity vs low severity; high asymptomatic fraction vs low).

# 1,741 _citations_papers.md

METADATA
last updated: 2026-02-26 RT
file_name: _citations_papers.md
category: various
subcategory: papers
words: 577
tokens: 1741
CONTENT

## 2021-03-24_Cochrane Antigen Tests Review Link and Guardian Article.pdf

[Dinnes J, Deeks JJ, Berhane S, et al.; Cochrane COVID-19 Diagnostic Test Accuracy Group. Rapid, point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. *Cochrane Database Syst Rev*. 2021;2021(3):CD013705.](https://pubmed.ncbi.nlm.nih.gov/33760236/) Publisher: [Cochrane Library](https://doi.org/10.1002/14651858.CD013705.pub2)

## CLIA HUB - How-To Guide.pdf

[Lyden A, Crawford E, Ahyong V, et al. CLIAHUB Automated RNA Extraction & RT-PCR Protocol V2 (Companion How-To Document). *protocols.io*. 2020.](https://doi.org/10.17504/protocols.io.bfi2jkge)

## CLIA Hub Paper - Rapid deployment of SARS-CoV-2 testing_ The CLIAHUB (10-28-2020).pdf

[Crawford ED, Acosta I, Ahyong V, et al. Rapid deployment of SARS-CoV-2 testing: The CLIAHUB. *PLoS Pathog*. 2020;16(10):e1008966.](https://pubmed.ncbi.nlm.nih.gov/33112933/) Publisher: [PLoS Pathogens](https://doi.org/10.1371/journal.ppat.1008966)

## Curative - Preprint - Post-Disease Divergence in Detection between swabs and saliva (1-26-2021).pdf

[Turner F, Vandenberg A, Slepnev VI, et al. Post-disease divergence in SARS-CoV-2 RNA detection between nasopharyngeal, anterior nares and saliva/oral fluid specimens - Significant implications for policy & public health. *medRxiv*. 2021.](https://doi.org/10.1101/2021.01.26.21250523)

## Doudna Lab - Blueprint for Pop-Up SARS-CoV-2 Testing Lab (July 2020 Nature).pdf

[IGI Testing Consortium. Blueprint for a pop-up SARS-CoV-2 testing lab. *Nat Biotechnol*.
2020;38(7):791-797.](https://pubmed.ncbi.nlm.nih.gov/32555529/) Publisher: [Nature Biotechnology](https://doi.org/10.1038/s41587-020-0583-3)

## Doudna Lab - Blueprint for Pop-Up SARS-CoV-2 Testing Lab (March 2020 Preprint).pdf

[IGI SARS-CoV-2 Testing Consortium. Blueprint for a pop-up SARS-CoV-2 testing lab. *medRxiv*. 2020.](https://doi.org/10.1101/2020.04.11.20061424) Published version: [Nature Biotechnology](https://pubmed.ncbi.nlm.nih.gov/32555529/)

## MADPH - Preprint - Performance and Implementation Evaluation of the Abbott BinaxNOW Rapid Antigen Test (1-9-2021).pdf

[Pollock NR, Jacobs JR, Tran K, et al. Performance and implementation evaluation of the Abbott BinaxNOW Rapid Antigen Test in a high-throughput drive-through community testing site in Massachusetts. *J Clin Microbiol*. 2021;59(5):e00083-21.](https://pubmed.ncbi.nlm.nih.gov/33622768/) Preprint: [medRxiv (2021.01.09.21249499)](https://doi.org/10.1101/2021.01.09.21249499)

## Mayer (Fred Hutch) - Preprint - Viral Shedding (8-7-2020).pdf

[Goyal A, Reeves DB, Cardozo-Ojeda EF, Schiffer JT, Mayer BT. Viral load and contact heterogeneity predict SARS-CoV-2 transmission and super-spreading events. *eLife*. 2021;10:e63537.](https://pubmed.ncbi.nlm.nih.gov/33620317/) Preprint: [medRxiv (2020.08.07.20169920)](https://doi.org/10.1101/2020.08.07.20169920)

## Nagura-Ikeda - Paper - Clinical evaluation of self-collected saliva by RT-qPCR direct RT-qPCR RT-LAMP and a rapid antigen test (7-7-2020).pdf

[Nagura-Ikeda M, Imai K, Tabata S, et al. Clinical evaluation of self-collected saliva by quantitative reverse transcription-PCR (RT-qPCR), direct RT-qPCR, reverse transcription-loop-mediated isothermal amplification, and a rapid antigen test to diagnose COVID-19. *J Clin Microbiol*.
2020;58(9):e01438-20.](https://pubmed.ncbi.nlm.nih.gov/32636214/) Publisher: [ASM / Journal of Clinical Microbiology](https://doi.org/10.1128/JCM.01438-20)

## Nolan - Paper - RT qPCR Testing of SARS CoV 2 a Primer (4-13-2020).pdf

[Bustin SA, Nolan T. RT-qPCR testing of SARS-CoV-2: A primer. *Int J Mol Sci*. 2020;21(8):3004.](https://pubmed.ncbi.nlm.nih.gov/32344568/) Publisher: [MDPI (Open Access)](https://doi.org/10.3390/ijms21083004)

## Romer - Preprint - Population-scale testing can suppress the spread of COVID-19 (4-27-2020).pdf

[Taipale J, Romer P, Linnarsson S. Population-scale testing can suppress the spread of COVID-19. *medRxiv*. 2020.](https://doi.org/10.1101/2020.04.27.20078329)

## SHIELD UIUC - Preprint - Saliva-Based Molecular Testing for SARS-CoV-2 that Bypasses RNA Extraction (6-18-2020).pdf

[Ranoa DRE, Holland RL, Alnaji FG, et al. Saliva-based molecular testing for SARS-CoV-2 that bypasses RNA extraction. *bioRxiv*. 2020.](https://doi.org/10.1101/2020.06.18.159434)

## Technion - Preprint - Evaluation of COVID-19 RT-qPCR test in multi-sample pools (3-6-2020).pdf

[Yelin I, Aharony N, Shaer Tamar E, et al. Evaluation of COVID-19 RT-qPCR test in multi-sample pools. *Clin Infect Dis*. 2020;71(16):2073-2078.](https://pubmed.ncbi.nlm.nih.gov/32358960/) Publisher: [Oxford Academic / Clinical Infectious Diseases](https://doi.org/10.1093/cid/ciaa531)

## Tjian - Paper - COVI9-Testing Review (6-16-2020).pdf

[Esbin MN, Whitney ON, Chong S, Maurer A, Darzacq X, Tjian R. Overcoming the bottleneck to widespread testing: A rapid review of nucleic acid testing approaches for COVID-19 detection. *RNA*. 2020;26(7):771-783.](https://pubmed.ncbi.nlm.nih.gov/32358057/) Publisher: [RNA / Cold Spring Harbor Laboratory Press](https://doi.org/10.1261/rna.076232.120)

## UIUC SHIELD Preprint - Frequency of surveillance testing necessary to reduce transmission of SARS-CoV-2.pdf

[Elbanna A, Goldenfeld N.
Frequency of surveillance testing necessary to reduce transmission of SARS-CoV-2. *medRxiv*. 2021.](https://doi.org/10.1101/2021.10.01.21262806)

# 1,112 _context-commentary_various-papers.md

METADATA
last updated: 2026-02-23 RT
file_name: _context-commentary_various-papers.md
category: various
subcategory: papers
words: 799
tokens: 1112
CONTENT

## Context

This subcategory contains a subset of scientific papers on pandemic testing and screening that FloodLAMP collected during its period of operation (2020–2023). These are papers that do not fit into more specific subcategories — papers-lamp covers LAMP-specific literature.

The collection spans several themes. Foundational testing methodology is represented by a primer on RT-qPCR testing (Bustin & Nolan, 2020) and a review of nucleic acid testing approaches for COVID-19 detection (Esbin et al., 2020, from the Tjian lab). Sample collection and alternative specimen types appear in work on saliva-based testing evaluated across multiple methods (Nagura-Ikeda et al., 2020) and a preprint on post-disease divergence in detection between nasopharyngeal swabs and saliva (Turner et al., 2021, from Curative). Scaling capacity through laboratory design and pooling is the topic of the Innovative Genomics Institute's blueprint for a "pop-up" SARS-CoV-2 testing lab (IGI Testing Consortium, 2020, included as both preprint and published Nature Biotechnology versions), while the CLIAHUB rapid-deployment paper and companion how-to guide (Crawford et al., 2020) document efforts to stand up high-throughput testing infrastructure quickly. The Technion preprint on multi-sample pooling (Yelin et al., 2020) addresses efficiency gains under resource constraints.
Population-level screening strategy appears in the Romer/Taipale/Linnarsson preprint arguing that population-scale testing can suppress spread (2020), the UIUC SHIELD work on surveillance testing frequency (Elbanna & Goldenfeld, 2021) and on saliva-based molecular testing that bypasses RNA extraction (Ranoa et al., 2020), and viral shedding modeling from the Fred Hutch group (Goyal et al., 2020). Rapid antigen test performance is covered by the Cochrane review (Dinnes et al., 2021) — included here via a Guardian article summarizing its key finding that rapid antigen tests correctly identify only about 58% of asymptomatic infections — and a community evaluation of the Abbott BinaxNOW test (Pollock et al., 2021).

An AI literature review was created with ChatGPT 5.2 Pro Extended and is contained in the following archive file. It contains several AI responses that relate to pandemic preparedness and response: _AI_Literature Review - Pandemic Testing and Screening Strategy.md

## Commentary

Several papers in this collection have particular significance beyond their scientific content, either because they directly influenced FloodLAMP's founding or because they represent important and sometimes cautionary episodes in the COVID testing story.

The Doudna Lab's blueprint for a pop-up SARS-CoV-2 testing lab was the paper that inspired the founding of FloodLAMP in the spring of 2020. The supplement was especially valuable — it provided extensive practical details including regulatory and compliance guidance that was difficult to find elsewhere at the time. It's our understanding that the Doudna Lab's testing operation became the focal point for COVID testing on the UC Berkeley campus.

The CLIA HUB paper, documenting the Chan Zuckerberg Biohub and UCSF effort to rapidly deploy high-throughput SARS-CoV-2 testing, is notable both for the ambition of the project and for how it ended.
The mobilization of scientists and researchers to build this infrastructure was impressive and can serve as a model for similar efforts. A companion 10-minute YouTube video (https://youtu.be/LgRQge1y0U8?si=an_sDhdm5TDlUQzC) provides an overview. The project operated as a CLIA Hub with ambitions to scale and distribute its model more broadly, but it abruptly ceased operations. According to someone in leadership on the project, the reason was regulatory blowback risk. Why this effort was not scaled — why it did not play a larger role in California's or the nation's testing response — deserves attention. Comparing this effort to other university-based testing initiatives would be informative.

Curative is worth noting as an entity, not just for its paper on post-disease detection divergence between swabs and saliva. Curative was a biotech startup that entrepreneur Fred Turner pivoted to COVID testing. It grew quickly into a significant component of the testing landscape, but encountered regulatory challenges and conflict with the FDA — episodes that are relevant to understanding the broader dynamics of COVID-era diagnostics.

The University of Illinois' SHIELD program was a competitor of sorts to SalivaDirect, with a very similar assay. A comparison of the similarities and differences between the two programs — their approaches, their outcomes, and their respective paths through regulatory and operational challenges — may be a useful exercise for anyone studying decentralized molecular testing.

The Tjian paper (Esbin et al., 2020), although published early in the pandemic, was the best comprehensive review of COVID testing we knew of at the time and served as an important reference.

Paul Romer's preprint on population-scale testing to suppress spread was published in April 2020, early in the pandemic, when he was one of the most visible advocates for testing as a primary tool for pandemic control.
Following his public advocacy and writing throughout the pandemic trajectory — not just this single paper — would be a useful lens for understanding how the role of testing was debated in real time.

# 1,152 2021-03-24_Cochrane Antigen Tests Review Link and Guardian Article.md

METADATA
last updated: 2026-03-06 by BA
file_name: 2021-03-24_Cochrane Antigen Tests Review Link and Guardian Article.md
file_date: 2021-03-24
title: 2021-03-24_Cochrane Antigen Tests Review Link and Guardian Article
category: various
subcategory: papers
tags:
source_file_type: gdoc
xfile_type: docx
gfile_url: https://docs.google.com/document/d/1Yk72tI6MAdwviqxJlPY-aFczeqgoDiBcQ4EY6SFkRjo
xfile_github_download_url: https://raw.githubusercontent.com/FocusOnFoundationsNonprofit/floodlamp-archive-wip/main/various/papers/2021-03-24_Cochrane%20Antigen%20Tests%20Review%20Link%20and%20Guardian%20Article.docx
pdf_gdrive_url: https://drive.google.com/file/d/1DTx0e-yfBqbQ6_ZXnpCwa3Kdr08CVzdB
pdf_github_url: https://github.com/FocusOnFoundationsNonprofit/floodlamp-archive-wip/blob/main/various/papers/2021-03-24_Cochrane%20Antigen%20Tests%20Review%20Link%20and%20Guardian%20Article.pdf
conversion_input_file_type: docx
conversion: pandoc
license: CC BY 4.0 - https://creativecommons.org/licenses/by/4.0/
tokens: 1152
words: 820
notes:
summary_short: The Cochrane review and related March 24, 2021 Guardian coverage summarize evidence that rapid SARS-CoV-2 antigen (lateral flow) tests perform substantially worse in asymptomatic people (reported average sensitivity around 58%) and vary widely by brand. The materials are useful for assessing the risks of using antigen tests for “test-to-release” or to permit school/work attendance, especially in low-prevalence screening where false negatives and false positives can meaningfully affect public health decisions.
CONTENT

[Cochrane Antigen Review (3-21-2021)
https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD013705.pub2/epdf/full](https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD013705.pub2/epdf/full)

***INTERNAL TITLE:*** Guardian Article 03-24-21

Rapid lateral flow tests 'should not be used for test and release'

**Cochrane review says rapid antigen tests correctly identify only 58% of asymptomatic people**

Wed 24 Mar 2021 15.29 EDT

Rapid lateral flow tests for Covid do not work well in people with no symptoms and should not on their own be used to allow people to go to work or school or to travel, experts have said.

The UK has bought millions of rapid tests which give results within half an hour. Teachers, schoolchildren and their families without any symptoms are being asked to test themselves using the kits twice a week. “About one in three people with coronavirus do not have symptoms but can still pass it on to others,” says [government guidance](https://www.gov.uk/guidance/rapid-lateral-flow-testing-for-households-and-bubbles-of-school-pupils-and-staff). “Regular testing of people without symptoms is important to help stop the virus spreading and protect your loved ones.”

However, [a Cochrane review](https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD013705.pub2/full), carried out by a team of international, independent experts, has found that rapid antigen tests – known as lateral flow – correctly identify on average 72% of people who are infected with the virus and have symptoms and 78% within the first week of becoming ill. But in people with no symptoms, that drops to 58%.

But there were big differences between brands. In people with symptoms, SD Biosensor, which has been approved by the World Health Organization, picked up 88% of infections, while the Innova test, used very widely in the UK since mass testing in Liverpool, picked up 58%. Little data exists to show how well the tests perform when the person has no symptoms.
For Innova, said Jon Deeks, professor of biostatistics at the University of Birmingham and one of the authors, there are only two studies. The Liverpool pilot project showed the test found 40% of people with Covid, and in a very small study he conducted in students at the university, it picked up just 3%. About 40m of the tests have been given out, he said, but in those two studies a total of just 78 people were identified as having Covid. “So I personally find that quite shocking – the government thinking that’s an adequate evidence base upon which to base such a large, expensive and quite invasive policy for people to follow,” he said.

There have been no studies on the use of the tests in schoolchildren with no symptoms, he said. “We have no data on the accuracy of these tests in children.”

The team identify three possible uses of the tests: testing to detect infection; testing to release people from self-isolation or quarantine; and testing to enable them to go to school, work or an event. “You can’t do such a test for test-to-release or test-to-enable,” said Deeks. People would go back into the community and be socially mixing. The tests, he said, “will probably detect about half the cases”. They create both false negative results – when people are wrongly told they are not infected, and also false positives, when people will unnecessarily be instructed to self-isolate, which could mean they cannot work.

In a mass testing situation, said Deeks, among 10,000 people with a prevalence rate of 0.5%, similar to now, you would expect 50 people to have Covid. The tests would pick up 35 of the 50, but 90 would wrongly be told they were infected.
Dr Ann Van den Bruel, associate professor of primary care at KU Leuven in Belgium, and an author of the review, said: “The risk of the false positives in the screening setting is very high, and you may end up having the opposite effect of what you want to achieve and you may have to close more workplaces, more classes than what you’re currently doing without a clear effect on the epidemic, which is what we all want, of course.”

The tests do have a use among people with symptoms, said the scientists. At a test centre, a hospital or GP surgery, they can give a rapid diagnosis, which means contacts can be traced faster than at present. The positive test must then be confirmed with a PCR swab test, however, they say.

The government, however, maintains that the tests are an essential part of the Covid response. Dr Susan Hopkins, Covid-19 strategic response director at Public Health England and chief medical adviser to the Test and Trace programme, said: “As this report highlights, rapid tests are effective at detecting Covid-19 in people that are highly infectious, both with and without symptoms.

“They are an absolutely crucial way to help bring down infection rates and keep them low.

“Every day, rapid testing is helping us find cases of Covid-19 that we wouldn’t otherwise know about, breaking chains of transmission and potentially saving lives. Please do take a test if you’re offered one – the more we test, the more cases we will find.”
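Deeks's 10,000-person example above can be reproduced with standard sensitivity/specificity arithmetic. In this sketch the sensitivity (~70%, from 35 of 50 detected) comes from the article, while the specificity (~99.1%) is inferred here from the quoted 90 false positives rather than stated in the review:

```python
def screening_outcomes(n, prevalence, sensitivity, specificity):
    """Expected counts from a single round of mass screening."""
    infected = n * prevalence
    uninfected = n - infected
    tp = infected * sensitivity            # correctly flagged infections
    fn = infected - tp                     # missed infections
    fp = uninfected * (1 - specificity)    # healthy people flagged positive
    ppv = tp / (tp + fp)                   # chance a positive result is real
    return {"infected": infected, "true_pos": tp,
            "false_neg": fn, "false_pos": fp, "ppv": ppv}

# Deeks's example: 10,000 people at 0.5% prevalence.
# Sensitivity 70% is from the article; specificity 99.1% is an
# inference from the quoted 90 false positives, not a stated figure.
r = screening_outcomes(10_000, 0.005, 0.70, 0.991)
```

The positive predictive value that falls out (~28%) is the substance of the false-positive concern: at 0.5% prevalence, roughly 7 of every 10 positive results from such a screen would be false.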