The cornerstone of any “endemic COVID” concept, including California’s newly announced not-yet-a-plan, will have to be testing. Lots and lots of testing.
And more effective testing than we’ve managed to do so far.
For example, the local school district runs weekly tests on a strangely variable number of students. In any given week, it looks like anywhere from 1/5 to 3/4 of the students get tested.
Even if all the students were tested every week, this would still leave an entire week in which a sick student could be infecting classmates, of course, so the cadence alone is already pretty ineffective.
They do the tests in pools of 5 students, to obscure the identity of any individual positive result. If a pool tests positive, they re-test each student in that pool individually to try to identify who is ill.
This is a really bad testing design.
The logic is fine; the issue is that the rapid tests they’re using aren’t good enough to be used this way.
These tests are pretty trustworthy when they return a positive result (you almost certainly have the virus), but not when they return a negative one (you may well have the virus anyway). Specifically, in asymptomatic children, a positive result means a 98.6% chance of actually having the virus (the positive predictive value), but if you do have the virus you only have a 56.2% chance of getting a positive result (the sensitivity).
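The gap between those two numbers is the whole problem. A quick sketch in Python, using only the figures quoted above:

```python
# Figures quoted above for rapid antigen tests in asymptomatic children.
ppv = 0.986          # P(actually infected | test is positive)
sensitivity = 0.562  # P(test is positive | actually infected)

false_negative_rate = 1 - sensitivity  # P(test is negative | actually infected)

print(f"trust in a positive result:   {ppv:.1%}")                  # 98.6%
print(f"chance an infection is caught: {sensitivity:.1%}")         # 56.2%
print(f"chance an infection is missed: {false_negative_rate:.1%}") # 43.8%
```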
That’s basically flipping a coin.
That’s a very high false-negative rate.
What this means in the case of our local school district is that any pool that tests positive almost certainly has someone in it with the virus, but a pool that tests negative could very well also have someone in it with the virus.
You just don’t know.
Each pool with a sick student in it has a 56.2% chance of testing positive, so you’ll only detect about half of the actual positives in the pooled round.
It gets worse.
If a pool tests positive, the school re-tests those students to try to identify the individual source, but if none of them come up positive on the second test, they treat the whole pool as negative.
That’s despite the fact that at least one student in the pool is almost certainly positive, and the second test is again basically a coin flip for whether you’ll find them.
This testing design misses around two-thirds of the actual cases by design (specifically, it will correctly identify only 31.6% of them: 0.562 × 0.562).
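That 31.6% comes from chaining the two rounds. A minimal sketch, assuming exactly one infected student per pool, that every infected student is among those tested, and that pooling itself doesn’t further reduce the test’s sensitivity:

```python
sensitivity = 0.562  # rapid-test sensitivity in asymptomatic children (from above)

# Round 1: the pooled sample containing the infected student must test positive.
# Round 2: that student's individual re-test must also come up positive.
p_case_identified = sensitivity * sensitivity

print(f"{p_case_identified:.1%}")  # 31.6%
```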
(EDIT: a Board member informed me that the first-round test is a PCR, not a rapid, test so these numbers aren’t correct for the District’s actual testing setup. That’s good! I’ll add a comment below with updated numbers.)
That’s … not good.
The local county has about 1.5% of the population known to be positive right now. If the schools have the same rate, that’d be around 10 students in the local school district.
(It’s probably lower, since rates among children are generally lower than in the general population.)
This week’s test results had 1 positive pool, in which all 5 students tested negative on the second round.
So we likely had around 10 cases among the students, of which the district correctly identified none.
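To put that week’s result in perspective, here is a rough sketch. It assumes those ~10 cases landed in separate pools, every infected student was actually tested that week (optimistic, given the variable coverage), and each case is found independently with the 31.6% two-round probability:

```python
cases = 10             # rough estimate from the county's ~1.5% positive rate
p_detect = 0.562 ** 2  # per-case probability the two-round design finds it

expected_found = cases * p_detect       # expected number of cases identified
p_find_none = (1 - p_detect) ** cases   # chance the design identifies nobody

print(f"expected cases identified: {expected_found:.1f}")  # ~3.2
print(f"chance of identifying none: {p_find_none:.1%}")    # ~2.2%
```

Even under these generous assumptions, identifying zero cases is an unlikely outcome; with the real, patchier coverage it becomes much less surprising.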
This is not the kind of testing that we can trust to manage “endemic COVID”.
One thought on “Endemic Failure: Rapid COVID Tests Don’t Work Very Well”
So, with a first-round PCR test how do the numbers change?
According to the best paper I can find, the PCR tests have around 20% false negatives (very much depending on when in the symptom cycle the samples are taken).
With that, the first round would catch 80% of the cases instead of 56%. Combined with the rapid-test second round, this gives an overall success rate of around 45% (0.80 × 0.562).
So quite a bit better, but still catching less than half.
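The commenter’s arithmetic, as a sketch (figures from the comment; as noted, PCR sensitivity varies a lot with timing in the infection):

```python
pcr_sensitivity = 0.80     # ~20% false negatives for the first-round PCR
rapid_sensitivity = 0.562  # rapid-test second round, as before

p_case_identified = pcr_sensitivity * rapid_sensitivity
print(f"{p_case_identified:.1%}")  # 45.0%
```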