Quote:
Originally Posted by John21
Or why the LA and Stanford numbers are insufficient to extrapolate an IFR from. What we do know is that the 65+ demo accounts for somewhere around 75% of CV deaths. So failing to accurately represent that demo in a sample could produce huge swings when projecting an IFR in terms of a homogeneous spread of the virus throughout the population. Iirc, the Stanford study didn’t balance for age at all so we really can’t project an IFR from it. With the LA study of 863 participants, since 14% of LA County is 65+, that would work out to ~120 participants in the 65+ demo, if their study followed suit. So what’s the chance those 120 accurately represent the 1.4M 65+ LA population both in terms of age and comorbidities?
There are so many reasons the study is only a weak data point:
- There's a complete lack of reliable false positive data for this test, and it is a test that does produce false positives. The only thing we have is manufacturer-reported data (from a company that wants to sell the new product it just spent money developing!), which comes out at >0.5% false positives (2 out of 387). That's a totally inadequate sample for establishing the false positive rate, and a totally unreliable source. It wouldn't matter if the infection rate found were 20%, but it's 4%. Who knows what conditions cause false positives, and at what rate? This isn't a mature product, and all we know is that some exist. That alone almost completely invalidates the study.
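To make that concrete, here's a rough sketch of the standard Rogan-Gladen correction, which backs a true prevalence out of raw test positivity. The 90% sensitivity figure is a placeholder assumption, not a number from either study; the point is how fast a 4% raw positivity collapses as the false positive rate grows:

```python
def corrected_prevalence(apparent, fp_rate, sensitivity=0.90):
    """Rogan-Gladen correction: recover true prevalence from raw positivity."""
    specificity = 1.0 - fp_rate
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# 4% raw positivity with the manufacturer's ~0.5% false positive rate:
print(corrected_prevalence(0.04, 0.005))   # ~0.039
# Same raw positivity if the real false positive rate were 2%:
print(corrected_prevalence(0.04, 0.02))    # ~0.023
# ...and at a 4% false positive rate the signal vanishes entirely:
print(corrected_prevalence(0.04, 0.04))    # ~0.0
```

So with a false positive rate we only know to "somewhere above 0.5%, source unreliable," the corrected prevalence can be anywhere from roughly 4% down to zero.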
- The guy who did the same study at Stanford used a clownishly bad population sample (a FB post!!!) and analysis, imo. He claims proper randomization/sampling on this one, but:
- He got the participants from a company that supplies people who have signed up to take part in real-world surveys, product tests, and the like. These people are strongly skewed toward being more socially active. Who knows who opted out, who didn't, and why. Are they representative of the average person's exposure level?
Put it all together and it's just nonsense. Double the reported false positive rate, double the sample's exposure relative to the general population, and suddenly you have an IFR of 0.7%.
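Here's a sketch of how those two corrections compound into the implied IFR. The death count (600) and population (10M) are made-up illustrative numbers, not the actual LA figures, and the false positive / exposure-bias values are hypothetical; the point is the mechanics, not reproducing the 0.7% figure exactly:

```python
def implied_ifr(deaths, population, apparent_prev, fp_rate, exposure_bias=1.0):
    """IFR implied by a serosurvey, after subtracting false positives and
    deflating by how much more exposed the sample is than the population."""
    true_prev = (apparent_prev - fp_rate) / exposure_bias
    return deaths / (true_prev * population)

# Hypothetical county: 10M people, 600 deaths, 4% raw positivity.
at_face_value = implied_ifr(600, 10_000_000, 0.04, fp_rate=0.005)  # ~0.17%
skeptical = implied_ifr(600, 10_000_000, 0.04, fp_rate=0.01,
                        exposure_bias=2.0)                          # ~0.40%
print(f"{at_face_value:.2%} -> {skeptical:.2%}")
```

Same raw data, same deaths; modest and entirely plausible corrections to the denominator more than double the implied IFR.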
It's a weak data point, and people putting much faith in it are not very bright, particularly when we have strong (population-level) data points showing an IFR above 1%.