04-13-2024, 07:53 AM, #49
Excargodog (Perennial Reserve)
Joined APC: Jan 2018
Posts: 11,583

OK Rick, I cheated and used an online calculator because it has been way too long since my stat courses, but here is what I got. To determine, at the commonly accepted statistical norms, whether a group having an incidence of 11 events per thousand (a 10% increase) is statistically different from a population-based control group having 10 events per thousand, the calculator indicates you would need an n (that is, the number of subjects studied) of 807+ THOUSAND subjects.
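For anyone who wants to check the arithmetic without the calculator, here is a minimal Python sketch of the standard two-proportion sample-size formula (normal approximation). The alpha, power, and equal-group-size settings are my assumptions, since the screenshot's exact inputs aren't visible, and the required total swings enormously with the chosen power:

[code]
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    # Standard two-proportion sample-size formula (normal approximation),
    # equal group sizes, no continuity correction.
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = z.inv_cdf(power)           # critical value for target power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# 10 vs. 11 events per thousand, at several power levels
for power in (0.80, 0.90, 0.99):
    n = n_per_group(0.010, 0.011, power=power)
    print(f"power={power:.2f}: ~{2 * n:,.0f} total subjects")
[/code]

Under these assumptions, 80% power already puts the total in the low hundreds of thousands, and pushing power toward 99% (or adding a continuity correction) drives it toward the three-quarter-million-plus range the calculator reported.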


https://i.ibb.co/qYR8RRz/IMG-7197.jpg


About This Calculator

This calculator uses a number of different equations to determine the minimum number of subjects that need to be enrolled in a study in order to have sufficient statistical power to detect a treatment effect.

Before a study is conducted, investigators need to determine how many subjects should be included. By enrolling too few subjects, a study may not have enough statistical power to detect a difference (type II error). Enrolling too many patients can be unnecessarily costly or time-consuming.

Generally speaking, statistical power is determined by the following variables (a quick worked example follows the list):
  • Baseline Incidence: If an outcome occurs infrequently, many more patients are needed in order to detect a difference.
  • Population Variance: The higher the variance (standard deviation), the more patients are needed to demonstrate a difference.
  • Treatment Effect Size: If the difference between two treatments is small, more patients will be required to detect a difference.
  • Alpha: The probability of a type-I error -- finding a difference when a difference does not exist. Most medical literature uses an alpha cut-off of 5% (0.05) -- indicating a 5% chance that a significant difference is actually due to chance and is not a true difference.
  • Beta: The probability of a type-II error -- not detecting a difference when one actually exists. Beta is directly related to study power (Power = 1 - β). Most medical literature uses a beta cut-off of 20% (0.2) -- indicating a 20% chance that a real difference is missed.
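To make those knobs concrete, here is the reverse calculation: the approximate power a two-proportion comparison achieves for a given per-group n, using the same normal approximation and assumed settings as the sketch above:

[code]
from statistics import NormalDist

def power_two_proportions(p1, p2, n, alpha=0.05):
    # Approximate power of a two-sided two-proportion z-test
    # with n subjects per group (normal approximation).
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se0 = (2 * p_bar * (1 - p_bar) / n) ** 0.5           # SE under the null
    se1 = ((p1 * (1 - p1) + p2 * (1 - p2)) / n) ** 0.5   # SE under the alternative
    z_b = (abs(p1 - p2) - z_a * se0) / se1
    return z.cdf(z_b)

# 10 vs. 11 per thousand: power at a few per-group sizes
for n in (1_000, 50_000, 400_000):
    print(f"n={n:,} per group: power = {power_two_proportions(0.010, 0.011, n):.2f}")
[/code]

Even fifty thousand subjects per group only gets you to roughly a one-in-three chance of detecting that 10% difference.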
The CDC study was basically comparing the results of the 24 dead immunized people against the 34 dead non-immunized people. That pretty much lacked ANY power to detect even a 10% difference in incidence, even if death certificate data were not known to be extremely inadequate. And since the null hypothesis inherent in statistical analysis defaults to NOT finding a difference, there was never any chance of a positive result.
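A back-of-the-envelope sketch of my own (not the CDC's method), assuming roughly equal person-time in the two groups, shows the smallest rate ratio those event counts could even distinguish:

[code]
import math

# With 24 vs. 34 events and (assumed) roughly equal person-time
# per group, the Poisson approximation gives the standard error
# of the log rate ratio as sqrt(1/a + 1/b).
a, b = 24, 34
se_log_rr = math.sqrt(1 / a + 1 / b)
detectable_rr = math.exp(1.96 * se_log_rr)   # 95% detectability threshold
print(f"smallest detectable rate ratio = {detectable_rr:.2f}")  # about 1.69
[/code]

In other words, only something like a 70% difference in death rates could have reached significance with those counts; a 10% difference never had a chance.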