Statistical Power: The Fishing Net Analogy
Understand how sample size affects your ability to detect real effects in your data.
The Scenario
Imagine you're a researcher (fisher) trying to prove that green fish exist in an ocean full of blue fish. Your net represents your sample size. The question is: How big does your net need to be to reliably catch green fish if they exist?
[Interactive simulation: the ocean (population) and your catch, showing counts of blue and green fish caught and whether the hypothesis is supported.]
Statistical Power Analysis
Depending on the detection probability implied by your sample size and effect size, the analysis reports one of three outcomes:
Excellent power: your chance of detecting green fish if they truly exist meets or exceeds the commonly recommended 80% threshold.
Moderate power: you have a fair chance of detecting green fish; consider increasing your sample size to reach the recommended 80% power threshold.
Low power (risk of missing real effects): with only a small chance of detection, you might conclude there are no green fish even when they exist (a Type II error). You need a larger net!
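In the fishing analogy, "power" is simply the probability of catching at least one green fish. A minimal sketch, assuming green fish make up a fraction p of the population and the net holds n fish (both numbers below are illustrative, not from the simulator):

```python
def detection_power(p: float, n: int) -> float:
    """P(at least one green fish in a sample of n) = 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# A small net usually misses a rare effect...
print(f"n=10:  power = {detection_power(0.02, 10):.2f}")   # ≈ 0.18
# ...while a big enough net clears the 80% threshold.
print(f"n=100: power = {detection_power(0.02, 100):.2f}")  # ≈ 0.87
```

Note how power grows with n even though p (the effect) stays fixed: the rarer the green fish, the larger the net has to be.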
Simulation Results (1000 fishing trips)
[Counters: times green fish were caught (the empirical success rate) and times they were missed (the Type II error, or false negative, rate).]
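The simulation above can be reproduced with a short Monte Carlo loop. A sketch, assuming an illustrative green-fish fraction of 2% and a net of 50 fish per trip (the simulator's actual parameters are user-controlled):

```python
import random

random.seed(42)  # reproducible runs

P_GREEN = 0.02   # assumed true fraction of green fish (illustrative)
NET_SIZE = 50    # sample size per fishing trip
TRIPS = 1000     # number of simulated trips

# A trip "succeeds" if at least one of the NET_SIZE fish is green.
caught = sum(
    any(random.random() < P_GREEN for _ in range(NET_SIZE))
    for _ in range(TRIPS)
)
missed = TRIPS - caught  # Type II errors: green fish exist but none were caught

print(f"Times green fish caught: {caught} ({caught / TRIPS:.0%} success rate)")
print(f"Times missed (Type II):  {missed} ({missed / TRIPS:.0%} false negative rate)")
```

The success rate is an empirical estimate of power; here it should hover around 1 − 0.98^50 ≈ 64%, i.e. well below the 80% target.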
Key Statistical Concepts
Sample Size (n)
The number of observations in your study. Larger samples = more power to detect effects.
Effect Size
How large/obvious the phenomenon is. Bigger effects are easier to detect with smaller samples.
Statistical Power
The probability of detecting an effect when it truly exists. We typically want ≥80% power.
Type II Error (β)
Missing a real effect (false negative). Low power = high Type II error risk.
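The four concepts above are linked by a single formula. For a two-sided, two-sample comparison with n observations per group and standardized effect size d (Cohen's d), power is approximately Φ(d·√(n/2) − z₁₋α/₂). A stdlib-only sketch (the α = 0.05 default and the example values are assumptions for illustration):

```python
from math import sqrt
from statistics import NormalDist

_Z = NormalDist()  # standard normal

def approx_power(d: float, n: int, alpha: float = 0.05) -> float:
    """Normal-approximation power of a two-sided two-sample test,
    n observations per group, standardized effect size d."""
    z_crit = _Z.inv_cdf(1 - alpha / 2)           # critical value for alpha
    return _Z.cdf(d * sqrt(n / 2) - z_crit)      # ignores the tiny lower tail

# Bigger effects are easier to detect with the same sample:
print(f"d=0.5, n=64: power ≈ {approx_power(0.5, 64):.2f}")  # ≈ 0.81
print(f"d=0.2, n=64: power ≈ {approx_power(0.2, 64):.2f}")  # ≈ 0.20
```

Power = 1 − β, so the same function gives the Type II error rate directly: low power means a high chance of missing a real effect.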
The Takeaway
Before conducting a study, always perform a power analysis to determine the sample size needed to detect your expected effect. An underpowered study is like fishing with a tiny net: you might miss the green fish entirely and wrongly conclude they don't exist. This is why non-significant results from underpowered studies should be interpreted with caution: absence of evidence is not evidence of absence.
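A pre-study power analysis inverts the power formula to solve for n. A minimal sketch under the normal approximation, n = 2·((z₁₋α/₂ + z_power)/d)² per group (dedicated tools such as statsmodels or G*Power handle the exact t-distribution; the effect sizes below are Cohen's conventional "medium" and "small"):

```python
from math import ceil
from statistics import NormalDist

_Z = NormalDist()  # standard normal

def required_n(d: float, power: float = 0.80, alpha: float = 0.05) -> int:
    """Per-group sample size for a two-sided two-sample comparison,
    via the normal approximation n = 2 * ((z_alpha + z_power) / d) ** 2."""
    z_alpha = _Z.inv_cdf(1 - alpha / 2)
    z_power = _Z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(required_n(0.5))   # medium effect: ≈ 63 per group
print(required_n(0.2))   # small effect:  ≈ 393 per group
```

Notice the leverage: halving the effect size roughly quadruples the sample you need, which is exactly why small, subtle effects demand such large nets.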