Statistical Power: The Fishing Net Analogy

Understand how sample size affects your ability to detect real effects in your data.

The Scenario

Imagine you're a researcher (fisher) trying to prove that green fish exist in an ocean full of blue fish. Your net represents your sample size. The question is: How big does your net need to be to reliably catch green fish if they exist?

[Interactive demo: choose a net size (small n = 5, medium n = 50, or large n = 100) and how common green fish are (rare 1%, moderate 15%, common 30%). The display shows the ocean (population), your net (sample), your catch of blue and green fish, and whether your hypothesis is supported.]

Statistical Power Analysis

Power is the probability of detecting green fish if they truly exist. It ranges from 0% (never detect) to 100% (always detect), and 80% is the commonly recommended minimum. Depending on the sample size and effect size you choose, the demo reports one of three verdicts:

✓ Excellent power: your chance of detecting green fish exceeds the recommended 80% threshold.

⚠ Moderate power: detection is plausible but falls short of 80%; consider increasing your sample size.

✗ Low power: with so small a chance of detection, you might conclude there are no green fish even when they exist (a Type II error). You need a larger net!
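Under one simplified reading of the analogy (an assumption for illustration, not necessarily the demo's exact model), each fish in the net is an independent draw and "detection" means catching at least one green fish. Power then has a closed form, 1 - (1 - p)^n:

```python
def detection_power(n, p):
    """Probability of catching at least one green fish in a net of
    n fish, when a fraction p of the ocean is green.
    Simplified model: independent draws with replacement."""
    return 1 - (1 - p) ** n

# A tiny net and rare fish: power is dismal.
print(round(detection_power(5, 0.01), 3))   # ~0.049

# A medium net and a moderate effect: detection is near certain.
print(round(detection_power(50, 0.15), 3))
```

Note how power climbs with both the net size n and the rarity p: either a bigger sample or a bigger effect makes detection more likely.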

Simulation Results (1000 fishing trips)

The demo repeats the fishing trip 1000 times and tallies two counts: the number of trips on which green fish were caught (the success rate) and the number of trips on which they were missed, i.e. Type II errors (the false-negative rate).
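A tally like this can be reproduced with a short Monte Carlo sketch (my own simplified model of the demo, assuming each fish in the net is an independent draw):

```python
import random

def simulate_trips(trips=1000, n=50, p=0.15, seed=42):
    """Run `trips` simulated fishing trips with a net of n fish,
    where each fish is green with probability p (independent draws,
    a simplifying assumption). Returns (detections, misses)."""
    rng = random.Random(seed)
    caught = sum(
        any(rng.random() < p for _ in range(n))  # at least one green fish?
        for _ in range(trips)
    )
    return caught, trips - caught

caught, missed = simulate_trips()
print(f"caught green fish on {caught} trips, missed on {missed} (Type II errors)")
```

The fraction of trips with at least one green fish is an empirical estimate of power; the fraction with none estimates the Type II error rate.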

Key Statistical Concepts

🥅

Sample Size (n)

The number of observations in your study. Larger samples = more power to detect effects.

📊

Effect Size

How large/obvious the phenomenon is. Bigger effects are easier to detect with smaller samples.

⚡

Statistical Power

The probability of detecting an effect when it truly exists. We typically want β‰₯80% power.

❌

Type II Error (Ξ²)

Missing a real effect (false negative). Low power = high Type II error risk.
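These four concepts are tied together by one identity: the Type II error rate β equals 1 minus power. A quick sketch with toy numbers, using the simplified at-least-one-green-fish model rather than any formal test:

```python
# Toy illustration: beta = 1 - power.
# Simplified model: power = chance of netting at least one green fish.
n, p = 20, 0.05           # net of 20 fish, 5% of the ocean is green
power = 1 - (1 - p) ** n  # probability of detecting a green fish
beta = 1 - power          # probability of a Type II error (missing them)
print(f"power = {power:.2f}, beta = {beta:.2f}")  # power = 0.64, beta = 0.36
```

Here power is well below the 80% target, so roughly one study in three would miss a real effect.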

📚 The Takeaway

Before conducting a study, always perform a power analysis to determine the sample size needed to detect your expected effect. An underpowered study is like fishing with a tiny net: you might miss the green fish entirely and wrongly conclude they don't exist. This is why non-significant results from underpowered studies should be interpreted with caution: absence of evidence is not evidence of absence.
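The takeaway's advice can be made concrete in the same simplified at-least-one-green-fish model: invert 1 - (1 - p)^n >= 0.8 to find the smallest net that reaches 80% power (a toy calculation for the analogy, not a substitute for a proper power analysis of your actual test):

```python
import math

def required_net_size(p, power=0.8):
    """Smallest net (sample) size n such that the probability of
    catching at least one green fish, 1 - (1 - p)**n, reaches `power`.
    Simplified independent-draws model, for illustration only."""
    return math.ceil(math.log(1 - power) / math.log(1 - p))

for rarity in (0.01, 0.15, 0.30):
    print(f"green fish at {rarity:.0%}: need n >= {required_net_size(rarity)}")
```

For real studies, the required sample size depends on the test statistic; libraries such as statsmodels provide power calculators (its `statsmodels.stats.power` module) for t-tests and related designs.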