Kappa Value
The Kappa value (Cohen's Kappa) is a statistical measure of inter-rater agreement for categorical data, adjusted for the amount of agreement that could occur by chance.
It quantifies how consistently two observers classify the same items beyond random coincidence (extensions such as Fleiss' Kappa handle more than two raters).
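As a rough sketch (the ratings below are hypothetical), Cohen's Kappa can be computed from scratch as κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected by chance from each rater's marginal label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of items both raters label identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance-expected agreement: product of each rater's marginal
    # probability for every category, summed over categories.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)

# Hypothetical "present" / "absent" calls from two observers.
a = ["present", "present", "absent", "absent", "present", "absent"]
b = ["present", "absent",  "absent", "absent", "present", "present"]
print(round(cohens_kappa(a, b), 2))  # 0.33
```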
Key Characteristics
- Ranges from −1 to 1:
  - 1 = Perfect agreement
  - 0 = Agreement equivalent to chance
  - < 0 = Worse than chance (systematic disagreement)
- More robust than simple percent agreement, because it accounts for agreement expected by chance alone (see the sketch below).
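For instance, under the hypothetical counts below, two raters agree on 91 of 100 scans, yet because both label most scans positive, chance alone already accounts for 86% agreement and κ comes out around 0.36:

```python
# Hypothetical 2x2 confusion counts for 100 scans rated by A and B:
# both "positive": 88, A positive / B negative: 7,
# A negative / B positive: 2, both "negative": 3.
n = 100
p_o = (88 + 3) / n                  # raw percent agreement = 0.91
p_a_pos, p_b_pos = 95 / n, 90 / n   # marginal "positive" rates
p_e = p_a_pos * p_b_pos + (1 - p_a_pos) * (1 - p_b_pos)  # = 0.86
kappa = (p_o - p_e) / (1 - p_e)
print(round(p_o, 2), round(p_e, 2), round(kappa, 2))  # 0.91 0.86 0.36
```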
Interpretation Guide
| Kappa (κ) Value | Strength of Agreement    |
| --------------- | ------------------------ |
| < 0             | Poor (less than chance)  |
| 0.01 – 0.20     | Slight agreement         |
| 0.21 – 0.40     | Fair agreement           |
| 0.41 – 0.60     | Moderate agreement       |
| 0.61 – 0.80     | Substantial agreement    |
| 0.81 – 1.00     | Almost perfect agreement |
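If a quick programmatic label is useful, the small helper below encodes the bands from this table (a convenience sketch; a value of exactly 0 falls into the slight band here):

```python
def interpret_kappa(kappa):
    """Map a kappa value to the descriptive label from the table above."""
    if kappa < 0:
        return "Poor (less than chance)"
    bands = [(0.20, "Slight"), (0.40, "Fair"), (0.60, "Moderate"),
             (0.80, "Substantial"), (1.00, "Almost perfect")]
    for upper, label in bands:
        if kappa <= upper:
            return f"{label} agreement"
    return "Almost perfect agreement"

print(interpret_kappa(0.51))  # Moderate agreement
```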
Example
If two neuroradiologists independently evaluate MRI scans for the presence of a DVA and reach the same conclusion 76% of the time, but much of that agreement would be expected by chance, the Kappa might be only 0.51, indicating moderate agreement.
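Rearranging the Kappa formula gives p_e = (p_o − κ) / (1 − κ), so the chance-expected agreement implied by those figures can be checked directly; with 0.76 observed agreement and κ = 0.51, roughly half of the raw agreement would be expected by chance:

```python
p_o, kappa = 0.76, 0.51
p_e = (p_o - kappa) / (1 - kappa)  # chance agreement implied by the example
print(round(p_e, 2))               # ~0.51
```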
Best Practices
- Use Kappa to assess reliability of diagnostic tools or observer consistency.
- Always report Kappa with confidence intervals and p-values for context.
- For ordinal scales, consider a weighted Kappa, which gives partial credit for disagreements between adjacent categories (see the sketch after this list).
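For routine use, a library implementation is usually preferable to hand-rolled code. The sketch below (with hypothetical ordinal ratings) uses scikit-learn's cohen_kappa_score with quadratic weights and a simple percentile bootstrap for an approximate 95% confidence interval:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical ordinal ratings (e.g. a 0-3 grade) from two readers.
reader_1 = np.array([0, 1, 1, 2, 2, 3, 0, 1, 2, 3, 1, 2])
reader_2 = np.array([0, 1, 2, 2, 3, 3, 0, 0, 2, 3, 1, 1])

# Quadratic weights penalize large disagreements more than adjacent ones.
kappa = cohen_kappa_score(reader_1, reader_2, weights="quadratic")

# Percentile bootstrap for an approximate 95% confidence interval.
boot = []
n = len(reader_1)
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(cohen_kappa_score(reader_1[idx], reader_2[idx],
                                  weights="quadratic"))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"weighted kappa = {kappa:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```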