Kappa Statistic

Kappa

Kappa is a coefficient that measures the proportion of agreement above that expected by chance. Kappa calculations are shown below.

Notes on Kappa

  • Kappa may be used as a general measure of agreement for Nominal data
  • Calculation methods exist for two or more categories, two or more repeated assessments, and one or more appraisers
  • Tests for the significance of Kappa are available
  • Confidence intervals for Kappa are available
  • Kappa can vary from -1.0 (complete disagreement) to +1.0 (complete agreement) for symmetrical tables
  • Negative Kappa values indicate the level of agreement was below that expected by chance
  • The maximum value of Kappa is a function of the symmetry of the table and the differences in the category proportions between appraisers

Calculations

Kappa

$$\kappa = \frac{P_{\text{agreement}} - P_{\text{chance agreement}}}{1 - P_{\text{chance agreement}}}$$

Kappa is the observed proportion of agreement in excess of chance agreement, divided by the maximum possible agreement in excess of chance.
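
As a quick numeric illustration (values chosen arbitrarily, not from the text): if the observed agreement is $P_{\text{agreement}} = 0.90$ and chance agreement is $P_{\text{chance agreement}} = 0.50$, then

$$\kappa = \frac{0.90 - 0.50}{1 - 0.50} = 0.80$$

that is, 80% of the agreement attainable above chance was achieved.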

— One Appraiser
— Two Categories
— Two Repeated Assessments

Let:
N = Number of Items
M = Number of Repeated Assessments
C = Number of Categories

P Agreement
Proportion of items in which agreement occurred:

$$P_{\text{agreement}} = \frac{\text{Number of items assigned to the same category on both assessments}}{N}$$

P Chance Agreement
The sum of the products of each classification proportion:

$$P_{\text{chance agreement}} = \sum_{i=1}^{C} p_{i1}\, p_{i2}$$

where $p_{i1}$ and $p_{i2}$ are the proportions of items placed in category $i$ on the first and second assessments.

P Observed Max
Used to calculate Kappa Max:

$$P_{\text{observed max}} = \sum_{i=1}^{C} \min(p_{i1}, p_{i2})$$

Kappa Max
The maximum value of Kappa given the observed lack of symmetry:

$$\kappa_{\max} = \frac{P_{\text{observed max}} - P_{\text{chance agreement}}}{1 - P_{\text{chance agreement}}}$$

  • Kappa Max is the maximum value Kappa can attain if the only disagreement is that caused by the lack of symmetry
  • For two categories, P observed max equals one minus the absolute difference between the two off-diagonal proportions
  • Kappa Max will equal Kappa when either the count above or the count below the diagonal is zero
  • 1 - Kappa Max is the loss in agreement above chance due to non-symmetry

Standard Error Kappa
The standard error used to test whether Kappa is equal to zero (no agreement)

z-score Kappa
The significance of Kappa is tested with a z-score:

$$z = \frac{\kappa}{SE_{\kappa}}$$
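
A minimal sketch of these calculations in Python, assuming a 2 x 2 table of hypothetical counts. The standard error under the null hypothesis uses the Fleiss, Cohen and Everitt (1969) large-sample form; the formula the original showed is not reproduced here, so treat this as one common choice rather than the document's own.

```python
import math

# Hypothetical data: one appraiser rates N = 100 items twice, C = 2 categories.
# n[i][j] = count of items rated category i on trial 1 and category j on trial 2.
n = [[45, 5],
     [10, 40]]
N = sum(map(sum, n))

p = [[n[i][j] / N for j in range(2)] for i in range(2)]
row = [sum(p[i]) for i in range(2)]                        # trial-1 marginals
col = [sum(p[i][j] for i in range(2)) for j in range(2)]   # trial-2 marginals

p_o = sum(p[i][i] for i in range(2))                       # P agreement
p_e = sum(row[i] * col[i] for i in range(2))               # P chance agreement
kappa = (p_o - p_e) / (1 - p_e)

p_o_max = sum(min(row[i], col[i]) for i in range(2))       # P observed max
kappa_max = (p_o_max - p_e) / (1 - p_e)

# Large-sample SE under H0: kappa = 0 (Fleiss, Cohen & Everitt 1969 form;
# an assumption, since the document's own SE formula is not shown above).
se0 = math.sqrt(p_e + p_e ** 2
                - sum(row[i] * col[i] * (row[i] + col[i]) for i in range(2))) \
      / ((1 - p_e) * math.sqrt(N))
z = kappa / se0                                            # z-score for kappa

print(f"kappa={kappa:.3f}  kappa_max={kappa_max:.3f}  z={z:.2f}")
```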

Kappa Confidence Interval Calculations

— One Appraiser
— Two Categories
— Three or More Repeated Assessments

P Agreement
Proportion of items in which agreement occurred. With M repeated assessments, agreement is counted over all pairs of assessments of the same item:

$$P_{\text{agreement}} = \frac{1}{N}\sum_{i=1}^{N}\frac{\sum_{j=1}^{C} n_{ij}\,(n_{ij}-1)}{M\,(M-1)}$$

where $n_{ij}$ is the number of the M assessments of item $i$ placed in category $j$.

P Chance Agreement
The sum of the products of each classification proportion:

$$P_{\text{chance agreement}} = \sum_{j=1}^{C} p_j^2, \qquad p_j = \frac{1}{N M}\sum_{i=1}^{N} n_{ij}$$

P Observed Max
Used to calculate Kappa Max

SE Kappa
The standard error used to test whether Kappa is equal to zero (no agreement)

SE Kappa for CI
The standard error used for Kappa confidence intervals

CI
Kappa confidence interval:

$$\kappa \pm z_{1-\alpha/2}\, SE_{\kappa,\mathrm{CI}}$$
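
A sketch of the three-or-more-assessments case, assuming the standard Fleiss-style pairwise-agreement estimate and hypothetical data. Because the SE formulas are not reproduced above, the interval below substitutes a percentile bootstrap over items.

```python
import random

# Hypothetical data: one appraiser assesses each of N = 20 items M = 3 times,
# with C = 2 categories. counts[i] = number of the M assessments of item i
# placed in category 1 (category 2 gets the remaining M - counts[i]).
M = 3
counts = [3, 3, 0, 2, 3, 0, 0, 3, 1, 3, 0, 3, 3, 0, 2, 3, 0, 3, 3, 0]

def kappa_repeated(cnts, m):
    n_items = len(cnts)
    rows = [[c, m - c] for c in cnts]            # per-item category counts
    # P agreement: fraction of agreeing assessment pairs, averaged over items
    p_o = sum(x * (x - 1) for row in rows for x in row) / (n_items * m * (m - 1))
    # P chance agreement: sum of squared overall category proportions
    p_j = [sum(row[j] for row in rows) / (n_items * m) for j in range(2)]
    p_e = sum(pj ** 2 for pj in p_j)
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0  # guard degenerate resamples

kappa = kappa_repeated(counts, M)

# Percentile bootstrap over items, substituting for the analytic SE
random.seed(1)
boots = sorted(
    kappa_repeated([random.choice(counts) for _ in counts], M)
    for _ in range(2000)
)
lo, hi = boots[49], boots[1949]                  # approximate 95% interval
print(f"kappa={kappa:.3f}  95% bootstrap CI ~ ({lo:.3f}, {hi:.3f})")
```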

— Two Appraisers
— Two Categories
— One Assessment Each

Let:
N = Number of Items
A = Number of Appraisers
C = Number of Categories

P Agreement
Proportion of items in which agreement occurred:

$$P_{\text{agreement}} = \frac{\text{Number of items placed in the same category by both appraisers}}{N}$$

P Chance Agreement
The sum of the products of each classification proportion:

$$P_{\text{chance agreement}} = \sum_{i=1}^{C} p_{iA}\, p_{iB}$$

where $p_{iA}$ and $p_{iB}$ are the proportions of items that Appraisers A and B place in category $i$.

P Observed Max
Used to calculate Kappa Max:

$$P_{\text{observed max}} = \sum_{i=1}^{C} \min(p_{iA}, p_{iB})$$

Standard Error Kappa
The standard error used to test whether Kappa is equal to zero (no agreement)

z-score Kappa
The significance of Kappa is tested with a z-score:

$$z = \frac{\kappa}{SE_{\kappa}}$$

SE Kappa for CI
The standard error used for Kappa confidence intervals

CI
Kappa confidence interval:

$$\kappa \pm z_{1-\alpha/2}\, SE_{\kappa,\mathrm{CI}}$$
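
A sketch of the two-appraiser confidence interval, again with hypothetical counts. The SE used for the interval is the common large-sample form $\sqrt{P_o(1-P_o)/N}\,/\,(1-P_e)$, an assumption standing in for the formula the original showed.

```python
import math

# Hypothetical data: two appraisers (A, B) each rate the same N = 100 items
# once, C = 2 categories. n[i][j] = count where A chose i and B chose j.
n = [[52, 8],
     [6, 34]]
N = sum(map(sum, n))

p = [[n[i][j] / N for j in range(2)] for i in range(2)]
pA = [sum(p[i]) for i in range(2)]                         # A's marginals
pB = [sum(p[i][j] for i in range(2)) for j in range(2)]    # B's marginals

p_o = sum(p[i][i] for i in range(2))                       # P agreement
p_e = sum(pA[i] * pB[i] for i in range(2))                 # P chance agreement
kappa = (p_o - p_e) / (1 - p_e)

# Assumed large-sample SE for the confidence interval:
# sqrt(P_o(1 - P_o)/N) / (1 - P_e)
se_ci = math.sqrt(p_o * (1 - p_o) / N) / (1 - p_e)
z975 = 1.959964                                            # z for a 95% CI
print(f"kappa={kappa:.3f}  "
      f"95% CI=({kappa - z975 * se_ci:.3f}, {kappa + z975 * se_ci:.3f})")
```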

G Index

Let:
n_xy = The count of Category x, for Appraiser y or Standard s

$$G = \frac{C \cdot P_{\text{agreement}} - 1}{C - 1}$$

G is the statistic used to test for overall concordance. It is evaluated as a z test statistic.
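
A sketch of the G Index with a z test for overall concordance, using hypothetical numbers. The z statistic tests the observed agreement proportion against the chance rate $1/C$ under a binomial null; this is one plausible construction, not necessarily the exact statistic the original presented.

```python
import math

# Hypothetical numbers: N = 100 items, C = 2 categories, and 86 items on
# which the appraisers agree. G uses the standard (C*P_o - 1)/(C - 1) form.
N, C = 100, 2
agree = 86
p_o = agree / N

G = (C * p_o - 1) / (C - 1)

# Under H0 (agreement purely by chance), P_o has mean 1/C and
# variance (1/C)(1 - 1/C)/N, giving a simple z test for concordance.
p0 = 1 / C
z = (p_o - p0) / math.sqrt(p0 * (1 - p0) / N)
print(f"G={G:.3f}  z={z:.2f}")
```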