# Kappa

Kappa is a coefficient that measures the proportion of agreement above that expected by chance. Kappa calculations are shown below.
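All of the calculations below are built on the same basic form. As a sketch, assuming the conventional definition (the source's formula is not reproduced here):

```latex
\kappa = \frac{P_{o} - P_{e}}{1 - P_{e}}
```

where $P_o$ is the observed proportion of agreement and $P_e$ is the proportion of agreement expected by chance.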

## Notes on Kappa

• Kappa may be used as a general measure of agreement for Nominal data
• Calculation methods exist for two or more categories, two or more repeated assessments, and one or more appraisers
• Tests for the significance of Kappa are available
• Confidence intervals for Kappa are available
• Kappa can vary from -1.0 (complete disagreement) to +1.0 (complete agreement) for symmetrical tables
• Negative Kappa values indicate the level of agreement was below that expected by chance
• The maximum value of Kappa is a function of the symmetry of the table and the differences in the category proportions between appraisers

## Calculations

### One Appraiser, Two Categories, Two Repeated Assessments

Let:
N = Number of Items
M = Number of Repeated Assessments
C = Number of Categories

Proportion of items in which agreement occurred

The sum of the products of each classification proportion
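For a two-category table these quantities can be written out as follows (a sketch assuming the conventional notation, where $p_{ij}$ is the proportion of items classified into category $i$ on the first assessment and category $j$ on the second, with marginal proportions $p_{i\cdot}$ and $p_{\cdot j}$):

```latex
P_{o} = \sum_{i=1}^{C} p_{ii} = p_{11} + p_{22}
\qquad
P_{e} = \sum_{i=1}^{C} p_{i\cdot}\, p_{\cdot i}
\qquad
\hat{\kappa} = \frac{P_{o} - P_{e}}{1 - P_{e}}
```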

The maximum value of Kappa given the observed lack of symmetry

• KMax is the maximum value Kappa can attain if the only disagreement is that found by the lack of symmetry
• KMax is one minus the proportional difference found off diagonal
• KMax will equal Kappa when the number above or the number below the diagonal is zero
• 1-KMax is the loss in agreement above chance due to non-symmetry
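In the conventional construction (an assumption, since the source formula is not shown), the maximum attainable agreement replaces $P_o$ with the sum of the smaller of each pair of marginal proportions:

```latex
P_{\max} = \sum_{i=1}^{C} \min\!\left(p_{i\cdot},\, p_{\cdot i}\right)
\qquad
K_{\max} = \frac{P_{\max} - P_{e}}{1 - P_{e}}
```

For two categories, $P_{\max} = 1 - \lvert p_{12} - p_{21} \rvert$, which matches the bullet above: one minus the proportional difference found off the diagonal.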

The standard error to test if Kappa is equal to zero (No Agreement)

The significance of Kappa is tested with a z-score
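One standard large-sample form for this test (an assumption; the source formula is not shown) is:

```latex
SE_{0}(\hat{\kappa}) = \frac{\sqrt{\,P_{e} + P_{e}^{2} - \sum_{i} p_{i\cdot}\, p_{\cdot i}\,(p_{i\cdot} + p_{\cdot i})\,}}{(1 - P_{e})\sqrt{N}}
\qquad
z = \frac{\hat{\kappa}}{SE_{0}(\hat{\kappa})}
```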

Kappa Confidence Interval Calculations
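A common large-sample approximation for the interval (again an assumption about the form used) is:

```latex
SE(\hat{\kappa}) \approx \frac{\sqrt{P_{o}\,(1 - P_{o})}}{(1 - P_{e})\sqrt{N}}
\qquad
\hat{\kappa} \pm z_{\alpha/2}\, SE(\hat{\kappa})
```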

### One Appraiser, Two Categories, Three or More Repeated Assessments

Proportion of items in which agreement occurred

The sum of the products of each classification proportion

Used to calculate KMax
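For two categories and $M$ repeated assessments per item, the usual within-appraiser construction (a sketch, assuming the standard pairwise-agreement form) counts agreeing pairs of assessments. Let $x_i$ be the number of the $M$ assessments of item $i$ placed in category 1, with $\bar{p} = \sum_i x_i / (NM)$ and $\bar{q} = 1 - \bar{p}$:

```latex
P_{o} = 1 - \frac{2\sum_{i=1}^{N} x_i\,(M - x_i)}{N\,M\,(M-1)}
\qquad
P_{e} = \bar{p}^{2} + \bar{q}^{2}
\qquad
\hat{\kappa} = \frac{P_{o} - P_{e}}{1 - P_{e}} = 1 - \frac{\sum_{i} x_i\,(M - x_i)}{N\,M\,(M-1)\,\bar{p}\,\bar{q}}
```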

The standard error to test if Kappa is equal to zero (No Agreement)

The standard error used for Kappa confidence intervals

Kappa confidence interval

### Two Appraisers, Two Categories, One Assessment Each

Let:
N = Number of Items
A = Number of Appraisers
C = Number of Categories

Proportion of items in which agreement occurred

The sum of the products of each classification proportion

The standard error to test if Kappa is equal to zero (No Agreement)

The significance of Kappa is tested with a z-score

The standard error used for Kappa confidence intervals

Kappa confidence interval
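The two-appraiser calculations can be sketched numerically. This is an illustration, not the source's implementation: the function name and return layout are invented, and the standard-error formulas are the conventional large-sample forms assumed above.

```python
import math

def kappa_two_appraisers(table, z_crit=1.96):
    """Kappa for two appraisers, two categories, one assessment each.

    table[i][j] = count of items Appraiser 1 put in category i and
    Appraiser 2 put in category j. Returns (kappa, z, (lo, hi)).
    Name and return layout are illustrative, not from the source.
    """
    n = sum(sum(row) for row in table)
    p = [[c / n for c in row] for row in table]        # cell proportions
    row_m = [sum(r) for r in p]                        # Appraiser 1 marginals
    col_m = [sum(c) for c in zip(*p)]                  # Appraiser 2 marginals
    p_o = sum(p[i][i] for i in range(len(p)))          # observed agreement
    p_e = sum(r * c for r, c in zip(row_m, col_m))     # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    # Large-sample SE under H0: kappa = 0, used for the z test
    se0 = math.sqrt(
        p_e + p_e**2 - sum(r * c * (r + c) for r, c in zip(row_m, col_m))
    ) / ((1 - p_e) * math.sqrt(n))
    z = kappa / se0
    # Approximate SE for the confidence interval
    se = math.sqrt(p_o * (1 - p_o)) / ((1 - p_e) * math.sqrt(n))
    return kappa, z, (kappa - z_crit * se, kappa + z_crit * se)
```

For example, `kappa_two_appraisers([[20, 5], [10, 15]])` gives Kappa = 0.4 with z about 2.89, agreement significantly above chance at the usual 0.05 level.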

## G Index

n_xy = the count of Category x for Appraiser y (or Standard s)

This is the statistic used to test for overall Concordance. It is evaluated as a z test statistic.
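Assuming the Holley and Guilford form of the G index (the source formula is not shown), with $P_o$ the observed proportion of agreement and $C$ categories:

```latex
G = \frac{C\,P_{o} - 1}{C - 1}
```

For two categories this reduces to $G = 2P_{o} - 1$. Under the null hypothesis that classification agrees only at chance ($P_o = 1/C$), one way to evaluate it as a z statistic is:

```latex
z = \frac{P_{o} - 1/C}{\sqrt{\dfrac{\tfrac{1}{C}\left(1 - \tfrac{1}{C}\right)}{N}}}
```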