Kappa is a coefficient that measures the proportion of agreement above that expected by chance. Kappa calculations are shown below.

- Kappa may be used as a general measure of agreement for Nominal data
- Calculation methods exist for two or more categories, two or more repeated assessments, and one or more appraisers
- Tests for the significance of Kappa are available
- Confidence intervals for Kappa are available
- Kappa can vary between -1.0 (complete disagreement) and +1.0 (complete agreement) for symmetrical tables
- Negative Kappa values indicate the level of agreement was below that expected by chance
- The maximum value of Kappa is a function of the symmetry of the table and the differences in the category proportions between appraisers
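As a minimal sketch of the basic definition above, Kappa for two categories and two repeated assessments can be computed as (P_observed − P_chance) / (1 − P_chance); the function and variable names below are illustrative, not from this document:

```python
def kappa_two_assessments(first, second):
    """first, second: lists of 0/1 classifications of the same N items."""
    n = len(first)
    # P_o: proportion of items in which agreement occurred
    p_o = sum(a == b for a, b in zip(first, second)) / n
    # P_e: sum of the products of each classification proportion
    p1_first = sum(first) / n
    p1_second = sum(second) / n
    p_e = p1_first * p1_second + (1 - p1_first) * (1 - p1_second)
    return (p_o - p_e) / (1 - p_e)
```

Perfect agreement returns +1.0, and complete disagreement in a symmetrical table returns -1.0, matching the range described above.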

— Two Categories

— Two Repeated Assessments

Let:

N = Number of Items

M = Number of Repeated Assessments

C = Number of Categories

Proportion of items in which agreement occurred

The sum of the products of each classification proportion

The maximum value of Kappa given the observed lack of symmetry
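The three quantities above are commonly written as follows in the standard two-category, two-assessment formulation; this is a sketch using assumed notation, where p_{ij} is the proportion of items classified into category i on the first assessment and category j on the second:

```latex
% Sketch under assumed standard notation; may differ in detail
% from this document's exact method.
P_{observed} = p_{11} + p_{22} \qquad
P_{chance}   = p_{1\cdot}\,p_{\cdot 1} + p_{2\cdot}\,p_{\cdot 2}

K = \frac{P_{observed} - P_{chance}}{1 - P_{chance}} \qquad
K_{Max} = \frac{P_{max} - P_{chance}}{1 - P_{chance}}, \quad
P_{max} = 1 - \lvert p_{1\cdot} - p_{\cdot 1} \rvert
```

Here P_max is the largest observed agreement attainable given the marginal proportions; for a 2x2 table the marginal difference |p_{1·} − p_{·1}| equals the difference between the two off-diagonal proportions.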

- K_{Max} is the maximum value Kappa can attain if the only disagreement is that found by the lack of symmetry
- K_{Max} is one minus the proportional difference found off diagonal
- K_{Max} will equal Kappa when the number above or the number below the diagonal is zero
- 1 - K_{Max} is the loss in agreement above chance due to non-symmetry

The standard error to test if Kappa is equal to zero (No Agreement)

The significance of Kappa is tested with a z-score
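A sketch of how the z test is typically applied, assuming the standard error under the null hypothesis (SE_0) has already been computed from the formula above; names are illustrative:

```python
import math

def kappa_z_test(kappa, se0):
    """z-score for H0: Kappa = 0 (no agreement beyond chance).

    kappa: estimated Kappa; se0: standard error of Kappa under H0.
    Returns (z, one-sided p-value) using the normal approximation.
    """
    z = kappa / se0
    # one-sided upper-tail probability P(Z > z) for "agreement above chance"
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p
```

A small p-value (e.g. below 0.05) indicates agreement significantly greater than that expected by chance.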

Kappa Confidence Interval Calculations

— Two Categories

— Three or More Repeated Assessments

Proportion of items in which agreement occurred

The sum of the products of each classification proportion

Used to calculate K_{Max}

The standard error to test if Kappa is equal to zero (No Agreement)

The standard error used for Kappa confidence intervals

Kappa confidence interval
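For three or more repeated assessments, a common formulation is Fleiss' kappa, sketched below under the assumption of M assessments per item; the details may differ from this document's exact method:

```python
def fleiss_kappa(counts):
    """counts[i][c]: number of the M assessments placing item i in category c.

    Assumes N items, C categories, and the same M assessments per item.
    """
    n = len(counts)
    m = sum(counts[0])
    c = len(counts[0])
    # overall proportion of assessments falling in each category
    p = [sum(row[j] for row in counts) / (n * m) for j in range(c)]
    # P_o: per-item proportion of agreeing assessment pairs, averaged over items
    p_o = sum((sum(x * x for x in row) - m) / (m * (m - 1))
              for row in counts) / n
    # P_e: sum of the products of each classification proportion
    p_e = sum(pc * pc for pc in p)
    return (p_o - p_e) / (1 - p_e)
```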

— Two Categories

— One Assessment Each

Let:

N = Number of Items

A = Number of Appraisers

C = Number of Categories

Proportion of items in which agreement occurred

The sum of the products of each classification proportion

The standard error to test if Kappa is equal to zero (No Agreement)

The significance of Kappa is tested with a z-score

The standard error used for Kappa confidence intervals

Kappa confidence interval
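The interval itself is typically a normal-approximation interval around the Kappa estimate; a sketch assuming the confidence-interval standard error has already been computed (the z value of 1.96 for an approximate 95% interval is an assumption, not taken from this document):

```python
def kappa_confidence_interval(kappa, se, z=1.96):
    """Normal-approximation interval: Kappa +/- z * SE.

    z = 1.96 gives an approximate two-sided 95% interval.
    Bounds are clipped to [-1, 1], the attainable range of Kappa.
    """
    lower = max(-1.0, kappa - z * se)
    upper = min(1.0, kappa + z * se)
    return lower, upper
```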

n_{xy} = the count of Category x for Appraiser y (or Standard s)

This is the statistic used to test for overall Concordance.
It is evaluated as a z test statistic.