Agreement By Chance

We find that the second case shows greater agreement between A and B than the first. This is because, although the percentage of observed agreement is the same, the percentage of agreement that would occur "by chance" is considerably higher in the first case (0.54 versus 0.46).

A recent study [12] examined inter-rater agreement for a specific magnetic resonance imaging (MRI) sequence in 84 children who, for one reason or another, had undergone an MRI in a large public hospital. Two radiologists, blinded to each other's assessments, reported all the lesions they identified in each patient. A third radiologist collated these independent readings and identified which lesions and diagnoses were concordant and which were contradictory. A total of 249 distinct lesions were identified in 58 children (the other 26 had normal MRIs); 76 were contradictory and 173 concordant (Table 2). The disagreement is due to quantity, because the allocation is optimal; kappa is 0.01.

We can see from the output that follows that the "Simple Kappa" line gives the estimated value of kappa, 0.389, with its asymptotic standard error (ASE) of 0.0598. The difference between the observed agreement and the agreement expected under independence is about 40% of the maximum possible difference. Based on the reported 95% confidence interval, $\kappa$ falls somewhere between 0.27 and 0.51, indicating only moderate agreement between Siskel and Ebert. It should be noted that the equation for KFR corresponds to the proportion of specific (positive) agreement described by Fleiss [9].
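To make the relationship between observed agreement, chance agreement, and kappa concrete, the sketch below computes Cohen's kappa as $\kappa = (p_o - p_e)/(1 - p_e)$ from a square agreement table. The function name and the counts in the example table are illustrative assumptions, not values from the study in [12].

```python
import numpy as np

def cohen_kappa(table):
    """Cohen's kappa for a square agreement table (rows: rater A, columns: rater B)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                                 # observed agreement p_o
    p_exp = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2    # chance agreement p_e
    return (p_obs - p_exp) / (1 - p_exp)                        # kappa = (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table (counts are illustrative only):
#                 B: positive   B: negative
# A: positive          40            10
# A: negative          15            35
table = [[40, 10], [15, 35]]
print(round(cohen_kappa(table), 3))  # 0.5 for these counts
```

The numerator is how much the observed agreement exceeds the agreement expected by chance, and the denominator is the maximum possible excess, which is why kappa is read as a fraction of the maximum possible improvement over chance.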

Although the equation is the same, the purpose and interpretation are different. For Fleiss, specific positive agreement (and likewise specific negative agreement) is a complementary statistic that improves the interpretation of overall agreement. The omission of the jointly negative ratings is an a priori decision. . . .
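As a companion to the kappa sketch above, the following illustrates Fleiss's specific positive and negative agreement for a 2x2 table. The helper name is an assumption, and the counts are the same hypothetical ones used earlier.

```python
def specific_agreement(a, b, c, d):
    """Specific positive and negative agreement for a 2x2 table
    [[a, b], [c, d]], where a = both raters positive and d = both negative."""
    p_pos = 2 * a / (2 * a + b + c)   # omits the jointly negative cell d
    p_neg = 2 * d / (2 * d + b + c)   # omits the jointly positive cell a
    return p_pos, p_neg

# Same hypothetical counts as in the kappa example (illustrative only):
print(specific_agreement(40, 10, 15, 35))  # (~0.762, ~0.737)
```

The deliberate exclusion of the jointly negative cell from the positive-agreement ratio is the a priori decision referred to above: the statistic asks only how well the raters agree when at least one of them calls the case positive.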