It's got to be (im)perfect

Clinical Adviser Daniel Hardiman-McCartney and Head of Research Martin Cordiner discuss why you should keep your referral worries in perspective.


Author: Daniel Hardiman-McCartney MCOptom, Clinical Adviser, and Martin Cordiner, Head of Research
Date: 23 February 2016

How good are you at spotting glaucoma, AMD or diabetic retinopathy? If we surveyed College members, we might end up with the classic finding so often reported for drivers: more than half describe themselves as better than average. But what does ‘good’ actually mean?

When to refer, and indeed when not to refer, might be one way of rating your ability. We’re guessing that within the last month or two, you may have had a casual conversation with a colleague in which you recalled a situation where it was a bit of a struggle to decide whether to refer or not. A borderline case? An ambiguous visual field? Although the guidelines give specific instructions, you can’t always be 100% certain, and you can’t always be 100% correct. 


In fact, mathematically, it is almost certain that not all of your referrals will be correct. This is because of the delicate balance between two principles called sensitivity and specificity (covered fully in the Optometry Tomorrow 2016 session, ‘Seeing Red: interpreting abnormal OCT images’, to be presented by David Sculfor on Monday 14 March at 3.45pm).

So how ‘good’ you are could be summed up by how good you are at spotting a condition and how good you are at correctly saying that it is not there. In other words, how many false positives (you say it’s there and it isn’t) and false negatives (you say it’s not there but it is) you produce. Sensitivity is the measure of true positives and specificity is the measure of true negatives. So sensitivity quantifies how well you avoid false negatives, and specificity how well you avoid false positives. And yes, we did have to remind ourselves which is which by looking them up.
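The arithmetic behind those two measures is simple enough to sketch in a few lines. The counts below are invented for illustration, not drawn from any study:

```python
# A minimal sketch of how sensitivity and specificity fall out of a
# confusion matrix. All counts here are illustrative assumptions.
tp = 90   # true positives: condition present, test says present
fn = 10   # false negatives: condition present, test says absent
tn = 950  # true negatives: condition absent, test says absent
fp = 50   # false positives: condition absent, test says present

sensitivity = tp / (tp + fn)  # proportion of real cases the test catches
specificity = tn / (tn + fp)  # proportion of healthy eyes correctly cleared

print(f"sensitivity = {sensitivity:.1%}")  # 90.0%
print(f"specificity = {specificity:.1%}")  # 95.0%
```

Note that the two denominators are different groups of patients, which is why a test can score well on one measure and poorly on the other.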

If we consider diabetic retinopathy as a practical example, a recent study of 8,977 patients found that optometrists reviewing retinal photographs for sight-threatening retinopathy had a sensitivity of 78.2% and a specificity of 98.1%: particularly high marks, but definitely not 100%.

It also found that, in a comparison between the decisions of two optometrists, there was notable variation at both normal and lower levels of retinopathy.

The important point is that there is always a trade-off between sensitivity and specificity, because it is not feasible to have a perfect test (one that is right 100% of the time). If you really, really want to avoid false negatives, such as in airport security, then you’re going to get some false positives. Your threshold for triggering a positive (i.e. this person needs to be looked at more closely) is very low (a belt buckle or a metal coin). On the other hand, if you are anxious to avoid false positives (maybe with drug-testing in sport, where you need to be pretty sure) then your test will have a higher threshold, meaning there will be more false negatives (athletes who did use a banned substance but whose result fell below the threshold of this particular test).
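That see-saw between the two measures can be made concrete with a toy example. The test scores below are invented; the point is only that moving the same threshold pushes sensitivity and specificity in opposite directions:

```python
# Illustrative (invented) test scores for two groups of patients.
diseased = [3, 5, 6, 7, 8, 9]   # condition present
healthy  = [1, 2, 2, 3, 4, 6]   # condition absent

def rates(threshold):
    """Sensitivity and specificity if we call 'positive' at score >= threshold."""
    sens = sum(s >= threshold for s in diseased) / len(diseased)
    spec = sum(s < threshold for s in healthy) / len(healthy)
    return sens, spec

# Lower the bar and you catch every case but flag healthy eyes too;
# raise it and the reverse happens.
for t in (3, 5, 7):
    sens, spec = rates(t)
    print(f"threshold {t}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

With the lowest threshold this sketch gives 100% sensitivity but only 50% specificity; with the highest, the figures swap over. No choice of threshold delivers 100% on both at once, because the two score distributions overlap.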

And for optometry, with increasingly sensitive diagnostics available, this issue presents itself. As we increase diagnostic sensitivity for conditions, particularly those with a low incidence, we typically reduce the test’s specificity, producing more false positives. These carry real costs: patient anxiety, and the resources consumed by an unnecessary referral.

So, if no test is perfect, the key thing is to work out the appropriate threshold for the given situation, and that’s why the profession uses the tests it does for each condition. In diabetic screening that threshold has been suggested to be 80% sensitivity and 95% specificity. We don’t want to refer everyone to the Hospital Eye Service, but we also don’t want anyone’s condition to be missed at a key point. We can add into the equation the individual patient and their history, the likelihood of a condition (see our previous blog about glaucoma) and any other relevant factors to make an overall decision.

We can all improve our skills, sure. But we cannot all be correct 100% of the time, and that’s okay. Actually, it’s almost certainly unavoidable.



Grader agreement, and sensitivity and specificity of digital photography in a community optometry-based diabetic eye screening program (NCBI, 17 July 2014)

Sensitivity and specificity of the Swedish interactive threshold algorithm for glaucomatous visual field defects (NCBI, June 2002) 

Relative risk calculator (MedCalc)  

Daniel Hardiman-McCartney FCOptom
Clinical Adviser, The College of Optometrists

Daniel graduated from Anglia Ruskin University, where he won the Haag-Streit prize for best dissertation. Before joining the College, he was Managing Director of an independent practice in Cambridge and a visiting clinician at Anglia Ruskin University. He has also worked as a senior glaucoma optometrist with Addenbrooke’s Hospital in Cambridge, with Newmedica across East Anglia and as a diabetic retinopathy screening optometrist. Daniel was a member of Cambridgeshire LOC from 2007 to 2015 and a member of the College of Optometrists’ Council from 2009 to 2014, representing its Eastern region.

He is Clinical Adviser to the College of Optometrists for four days each week, dividing the remainder of his time between primary care practice and glaucoma community clinics. Daniel is a passionate advocate of the profession of optometry, committed to supporting all members of the profession and ensuring patient care is always at the heart of optometry. He was awarded Fellowship by Portfolio in December 2018.

Martin Cordiner
Head of Research, College of Optometrists

Martin graduated with a Master’s in Modern History from the University of York in 2005, having completed his BA there in 2003. Since then he has worked in project management in higher education before joining the College and its fledgling research department in 2009, where he now supports the Director of Research and manages the research team to implement all elements of the College’s Research Strategy.
