Neel Sharma identifies some areas where future educational leaders should make improvements
As a medical trainee and educationalist I have witnessed significant movement in medical education. For decades there was little change, but we are now in constant flux.
I worry about the movements taking place. Is movement happening for movement’s sake? Is movement happening in order to cement medical education as an academic discipline? As the old saying goes, if it ain’t broke, don’t fix it.
Medical education is centred on teaching, learning and assessment. Schumacher et al described the importance of teaching and learning with the aim of developing the master learner. Their elements included: ensuring team relatedness, learner autonomy, feedback, safe learning environments, continuity of learning, self-directed learning, reflection, learning personalisation, and self as well as peer assessment.
Can we honestly claim to meet these expectations? I suspect most learners experience self-directed learning only in the mad dash before exam day. And how often do learner relatedness and autonomy actually occur?
Is feedback as common a practice as we would hope? Take internal medicine exams in the US and UK as an example: feedback is minimal, with the American Board of Internal Medicine and the Membership of the Royal Colleges of Physicians of the United Kingdom offering only content area scores and no question-by-question feedback. The clinical component of the MRCP, PACES, likewise offers only a station-by-station score breakdown, with no feedback to trainees on the actual patient pathology encountered.
Evidence highlights concerns about learners’ engagement with the process, including time pressures and lack of educator support. We know learners achieve competency at different rates. Yet how much personalised learning is taking place? Often pedagogy is delivered in a blanket fashion, with one form of instruction adopted across institutions (eg a predominance of problem-based learning or team-based learning). A single form of instruction may not be congruent with every individual’s learning style.
And despite the desire for programmatic assessment, with more emphasis on the continuous capture of a learner’s progress and more holistic judgements, we are still far from this happening.
There is currently a push towards a culture of over-assessment. In the UK (with similar discussions occurring in the US) the situational judgement test (SJT) has been brought in to measure exiting students’ professional attributes. Yet we still lack evidence of its long-term worth.
Report analysis notes that the correlation between a candidate’s educational performance measure (EPM – comprising medical school performance, additional degrees and publications) and SJT scores is poor. For the 2013 applicant cohort a weakly positive correlation was noted between EPM and SJT scores (n=8,127, rs=0.30). The corresponding correlations with the EPM were r=0.30 in 2014 and r=0.34 in 2015.
Further analysis has shown no significant differences in SJT or EPM scores between those who achieved satisfactory ARCP (annual review of competence progression) outcomes and those who received unsatisfactory outcomes. Among lower SJT scorers, the SJT showed a moderate relationship with F1 performance (Pearson correlation 0.54); among higher SJT scorers the relationship was weak (Pearson correlation 0.20).
When exiting students enter real life practice there are transitional concerns. The evidence for this centres on a lack of preparedness for acute care situations, the ward environment, practical procedures, decision making, prioritisation, performing under stress, transferring knowledge to practice and overall responsibilities (8, 9, 10). Are the current teaching, learning and assessment strategies, then, ensuring readiness for working life?
We note from conferences that a large proportion of plenary sessions, symposia and workshops are delivered by non-clinician educators. Surely this is problematic. In order to understand the true issues of medical education and aim to solve them, shouldn’t we be inviting students and trainees to lead the way? We highlight the value of a needs assessment before undertaking any worthwhile research study.
Yet how often do we actually do this in practice? It seems an idea takes hold and institutions globally follow, under the assumption that we suddenly need to revamp our pedagogical methods. But have students performed poorly under previous methods? Are students themselves requesting a new form of pedagogy? And what exactly is the true relationship between particular forms of pedagogy and long-term performance?
The next question is how often we include patients in our educational reform decisions. Surely we should ask patients about their concerns regarding how doctors are trained, and what they feel needs to improve. The UK’s General Medical Council in 2012 noted a rise in complaints against doctors, specifically in relation to poor communication and lack of respect.
In the US, a breakdown in physician–patient communication is demonstrated in 40 per cent or more of malpractice suits. This finding is noteworthy, particularly in view of our frequent overemphasis on knowledge and skill acquisition during medical training at the expense of so-called soft skills.
Whilst movement has occurred in medical education, it does not appear to be solving the problems of real life working environments. We need greater transparency in medical education, and greater involvement from learners on the shop floor as well as patients on the receiving end of care. Only then can great things happen.
Dr Neel Sharma is a gastroenterology trainee and has worked in medical education across the UK, Asia and the US