Performance Appraisal Inside the Medical Club Part 4

The Appraisal Ritual within the Medical Club: ‘Bureaucratic Accountability’ and ‘Paperwork Compliance’

Appraisal requires that doctors document their decisions and the reasons for them in a manner that is open to collegiate surveillance and correction (Harrison and Dowswell 2002). Interviewees acknowledged that portfolio appraisal leaves a ‘paper trail’ and, in doing so, places them under greater surveillance and control by medical elites, such as medical schools, postgraduate deaneries and Royal Colleges. As Dr Yellow (Surgeon) points out:

‘With portfolios being used more and more like they are you could argue that there is a ‘paper trail’ now leading up from medical school and junior doctor levels to the senior registrar and consultant level. And I really don’t think we are far off having some kind of administrative system that joins all these portfolios together so if I have signed off my house officer as competent, but later on he has problems and messes up, then the colleges or GMC could come knocking on my door with questions’.

From interviewees’ accounts it became clear that this ‘paper trail’ produced a situation whereby the knowledge that appraisal records can be inspected was enough for doctors to act as if they would be inspected. This encouraged doctors to discipline themselves and change their behaviour (Foucault 1977, Rose and Miller 1992), both in terms of how they supervise and assess trainees, and in how they monitor their own continued ‘fitness to practice’ in preparation for Annual Appraisal. As the respective extracts from Dr Gold (Physician) and Dr Green (General Practitioner) illustrate:


‘Having to use a portfolio to help me plan and record student assessment has changed in some ways how I go about supervising them…because the portfolio approach requires I fill in forms saying I have tested them on such things I am much more focused now on making sure their clinical and communication skills are thoroughly tested, even if I do do it my own way’.

Dr Gold

‘I would say that (annual) appraisal can make you a little more focused upon making sure that you are using clinical protocols and guidelines at a day to day level. Because, well, it’s like having done one appraisal you realise what you can do to make the next one less of a hassle. At least at a basic administrative level of form completion. And that is bound to have some knock-on effects on clinical practice’.

Dr Green

Dr Gold and Dr Green both note that the implementation of portfolio based performance appraisal has caused them to alter some aspects of their behaviour. However, interviewees’ accounts also showed that these changes were highly superficial and had little real effect on how they went about their educational activities. Portfolio appraisal draws upon a rationalistic-bureaucratic discourse of outcomes based performance assessment, utilising explicit standards to establish norms that govern what is reasonable, desirable and efficient (Challis 1998, Searle 2000). Supervisors are expected to use these standards when judging a trainee’s clinical performance, as well as their own practice achievements in Annual Appraisal (Snadden and Thomas 1999, Southgate 2001). Yet interviewees’ accounts revealed that they were not prepared to accept these norms and standards without question, or to apply them without first interpreting their meaning and appropriate application, given that they see themselves as experts in their medical specialty. As Dr White (General Practitioner) notes:

‘The competency domains that are listed in the student portfolio try to be prescriptive. By saying that students need to be able to do a, b, c and d clinically by the end of a placement. As well as that I need to make sure that they can do a, b, c and d to such and such standard. But try as they (the medical school) might they can’t turn core practice skills like taking a good history into a simple yes/no ‘tick box’ exercise…it is still down to you to judge how well they have done…and nine times out of ten you really are just fitting what you have done yourself to test them into the appropriate box on the portfolio forms’.

As Dr Turquoise’s (Surgeon) comments below show, interviewees were not prepared to allow themselves to be judged without themselves judging the utility of the process being used to judge them:

‘I certainly made sure I got across how utterly worthless I thought the whole appraisal process was. Come to think of it, that is probably why the meeting only lasted around fifteen minutes and mainly involved ‘John’ (his clinical director) apologising for the excessive amount of paperwork involved (laughs)’.

Doctors’ accounts may have emphasised how clinical placements for trainees are more structured, given recent reforms in undergraduate medical education and junior doctor training (BMA 2005). They may also have shown how medical elites are increasingly providing prescriptive ‘outcome based’ standards by which to judge clinical performance (Searle 2000, Wilkinson 2002). In doing so, they highlighted how the implementation of portfolio based performance appraisal can be linked with this growing process of formalisation and standardisation across clinical education, and provide support for the restratification thesis (Challis 1999, Southgate 2001). However, interviewees’ accounts also reinforced how, within medicine, significant emphasis is placed upon gaining direct personal experience of clinical phenomena in order to develop that ‘ineffable’ occupational quality, ‘clinical acumen’. What is more, interviewees’ accounts showed how this emphasis upon gaining clinical experience leads ‘rank and file’ doctors to resist the codified ‘rules and procedures’ associated with the new rationalistic-bureaucratic governing regime being imposed upon them by medical elites (Armstrong 2002).

Based upon interviewees’ accounts, three broad camps emerged in terms of how far their actions conformed to the requirement that they meet with trainees at certain points during clinical placements, and use the competency domains and performance criteria contained within a trainee’s portfolio documentation to initially agree, and subsequently record the achievement of, learning goals. First, and most dominant, were the ‘non-compliers’ (nineteen out of forty-six interviewees). These individuals might support the need for changes in medical training, and be passionate about their medical specialty and the supervision of trainees, but they completely ignored a trainee’s portfolio when assessing them. They preferred instead to do things their own way and ‘sign off’ a trainee’s portfolio documentation at the end of the placement i.e. ‘I don’t tend to look at their portfolios really. There really is no need for me to do that past scanning it to make sure all the boxes have been ticked and the right pieces of paper completed and signed at the end of the placement… When they (a trainee) turns up on day one I tell them what I expect of them and will be looking out for, and I tend to just fill in the portfolio paperwork around those expectations of them…the portfolios are not for me but for the medical school or the deanery; so they can say they have met whatever political requirements they must these days’ (Dr Violet, Physician).

‘Non-compliers’ did not use a portfolio to help them manage and undertake student assessment. Typically, trainees’ portfolios were viewed as ‘paper tasks’ that ‘we have been asked to complete by the medical school and deanery so they can say they have up to date records when the GMC comes to visit’ (Dr Red, Surgeon). This was a viewpoint shared by the second group, the ‘minimalists’ (seventeen out of forty-six interviewees). Like ‘non-compliers’, ‘minimalists’ viewed portfolios as tools introduced by the medical school and postgraduate deanery to meet the changing training and regulatory requirements of the broader socio-political context surrounding medical governance. Unlike ‘non-compliers’, however, they reported that they held some, albeit highly informal and often irregular, one to one meetings with students to check on their progress. They also had some ‘minimal’ contact with a trainee’s portfolio during a clinical placement, unlike their ‘non-complier’ counterparts, who only saw a trainee’s portfolio at the end of a placement. ‘Minimalists’ reported that having minimal contact with trainees’ portfolios helped them check on an ongoing basis that the paperwork was up to date, which they felt made its completion an easier task i.e. ‘I do try to arrange things so that we meet and record what is happening in the portfolio as I think it is important to keep a record of their progress as they go along…I find that approach is liked by students as it makes them feel that you are keeping a helpful eye on them. And it certainly makes checking and completing portfolio documentation less of an onerous task than it is when you try to do everything all at once at the end’ (Dr Yellow, Surgeon).

‘Minimalists’ tend to ‘use the portfolio to make sure they [students and junior doctors] have completed the necessary sections as they go along, so we don’t end up doing it all at the end, as that can be very time consuming’ (Dr Brown, General Practitioner). Like the ‘minimalists’, ‘enthusiasts’ (ten out of forty-six interviewees) held meetings with trainees during a clinical placement. With ‘enthusiasts’, however, these were more formally planned to take place at the beginning, the middle and the end of a trainee’s clinical placement. They were enthusiastic about the use of a portfolio to record student progress, arguing, ‘the portfolio approach is useful as it helps you keep track of a junior or student’ (Dr Orange, Physician). They felt that this reinforced to trainees that they must keep a record of their activities. Unlike the ‘non-compliers’ and ‘minimalists’, ‘enthusiasts’ used portfolio documentation to guide the practice areas in which they would assess students, albeit in a basic, highly superficial, manner. Yet although they used portfolio documents to inform and guide them, like ‘non-compliers’ and ‘minimalists’ they reserved the right to assess students as and how they thought fit, using their own personal standards of judgment i.e. ‘I do try and make sure to meet regularly with students and that I do assess them in the key areas in the portfolio documentation. Like communication skills with patients, or their ability to use clinical protocols when formulating a diagnosis…But at the end of the day what I am most concerned with is satisfying in my own mind that they are basically competent to do the job as I see it, not with filling in the portfolio paperwork. Don’t get me wrong. I like the portfolio approach. It’s just that, well, I think if you talk to my colleagues they will tell you much the same thing. That it does tend to be too prescriptive and tell you what to do assessment wise with a student or a junior (doctor)…It’s like they (the medical school and deanery) are trying to remove the need for you to exercise your personal judgment by giving you a form to fill in, in a set way, using prescriptive criteria. And I think you can’t expect us as experienced clinicians to agree with that approach’ (Dr White, General Practitioner).

In spite of relatively minor differences between the three groups, all forty-six interviewees self-reported that they adopted a stance of ‘paperwork compliance’ towards portfolio appraisal. Furthermore, they held that the medical school and postgraduate deanery were aware that supervisors were ‘handling student assessment in our own way as we have always done’ (Dr Blue, Surgeon). Supervisors believed that medical school and postgraduate deanery staff left them alone because they trusted their judgments. There was also the fact that supervisors are not directly employed by, or accountable to, medical schools and postgraduate deaneries. As Dr Purple (Surgeon) points out:

‘It’s my job to look after students when they are here and report back about how they have got on. That is all. Don’t get me wrong. I like teaching them. But I don’t really care if the medical school or the deanery are happy or not with the way I have filled out one of their forms’.

Interviewees’ accounts of how they supervised and assessed students, and the role the portfolio plays within this, are just that: accounts. It was not possible to substantiate whether or not they behaved in the way they said they did when supervising students or junior doctors. Nevertheless, the doctors interviewed provided corroborating accounts independently of each other (Bryman 2004). This lends some weight to the validity and reliability of the key finding of this research: a trainee’s portfolio played a superficial role in helping supervisors, first, decide what work tasks an appraisee should undertake and be assessed in, and second, form an opinion about the level of technical proficiency possessed by an appraisee in these tasks. Dr Silver (General Practitioner) makes this point succinctly:

‘I know that in many ways medical education is a different beast these days; it is more structured, more ‘best evidenced’ and more standards driven. I certainly think that is what the likes of the GMC and the medical school want you to believe. And on the surface it does look different. But beneath it is exactly as it was when I was a junior…If you want to get on and pass a placement you do as your consultant says. And you do what they tell you to do in the way they want you to do it, or else you end up in trouble. And I don’t see anything wrong with that. After all, they are here to learn’.

The need for trainees to conform to their consultant’s expectations, alongside the need for trainees to maximize the ‘clinical experience’ obtained during a placement, was used repeatedly by interviewees to justify the view that portfolio appraisal for trainees was essentially a ‘form filling’ exercise. Given the nature of clinical training and work, interviewees saw it as ‘natural’ for portfolio paperwork to be ignored or paid superficial regard. This attitude is perhaps to be expected; after all, clinical placements are the sites where ‘clinical acumen’ is developed and tested (Atkinson 1995). Indeed, although supervisors might ask medical and nursing staff their thoughts on a student, they preferred to ‘see for myself what they can do in clinic, on the ward and in theatre, before I make a decision on how good they are’ (Dr Red, Surgeon). In short, interviewees ‘trumped’, with their personal ‘clinical acumen’, the formal ‘technicality’ bound up with portfolio based performance appraisal documentation, including the requirement to consult other individuals concerning a trainee. This approach was further justified in the minds of interviewees by the fact that it is they, not their clinical colleagues, who must finally ‘sign off’ a trainee as ‘fit to practice’. The heavy emphasis placed upon gaining clinical experience also seemed to lead trainees to be complicit in interviewees’ adoption of a stance of ‘paperwork compliance’ toward portfolio based performance appraisal. Interviewees, like Dr Black (Physician) below, frequently reported that trainees also viewed portfolios as ‘paper chasing exercises’:

‘All the students I meet complain about the amount of form filling involved in maintaining a portfolio, so I don’t think it really matters to them how it is done as they want to get on and learn, not sit around and fill in and discuss portfolio forms’.

In spite of the recent rhetoric of medical elites (e.g. GMC 2003, BMA 2005), today’s trainees are no more ‘stakeholders’ and ‘equal partners’ in their education than their predecessors. According to interviewees, trainees must keep their mouths shut and do as they are told if they want to ‘get on’ during placements and gain a good reference in order to climb the career ladder. The following comments from Dr Bronze (Physician) reinforce this view:

‘I know with things like the portfolio you are meant to ask them (junior doctors) what they want to do and achieve. And you do pay lip service to that. As you are trying to get to know them so you can mentor them along a bit, and knowing their career aspirations and the like helps you do that. But, come on, who is the consultant here? I don’t think you really should expect that they will have a say over what they should do. They are here to learn from me and that means they do as I say…(and) that’s how they get a good reference at the end from me’.

Similar to Annual Appraisal, the general preference amongst interviewees was to deal with underperformance through ‘informal mechanisms’. As the following comments from Dr Green (General Practitioner) reinforce, interviewees also preferred not to report a student or junior doctor if they could help it:

‘I do tend to let them off with minor things like poor attendance or if they talk down to the support staff, which can happen… I’ll have a word with them about things like that even though it may say that everything is OK in the portfolio…(because) I’m not going to mention things to the medical school or deanery unless I really need to…I just think they deserve the benefit of the doubt. We all do I think (laughs). But particularly them, as they are still learning the ropes. (So) when there is a problem I just have words with them, and I think that is most appropriate in most cases’.

Furthermore, when they did report a student or junior they preferred to bypass the formal referral documentation held within a trainee’s portfolio, as noted by Dr Orange (Physician):

‘There was one lad who was a real problem. His practice skills were diabolical. He really needed intensive support. To tell you the truth I don’t know how he managed to pass medical school. Anyway, the upshot was that I did refer him. But I didn’t use the portfolio documentation. Nothing was written down on paper. I just phoned the deanery and we arranged some extra skills training for him. I thought that the best way of doing it really, without making a fuss or bringing too much attention to it.’

The comments from Dr Orange and Dr Green reinforce how doctors may view portfolio based performance appraisal as a ‘paper chasing’ exercise, but are wary of processes which formally record poor performance. This is in no small part due to the high level of ‘indetermination’ and ‘uncertainty’ involved in medical work (Atkinson 1995).

Two points need to be made about these findings. The first concerns supervisors’ reported reluctance to refer underperforming trainees for formal remedial action by medical elites. This does not mean that the ‘informal processes’ they used to remedy such situations were inadequate or failed to improve a trainee’s performance. Furthermore, it must be admitted that it is impossible to know exactly what happened when underperformance was identified. Consequently, the data obtained should not be taken to indicate that underperforming, and so potentially dangerous, medical students and junior doctors are passing assessments of their ‘fitness to practice’. Interviewees’ accounts do not provide information on the prevalence of underperformance amongst trainees; that was not the purpose of this research. Yet their accounts do reinforce the notion that there is a reluctance to formally report such incidents, due to the existence of cultural barriers within the medical club to ‘incident reporting’ (Waring 2005). This is quite possibly the result of a general acceptance amongst club members that there is potential for clinical error to occur no matter how competent an individual doctor is, so it is necessary to ‘close ranks’ and deal with such matters ‘off the record’ as far as possible (Stacey 2000).

Second, the findings should not be interpreted to imply that trainees’ clinical placements, or their general level of supervision and clinical teaching, were not of the highest quality. The quality of clinical education received should not be judged by the fact that supervisors’ accountability to medical elites for their actions and judgments possesses a ‘ritual quality’, or that they adopt a stance of ‘paperwork compliance’ towards trainee performance appraisal. It was clear that interviewees demanded the highest standards from students and tested them rigorously. However, they did so in their own idiosyncratic way, instead of adopting the more rigid approach required by portfolio appraisal.

Yet doctors’ accounts do reinforce the fact that there is good reason to doubt the legitimacy of portfolio based performance appraisal as an accurate administrative record and governance tool which promotes medicine’s ‘new professionalism’ through supporting individual and organisational accountability in the training and regulatory arena. Certainly, medicine’s ‘new markers of trust’ appear less than trustworthy (Kuhlmann 2006b). In conclusion, the central finding of this empirical research was that portfolio based performance appraisal might make ‘rank and file’ doctors ‘bureaucratically accountable’ to medical elites, for they must now record their decisions and the reasons behind them; nevertheless, this accountability possesses a ‘ritual quality’, as illustrated by interviewees’ adoption of ‘paperwork compliance’ towards portfolio appraisal for trainees. Two contextual factors sustain ‘paperwork compliance’ amongst interviewees: ‘structural’ and ‘ideological’.

Supporting ‘Paperwork Compliance’: The Structural Factor

The factor themed as ‘structural’ is defined as such because it arises from the structural relationship which currently exists between the hospitals and general practice surgeries that act as clinical education sites, and the medical schools and postgraduate deaneries who are responsible for medical training. It is estimated that some 16,000 NHS clinical staff are involved in some way with the teaching of medical students and junior doctors (Bligh 2001). The vast majority of these are not directly employed by medical schools or postgraduate deaneries. They include doctors similar to those interviewed for this research who act as clinical placement supervisors. The arrangements for allocating clinical placement supervisors have traditionally been ‘ad hoc’ and decided ‘in-house’ within a clinical specialty; the notion was that ‘somebody has to do it’. Furthermore, this supervision was arranged outside the direct control of medical schools or postgraduate deaneries (Stewart 2002). In the context of clinical education, this reinforced the right of specialties to organise their own clinical teaching arrangements (Sinclair 1997).

Clinical teaching attracts SIFT money (Service Increment For Teaching) (Irby 1994). A doctor’s hospital department or GP practice receives SIFT money for the clinical supervision and teaching it provides. This means that, unlike their peers working in the medical school classroom or in medical school management, individual practitioners are not personally contractually accountable to the medical school or postgraduate deanery (Lowry 1993, Bligh 2001). Rather, they remain NHS employees. The lack of formal contractual accountability of clinical placement supervisors to medical elites raises four key issues. First, although they are not directly employed by the medical school or postgraduate deanery, senior doctors have a professional obligation to teach and ‘pass on the science and the art’ to the next generation of clinicians (Bligh 1998, Irvine 2003). However, there is little formal recognition in doctors’ NHS contracts for the educational work they do. Contractually, there is little time allocated for educational activities, and no personal reward, either monetary or in terms of status and recognition, for what is done (Sinclair 1997). Furthermore, doctors’ NHS contracts actually place teaching duties in formal competition with other non-clinical responsibilities, including audit, research and continuing professional development (Stewart 2002). It was not uncommon for interviewees to complain about the amount of paperwork involved in portfolio appraisal, given the service pressures and other interests and responsibilities that placed demands upon their time. Second, and related to this, educational activities have traditionally been seen within medicine as coming a poor third behind clinical work and research. It has been argued that there is a competing triad of education, research and service within medicine, with medical education coming last (Elston 1991, Bligh 2001).
It needs to be remembered that trainees’ clinical placements are first and foremost working environments, and a doctor’s priority remains the care of their patients (Lowry 1993, Seabrook 2004). Additionally, clinical research has traditionally taken precedence over educational activities because it offers opportunities for prestige and career enhancement (Leinster 2003). This leads into the third issue. The relegation of education behind service and research commitments has meant that doctors receive little formal preparation and training in their supervisory role, or in general teaching and assessment techniques (Bligh 2001, Irvine 2003). Only six interviewees said they had completed one or more training courses run by the Royal Colleges on teaching and learning in medicine. None had participated in a formal training programme in performance appraisal and the role of appraiser. None possessed a recognised educational qualification. Furthermore, none had received formal training from medical elites in the portfolio systems they were meant to be using to assess trainees. This point is illustrated by the following interview extract from Dr Brown (General Practitioner):

‘No there was no training programme or general introduction to the portfolio. Obviously I did receive information about what changes were going to be introduced… (and) I think that is all I needed really’.

Doctors were expected to supervise and assess trainees using a portfolio system without receiving any formal training; however, they were not concerned by this. This general lack of formal training in educational theory and practice reflects a cultural tendency within the medical club to equate clinical expertise with the ability to teach (Lowry 1993). Recent reforms in medical training have seen the introduction of new educational methods, such as problem based learning and portfolio learning, which bring with them certain procedural and technical requirements. It is arguable that these methods require that a doctor receive formal training in their implementation (Leinster 2003). This leads into the fourth issue. It is arguable that the successful implementation of new educational methods such as portfolio appraisal within medicine requires that clinical teachers possess a closer, more supportive and ‘managed’ relationship with training providers, such as medical schools, than they currently do (Bligh 2001). Yet, because of their current lack of contractual obligation to the medical school and postgraduate deanery, interviewees felt remote from both institutions; they did not feel they possessed a close and supportive relationship with either of them. Here Dr White (General Practitioner) reflected the general experience of interviewees when he said: ‘I don’t really have much contact with the medical school. When I do it mostly involves speaking to the administrative support staff…who I must say are good at their job and very efficient’. Indeed, Dr White’s comments go some way to explain why interviewees did not feel that their supervision of trainees was recognised and valued by the medical school or the postgraduate deanery.
As Dr Lilac (General Practitioner) notes: ‘No what we do as supervisors isn’t really valued I think…Despite your contribution you don’t as a clinical teacher get to have much of a say over what happens there. But I suppose they don’t have much say over what happens here either (laughs)’.

In summary, the structural relationship between clinical education sites and educational elites such as medical schools highlights four key issues. These are:

1. Lack of accountability of clinical supervisors to medical school and postgraduate deanery management, as well as a lack of adequate recognition and reward for educational duties via NHS employment contracts

2. Lack of priority for educational duties compared to service delivery needs and clinical research activities

3. Lack of formal training for clinical supervisors in educational theory and practice, including specific educational methods such as portfolio appraisal

4. Lack of consultation and inclusion, by medical schools and postgraduate deaneries, of clinical supervisors’ views concerning the organisation and delivery of undergraduate and postgraduate medical education

Taken together these four issues form the contextual backdrop against which interviewees adopted ‘paperwork compliance’ towards portfolio based performance appraisal. They reinforce the ‘ritual quality’ of rank and file doctors’ ‘bureaucratic accountability’ to medical elites. Due to recent challenges to the principle of medical self-regulation, medical elites involved in the regulation of medical education utilise transparent quality assurance mechanisms, such as ‘best evidenced’ performance standards, to enhance institutional and individual accountability. Nevertheless, the four issues highlighted by the structural factor show that recent educational reforms for trainees, specifically Tomorrow’s Doctors (GMC 1993, 2002, 2003) and Modernising Medical Careers (BMA 2005), appear to have failed to address key human resource issues, even though these have a powerful negating influence upon attempts to enhance an individual doctor’s accountability for their educational activities beyond a superficial level.
