TY - JOUR
AU - Quick, Oliver
AB - I. INTRODUCTION

Ever since its emergence as a unitary and respected profession in the second half of the 19th century, the medical profession has endured criticism. This has ranged from sceptical blasts against a ‘conspiracy against the laity'1 to more moderate misgivings about monopolisation2 and the inadequacies of self-regulation.3 Although it is now almost 30 years since Ivan Illich's warnings about medical harm,4 a series of recent disasters have acted as powerful and graphic reminders. The sad and shocking stories of high mortality rates following paediatric heart surgery,5 the retention of human tissue without consent6 and the mass murders perpetrated by Harold Shipman7 have amply demonstrated different guises of possible medical harm. This article focuses on unintentional harm and medical error. Public knowledge about medical error has increased markedly in recent times. Crucially, errors are no longer closeted in the private professional domain. This article explains the outing of medical error and considers its implications for the medical profession. Concern about errors and public safety falls within the broader themes of risk and trust in experts and expert systems. Risk is a multifaceted concept with obvious relevance to the medical setting. However, the exposure of errors to a critical public audience presents problems for the professional control of risk. Risk is closely related to the notion of trust: after all, to trust is to risk.8 Compared with the much theorised notion of risk, trust has escaped the same level of sustained scrutiny. Trust is integral to both medicine and the professional dominance thesis, which remains the most widely accepted way of conceptualising the medical profession.9 This article considers the relationship between awareness about errors and the notion of trust; in particular, the capacity of errors to erode trust and undermine the professional dominance model. There has been a paradigm shift in thinking about errors. In the aftermath of disasters, the lens of responsibility is being refocused away from people and towards (work) places. Institutions not individuals, processes rather than persons, are becoming the focus of investigation. The search for scapegoats is beginning to look crass and ineffective. This is reflected in the formal responses to these events, such as public inquiries, which now routinely focus on system responsibility. Whilst systems analysis has obvious merits, it also raises important and unresolved questions. In particular, what are the risks of this shift towards systems thinking? What are the implications for individual professional responsibility? Will the commitment to systems responsibility be meaningful in practice? First, however, we must sketch the contention and connotation of different descriptions of error episodes and appreciate the true toll of the error problem in medicine.

II. ERRORS

A. Language and Meaning

Errors are, as Everett C. Hughes explained, a predictable property of work.
In every occupation there is a calculus of the probability of making mistakes, and a certain amount of error remains normal and routine.10

However, despite their inevitability, Paget was able to remark in 1988 that ‘Mistakes are a curiously neglected phenomena of study'11 and in 1999, Vaughan reminded us that ‘the sociology of mistake is in its infancy'.12 Nevertheless, public concern about the costs of human error manifested in a series of disasters has, in part, provided a renewed impetus in the field of human errors.13 This reawakened interest in the study of mistakes is also evident in medicine14 and has been intensified by a number of high-profile medical disasters, such as the Bristol heart surgery affair.15 Error is an essentially contested concept, particularly in complex and uncertain fields such as medicine. Much lies between the two extremes of blame-free accident and deliberate harm, and this is reflected by the myriad terms competing to describe the phenomenon of error in medicine: accidents, mishaps, mistakes, errors, negligence, failures, incompetence, misconduct, malpractice, deficient or substandard care, adverse or untoward events and the concept of iatrogenic harm all appear in the literature. In addition, particularly serious incidents may warrant the label disaster.16 The different meanings and connotations of these various terms are not without significance. ‘Accident' conveys a rather neutral, blame-free meaning, unattached to notions of responsibility and liability. The traditional conception of a pure accident is an unmotivated unforeseen event, distant from wilful damage and neglect.17 Accidents are regarded as matters of ‘fate' which excuse participants from censure. With greater knowledge about the history of such events, the notion of an innocent unpredictable ‘accident' is increasingly being rejected. Reflecting this, the term has been abandoned in other settings, such as traffic safety,18 and the British Medical Journal has gone as far as banning it from its pages.19 Adverse events are described as injuries resulting from medical intervention, not the underlying condition of the patient. According to this definition, they are unexpected yet avoidable. The terms mistake and error imply that there has been some unsound or inappropriate behaviour. In general usage, mistake or error signals a wrong act, a blunder or a botch-up caused by some failure or inattention and, thus, conveys a negative judgemental meaning. Similarly, negligence carries with it ‘at least an innuendo of moral blame'.20 Clearly, certain errors may be classified as negligent.21 The labelling of such episodes as accidents or errors is more than a semantic quibble. The use of particular language and the accompanying different meanings and connotations may contribute to the moulding of particular understandings and responses. As Wells explains in the context of corporate risk taking, ‘If they are called accidents then they are less likely to be seen as potentially unlawful homicides; if they were seen as potentially unlawful homicides they would be less likely to be called accidents.'22 This is equally apposite for incidents of medical mistakes. Arguably, the term mishap falls somewhere between blame-free accidents and blameworthy errors. Neither innocuous nor inflammatory, perhaps this term is the most appealing to both public and profession. This article refers to error as a generic term encompassing all failures to achieve intended outcomes that cannot be explained by chance.23
B. Facts and Figures: The True Toll of Error

A number of US studies first suggested that the true toll of medical error was substantial. In 1964, Schimmel24 found that 20 per cent of patients admitted to a university hospital medical service suffered iatrogenic illness and in 1981 Steel et al.25 found a higher figure of 36 per cent. Larger-scale population studies provided a better assessment of the nature and frequency of error. The first of these, the California Medical Association's Medical Insurance Feasibility Study, estimated that 4.7 per cent of hospital admissions led to injury, with 0.8 per cent being due to negligence.26 The Harvard Medical Practice Study, by far the most comprehensive and sophisticated study of the incidence of adverse events and negligence to date, found similar results.27 The study analysed a random sample of over 31,000 patient records from 51 New York State hospitals in 1984. It identified 1,133 (3.7 per cent) adverse events caused by medical treatment. The authors calculated that of the 2,671,863 discharged patients from acute-care hospitals in New York State in 1984, there were 98,609 adverse events. In 27.6 per cent of these adverse events the injury was due to negligence. The study thus concluded that there was a substantial amount of injury to patients from medical mismanagement and that many of these injuries resulted from substandard care. Using figures from the Harvard study, Leape calculated that if the aviation industry had as many fatalities as the hospital sector in the US, this would equate to three jumbo-jet crashes every two days.28 According to US estimates, a person faces a one in 200 risk of error in hospital compared with one in two million in an airplane.29 It may surprise many that medical error accounts annually for more deaths than motor vehicle accidents, breast cancer or AIDS.30 In short, it represents a serious public health problem. Until recently, only one major UK study had investigated the incidence of medical errors.31 The Brighton study reviewed over 15,000 patient admissions to a hospital between 1990 and 1992 in 12 specialities. Twenty-one per cent of incidents subject to peer review were judged to be avoidable. In 2000, a report of an expert group chaired by the Chief Medical Officer concluded that NHS reporting and information systems only provided a ‘patchy and incomplete picture of the scale and nature of the problem of serious failures in health care'.32 The report drew on a recent small-scale pilot study of adverse events in hospitalised patients in the UK, in which 10.8 per cent of patients experienced an adverse event, with half of these judged to be preventable with ordinary standards of care. Extrapolated to the NHS in England, based on the 8.5 million in-patient episodes per annum, it is estimated that there are 850,000 adverse events each year in hospitals, with 70,000 being fatal. The financial cost in terms of additional bed-days is calculated at up to two billion pounds.33 A more recent survey conducted by the National Audit Office suggests that there were about 980,000 reported incidents and near misses in 2004–5, with around half of these classified as avoidable.34 The fact that not all incidents lead to adverse outcomes, and that healthcare professionals have been reluctant to report each other's mistakes, suggests that the true toll of medical error is even higher.
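The arithmetic behind these headline figures is worth making explicit. The following reconstruction is illustrative rather than taken from the studies themselves; in particular, the round 10 per cent rate in the second line is an assumption inferred from the 850,000 total, since applying the pilot study's 10.8 per cent rate to 8.5 million episodes would yield roughly 918,000 events:

\[
2{,}671{,}863 \times 0.037 \approx 98{,}900 \quad \text{(cf. the Harvard study's weighted estimate of 98,609 adverse events)}
\]
\[
8{,}500{,}000 \times 0.10 = 850{,}000 \quad \text{adverse events per annum in English hospitals}
\]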
C. Skeletons in the Closet

Despite their inevitability and ubiquity, information about error has been slow to emerge. Compared with other industries, error in medicine has remained largely under-researched. Clearly, the ways in which such events occur partly explain this. Isolated incidents occurring one at a time in different locations, compared with, for instance, a huge jumbo jet crash, are less apparent, easier to conceal and less susceptible to scrutiny. A persistently incompetent physician who quietly and unintentionally harms individual patients is harder to pin down than the errant pilot, train driver or ship's captain. As Bogner explains, in ‘medical care, sick people are made sicker or die from error, one person at a time. These singular deaths are not newsworthy and hence they do not evoke public concern'.35 There is, as Hilfiker explains, ‘an injury here, a mistake there, an accident here, a death here'.36 However, the isolated, sporadic and ‘hidden' nature of medical error only partly explains the lack of data and discussion. Experts have preferred to closet errors, fearing loss of trust and status should they ‘come out'. The traditional culture of medicine has been resistant to confronting error, with doctors being schooled in the unrealistic ideal of error-free practice.37 In 1976, Gorovitz and MacIntyre remarked that ‘No species of fallibility is more important or less understood than fallibility in medical practice' and regretted the negative effects of the ‘old ethics' of concealing error.38 Encouraging the notion of infallibility is intellectually dishonest, hypocritical and prevents lessons being learned. Even those adopting a sympathetic stance towards the profession identify it as the ‘foremost culprit in perpetuating the myth of professional infallibility'.39 Through dealing with error on a daily basis, doctors have developed a language and culture that normalises mistakes. Within the professional collegium, mistakes have been neutralised, rationalised and even sometimes trivialised. While all professions construct ‘vocabularies of realism',40 this is particularly well developed in medicine where risk and fatality are normal. This normalisation and even denial of mistakes has been demonstrated by a number of studies by American sociologists, notably those of Eliot Freidson,41 Charles Bosk,42 Marcia Millman43 and Marianne Paget.44 Marilynn Rosenthal's study has demonstrated that these findings are equally apposite to the UK.45 However, these studies were conducted some years ago and predated significant changes to the regulation of healthcare quality in both the USA and UK, thus raising the possibility that the current position might be different. For example, the introduction of ‘clinical governance' into the NHS as a way of ensuring corporate accountability for clinical as well as financial performance may have made a difference.46 In 1999, NHS Trusts were placed under a statutory duty of quality, which entails evaluating internal risk management and audit mechanisms as well as accounting externally to the Commission for Health Improvement.47 In seeking to manage medical work, clinical governance challenges clinical autonomy and discretion.48 Further studies will need to investigate whether such reforms have contributed to any cultural change in the medical profession, in particular the way that it deals with error. The exposure of errors reveals the character of medical work and further challenges the profession by undermining public trust.
Closeting errors is thus important in sustaining professional dominance. As Hilfiker explains, admitting and apologising for a mistake ‘simply doesn't fit into the physician–patient relationship'.49 Public conceptions of doctors have also contributed to this unrealistic pursuit of human perfectibility; after all, we prefer to envisage an error-free rather than error-prone physician. However, as Everett C. Hughes well explained, whilst on one level we like to believe in the perfection of our doctors, we are generally quick to blame them for their mistakes, whether real or supposed. This explains Hughes' contention that ‘we actually hire people to make our mistakes for us'.50

D. Coming Out

However, medical errors no longer remain, as Freidson observed in 1970, ‘a form of private property' of the physician.51 In short, errors have been outed. This outing has occurred during medicine's evolution from cosy private relationships to a diverse multi-disciplinary public health project.52 It has been a US-led process, with a group of practitioners confronting the traditional professional silence on the topic by openly recounting their own errors.53 Importantly, this opening up has started to reach beyond an internal professional audience, with a number of accessible books written by doctors prepared to publicise their error (and horror) stories.54 In 1999, the US Institute of Medicine published a landmark report estimating that medical errors claimed the lives of at least 44,000, and perhaps as many as 98,000, Americans each year. In calling for a safer health system, it recommended that the US Congress create a Center for Patient Safety, which would set goals for safety and evaluate methods for identifying and preventing errors.55 It also demanded a nationwide mandatory system of reporting medical errors and encouraged the further development of voluntary reporting initiatives.56 Significantly, the then President of the United States, Bill Clinton, endorsed plans for such a system.57 Medical errors had arrived on the political agenda. Developments in the UK have largely followed this US model. In 1990, the editor of the British Medical Journal called for the study of the epidemiology of malpractice,58 although only progressive elements of high-risk specialisms, such as anaesthesia, surgery and obstetrics, took heed of this advice. Beyond this, the British scene has largely been reliant on the writings of two psychologists, James Reason59 and Charles Vincent.60 A number of key events reflected the increased attention to the problem of error. In 1999, following its inquiry into adverse events, the House of Commons Select Committee on Health called for a central, nationwide database of adverse incidents.61 In July 2000, an NHS Summit took place at 10 Downing Street, with medical errors on the agenda alongside waiting lists and resources.62 Also in 2000, the publication by the Department of Health of ‘An Organisation with a Memory' (OWM) represented the most significant development in this commitment to tackling medical error. Poor information systems and an unquestioning culture were identified as major problems. This was followed by a conference organised by the British Medical Journal, leading to a special theme edition edited by Lucian Leape and Donald Berwick, two leading lights in the study of error.
Although noting the acceleration of the error prevention ‘movement', they remarked on the need for, and the difficulties of, creating both a culture of safety and a culture whereby it is safe to admit error.63 It is worth remembering that these developments took place during the fallout from one of the biggest disasters in the history of the medical profession: the Bristol heart surgery scandal. Whilst a variety of negative episodes have cluttered the recent history of the medical profession, the sad story of unusually high mortality rates of two surgeons performing paediatric heart surgery at the Bristol Royal Infirmary merits special attention. Protracted and high-profile disciplinary hearings at the General Medical Council were followed by a full-scale public inquiry chaired by Professor Ian Kennedy.64 The intensity and hostility of the media coverage was unprecedented, with 184 published items in the five-week period around the GMC's decision, focusing on the crisis of trust.65 Bristol was undoubtedly a watershed in the history of the medical profession. Indeed, the editor of the British Medical Journal reflected, echoing the words of W.B. Yeats, that everything had now ‘changed, changed utterly'.66 One of the most important changes was the profile that it gave to the subject of medical error. Closeting errors became very difficult in the wake of Bristol. However, whilst Bristol was the catalyst, the seeds for regulatory reform were sown before Bristol came to public attention. Notably, the GMC in 1985 belatedly recognised that medical error could be included within its catch-all charge of serious professional misconduct.67 Error and performance were formally recognised by the Medical (Professional Performance) Act 1995, which extended the GMC's jurisdiction into the territory of ‘Seriously Deficient Performance'.68 This shift in focus was further reflected in an amendment to section 1 of the Medical Act 1983, which now states that ‘The main objective of the GMC in exercising their functions is to protect, promote and maintain the health and safety of the public.' Following the recommendations of the Kennedy inquiry, the GMC has continued the process of reform with revised fitness to practise procedures separating the tasks of investigation and adjudication, as well as allowing for the Council to issue a ‘warning' in cases which fall below the threshold of misconduct.69 However, given the scale of the problem of error coupled with the perception of a failing GMC, reforming self-regulation was never going to be sufficient and, thus, it has been accompanied by the creation of new agencies specifically designed to improve patient safety. In 2001, the National Patient Safety Agency (NPSA) was created to organise the nationwide collection and collation of reports, through a National Reporting and Learning System, with the aim of initiating preventative measures to improve safety. It seeks to encourage an open culture of reporting incidents and ‘near misses', with a focus on the ‘how' rather than the ‘who'.70 In 2002, the Council for Healthcare Regulatory Excellence (CHRE) was created as an overarching regulator to monitor the performance of individual healthcare regulators.71 It is equipped with significant powers, notably under section 27 to direct a regulator to make changes to its rules and under section 29 to refer ‘unduly lenient' decisions of regulatory bodies to the High Court; the latter justifying double jeopardy in the name of patient protection.
Perhaps most significantly, the monitoring of quality in the NHS begun by the Commission for Health Improvement in 1998 has been taken over by the Healthcare Commission, with a brief to independently assess the performance of health services from a patient's perspective.72 That this is a somewhat confusing collection of overlapping bodies has been acknowledged by the Department of Health in its review of arm's length bodies,73 although the possibility of further reform was signalled by the Chief Medical Officer's decision to review patient safety mechanisms following the critical reports of the Shipman Inquiry.74 In particular, momentum for reforming the system of clinical negligence with an alternative to tort has gathered pace,75 culminating in the NHS Redress Bill 2005. The Bill establishes an NHS Redress Scheme which will enable the settlement of certain low-value claims arising after adverse incidents without the need for court proceedings.76 It remains to be seen whether these significant reforms will satisfy the demands for accountability and openness.

III. ERRORS, RISK AND TRUST

Risk is a much theorised and multifaceted concept. Recent writing on risk has taken its lead from the analysis of Beck and Giddens, which argues that we live in a ‘risk society'.77 Risk is about external danger and, as such, is by no means a new concept. However, according to Beck it requires radical rethinking in a changing world. Complex modern society, with all its advances in science and technology, manufactures additional risk. Furthermore, ‘manufactured risk' is becoming increasingly imperceptible and has relevance on a global stage. Crucially, as Dingwall explains, ‘In a risk society, everyone is a potential victim … [and] the basis of solidarity shifts from need to anxiety.'78 Healthcare is an area replete with risk. Again, whilst there is nothing new about saying this, there has been increasing debate about the communication of risks. Knowledge about the risk of medical harm is an important aspect of this. Increased information about error, particularly the public outing of error, creates problems for a profession seeking to manage and confine risk ‘in house'. With more information to hand, we have become increasingly anxious about the relative risks associated with medical work. This raises the question: what do we mean by medical risk? Risk operates at a number of different and overlapping levels in the medical setting. It is possible to adopt a two-fold classification of risk into medical risk and professional risk. Medical risk refers to risks to patients and is meaningful at a number of levels: treatment risk; individual practitioner risk; institutional provider risk; and patient risk (what is professionally termed ‘case-mix', i.e. the fact that some patients are sicker than others and less likely to benefit from treatment). Essentially, this is concerned with the risk of ‘things going wrong'. Professional risk refers to the risk of complaints and litigation and the consequent damage to trust at both individual and institutional levels. The task of interpreting, assessing and communicating risk is complicated and contested.
Risk is not a static concept, and notions of risk are constantly being generated, filtered and amplified.79 The medical setting is perhaps one of the most obvious examples of this confusion about risk, with uncertainty often the only certainty80 in what is an ‘error-ridden' activity.81 Risk is also related to the notion of trust and it is to this important concept that I now turn. Trust is the basis of social interaction and is necessary for daily life. As John Locke explained, we ‘live upon trust'.82 It pervades many and diverse situations, ‘from marriage to economic development, from buying a second-hand car to international affairs, from the minutiae of social life to the continuation of life on earth'.83 Trust is, as Baier reminds us, a ‘phenomenon we are so familiar with that we scarcely notice its presence and its variety'.84 Although the centrality of trust to different forms of interaction has drawn the attention of a wide-ranging academic audience, principally psychologists, sociologists and economists, it has largely escaped sustained analysis. As Gambetta remarked in 1988, ‘in the social sciences the importance of trust is often acknowledged but seldom examined'.85 Baier has also noted the ‘strange silence on the topic in the tradition of moral philosophy'.86 More recently, however, it has emerged as a topic in its own right and one of significance in a society increasingly aware of risk.87 Trust is an elusive and slippery term which has proved difficult to pin down. Broadly speaking, psychologists define trust in terms of personality traits, while sociologists and anthropologists perceive trust as culturally determined—by place and through time. Summarising from an important collection of essays, Gambetta defines trust as the ‘subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action … and in a context in which it affects his own action'.88 For Giddens, trust is related to absence in time and space, that is, there would be no need to trust if activities were always visible. He defines trust as ‘confidence in the reliability of a person or system, regarding a given set of outcomes or events, where that confidence expresses a faith in the probity or love of another, or in the correctness of abstract principles'.89 Again, broadly speaking, trust involves expectations based on common values. To trust is thus to let go, to be vulnerable and deferent to another's competence and responsibility. The notion of trust is particularly important in medical settings. Permitting others to see, touch and treat our bodies, and to share in our secrets, involves the investment of a large amount of trust. Trust in professionals is based on technical competence, gained through long rigorous training, the mastery of specialised knowledge and skill, and the dedication to act responsibly in the patient's interests. Trust is related to appearances, experience, evidence and reputation. The traditional lack of (external) regulation was a mark of being trusted. However, the significance of trust extends beyond its role in the millions of everyday doctor–patient interactions. It represents an essential feature of professionalism. The growth of public trust in the profession allowed the profession to define the scope of medical work, increase its power and legitimate its autonomy.
In particular, the professional dominance thesis so fiercely defended by Eliot Freidson is reliant on the survival of this trust.90 Diminishing trust thus alters the nature of professionalism and gives credence to arguments predicting the decline of professional autonomy.91

A. Trust and Information

Trust is necessary in asymmetrical relationships with all their imbalances of knowledge and power. It is, thus, largely reliant on ignorance and uncertainty, and may be protected by secrecy. The stranglehold that doctors traditionally enjoyed over medical knowledge and information about the quality of healthcare has been significant in sustaining trust. Control over the communication of risks also allows the profession to maintain trust. Giddens refers to the notion of ‘acceptable' risk as sustaining trust and notes the example of the air industry demonstrating how statistically low the risks of air travel are.92 Similarly, medical professionals are in a position to sustain trust by not fully unpacking the meaning of risk, in the sense of presenting general as opposed to personalised risk information.93 Even when things go wrong, the professional is well placed to retain trust by guarding, filtering and even concealing information about error, although it is obviously difficult to determine the extent of this. The ‘facework commitments' that Giddens argues are necessary for trust in persons are reliant on the demeanour of system representatives: in short, displays of trustworthiness, unflappability and reassurance.94 Relying on Goffman's concepts of ‘frontstage' and ‘backstage' performances, Giddens notes how controlling the threshold between the two is part of the essence of professionalism. Essentially, the profession has sought to keep mistakes backstage through the demeanour of infallibility and ‘business-as-usual'. As Giddens explains, ‘Patients are not likely to trust medical staff so implicitly if they have full knowledge of the mistakes which are made in the wards and on the operating table'.95 Admitting error ‘frontstage' to a wide audience of critics attracts negative attention and damages the precious commodity of trust. Widespread lay knowledge of risks and errors therefore leads to an awareness of the limits of expertise and represents a ‘public relations' problem for those seeking to maintain trust in expert systems.96 Whilst we have relied on the competence of practitioners, we have been largely bereft of information on outcomes and performance to establish this, relying somewhat on the clichéd and inadequate assertion, ‘trust me, I'm a doctor'. In this state of poor information, trust has largely been unchallenged and unconditional. However, this is changing. The availability of information about error is opening up previously guarded secrets to a public audience. The myth of universal competence, colloquialised in the expression ‘they are all the same', is exposed by such mechanisms. With access to information about performance, there is no longer such uncertainty and ignorance, and the conditions for unquestioning trust begin to fade away. Symbolic constructions inform our perceptions of whom we ought to trust, and we tend, as Baier notes, to trust based on uniforms, badges and professional certificates.97 The medical profession has been particularly adept at constructing and sustaining the image of the all-knowing and infallible practitioner who should be trusted.
The costume of the doctor-scientist, the white coat and stethoscope, conveys expertise and authority, and encourages deference. However, this is challenged by knowledge about the reality of medical error. The post-war Welfare State has been characterised by a list of demons, such as ‘scroungers', ‘single parents', ‘inadequate mothers'.98 Now in the medical setting, critical media coverage presents us with the ‘blunderer' and the ‘butcher'. Whilst measuring media influence is problematic, it is difficult to deny any significance to the unprecedented coverage that constantly questions the extent to which we do, and ought to, trust. Certainly, further research is required to investigate the influential role of the media in shaping medico-legal debates.99 In her 2002 Reith lectures, Onora O'Neill remarked that discussion of declining trust is something of a cliché of our times.100 Whilst some deny the evidence for this loss of trust, proving public distrust may also be something of an elusive search. Although empirical data from opinion polls suggest that trust in doctors remains high, public awareness of medical error and the complaints, litigation, prosecution, audit and inquiries that may follow contributes to the climate of distrust.101 However, to talk simply in terms of a decline of trust is to oversimplify the changes taking place in the dynamic between the individual patient, the State and the medical profession. We are unable to conclude with certainty that trust has eroded, but then perhaps we do not even need to do so given the prevailing perception that trust has been damaged. More realistically, it is the basis and nature of trust that has changed. Professionals are increasingly required to establish competence and integrity through audit results, performance indicators and low complication rates. We appear to be moving from a state of unconditional to conditional trust. In a climate of increasing scrutiny, trust is no longer assumed and must be earned. At the very least, as Salter points out, ‘Public trust in doctors is no longer a cultural given'.102

IV. UNDERSTANDING ERRORS

A. Cause and Responsibility

The literature on medical error has tended to revolve around the question of how errors occur. The strongest theme to emerge is the tension between individual and organisational approaches to interpreting error. The former approach isolates individuals and pursues a strategy of blame and punishment. Operators at the sharp end, so-called ‘bad apples', are held to account through a range of legal and disciplinary responses.103 Legal responses to medical error largely focus on individual responsibility. In the majority of clinical negligence claims arising out of a hospital setting, although proceedings are formally brought against Trusts on the basis of their vicarious liability, it is the conduct of one or more individuals that is under scrutiny. Whilst NHS institutions may be directly liable, for example, for failing to ensure a safe working environment or an adequate system of communication, this is rarely the central issue.104 Criminal law, in the form of manslaughter prosecutions following fatal error, also focuses on individuals.105 Corporate manslaughter charges, in this instance against hospital Trusts, remain a theoretical, as opposed to a practical, possibility.106 Inquiry reports into failures in the NHS have also traditionally been based on the model of individual blame.
A passage from the Kennedy inquiry into the Bristol heart babies' affair puts it aptly:

A serious failure occurs [in] the NHS. An Inquiry is set up. Months later a report is published. Almost always, the report singles out an individual or group who are held responsible. The individual is condemned. The NHS proceeds on its way, assuming that the matter is resolved: until the next failure.107

The narrow focus of these mechanisms is particularly disappointing given advancements in understanding human action, what Norman has termed the ‘psychology of everyday things'.108 Psychologists have long examined the way that human beings function in complex socio-technical organisations, and the cognitive processes that lie behind human successes and failures. Thus, we have Charles Perrow's ‘normal accident'109 and the ‘resident pathogen' metaphor advanced by James Reason.110 Many have questioned where this unquenched (perhaps unquenchable) thirst for accountability is taking us, with Onora O'Neill one of many arguing that we are prescribing the wrong type of accountability and that we should look at systems as well as individuals.111

B. Changing Climate: Blame it on the System

The model of individual blame is challenged by a systems analysis approach. This approach is based on the inevitability of human errors and the claim that all systems have a number of latent failures, generally management and communication problems. The more complex, interactive and opaque the system, the greater the number of latent errors it is likely to contain.112 Two main arguments are advanced in favour of this approach: one practical and one philosophical. On a practical level, the individualisation of blame suffocates error and prevents learning and safety enhancements. Honesty and openness are hard to find in a culture which seeks culprits and scapegoats. The ‘every error is a treasure' approach is, thus, strategically important in terms of reducing errors.113 Philosophically, it is morally unacceptable to pin blame solely on individuals, and artificial to isolate them from their wider working environment and culture. A more humane approach looks beyond alleged individual failings to explore the broader context. Systems analysis is becoming increasingly fashionable as a way of understanding disasters in settings as diverse as transport, healthcare, agriculture and criminal justice. Inquiries now routinely broaden the lens to focus on organisations and their cultures and modes of communication. Perhaps the beginning of this trend can be traced to the official inquiry into the Herald of Free Enterprise disaster, where Sheen J. found ‘cardinal faults … higher up the company'.114 The systems analysis theme has been central in inquiry reports into the decade of disasters that followed. Inquiries into rail disasters (the Southall and Ladbroke Grove115 and Clapham inquiries116), the King's Cross fire,117 Piper Alpha118 and BSE119 have all contextualised the events under review, focusing on failing systems as opposed to flawed individuals. The two inquiries following the fallout from the decision to go to war with Iraq have also focused on systems.
The Hutton inquiry into the events surrounding the death of Dr David Kelly looked at the system of government and the editorial system at the BBC.120 Most notably, the Butler report into the use of intelligence appears to eschew blame altogether.121 Inquiries following high-profile murder cases have focused on ‘institutional racism'122 and ‘data management' problems in the police force and wider issues of criminal justice.123 The Kennedy inquiry into Bristol began by stating that ‘This is not an account about bad people',124 whilst even the first report into an extreme case of individual fault, that of Harold Shipman, focused on the flawed systems which failed to prevent mass murder.125 Systems analysis is sometimes presented as a panacea for the problem of error. Given the organisational context within which errors occur, holding institutions responsible seems entirely appropriate. However, there are a number of unresolved questions which reflect confusion about the appropriate use of trust and responsibility here. The concept of trusting or distrusting healthcare organisations is problematic given that trust is an emotional concept more readily relevant to persons. Furthermore, the notion of trusting organisations is made even harder without adequate means of assessing their performance and, thus, holding them responsible. Mechanisms for holding healthcare organisations responsible for medical errors, and the broader issue of the quality of healthcare, have been slow to emerge, although there are signs of change. In relation to clinical negligence, the emergence of risk management initiatives offers a good example. The NHS Litigation Authority administers the Clinical Negligence Scheme for Trusts (CNST), which indemnifies NHS bodies in respect of clinical negligence and manages claims and litigation.126 The scheme sets out risk management standards for ensuring healthcare quality against which all Trusts are assessed for compliance. NHS Trusts have an economic incentive to develop systems which comply with these standards in order to be eligible for discounted contributions to the scheme.127 The aim is to create ‘organisations with memories' which are able to learn from errors and reduce their future occurrence. In terms of accountability, compliance with these standards has been monitored by the Commission for Health Improvement, now the Healthcare Commission. The Commission, under the direction of Sir Ian Kennedy, is designed to focus on institutions rather than individuals, largely through conducting an ‘annual health check' of all NHS organisations and publicising annual performance ratings.128 However, whilst organisations may be assessed against risk management standards and given performance ratings, it remains to be seen whether such ‘results' end up being traced back to the individual level. For example, organisations may decide to respond to poor ratings by isolating so-called problem individuals and deploying internal disciplinary procedures, thus applying the mindset of individual blame which these reforms have sought to challenge. Statistics evidencing the increasing number of suspended hospital doctors suggest that this reflects the current situation.129 In this way, it is possible that the model of individual blame will continue to be played out, albeit within the façade of a commitment to systems analysis and organisational responsibility.
Research is required into the effect of these reforms ‘on the ground' to investigate whether the commitment to system responsibility represents rhetoric or reality. The focus on systems also risks diluting the notion of individual professional responsibility that has been central to medical autonomy and accountability. As Dingwall reminds us, responsibility for risks becomes more elusive in modern society: ‘the interdependence of productive forces characteristic of modern societies dissolves personal responsibility into that of a diffuse "system"'.130 Beck is also critical of the consequences of systems thinking, arguing that:

causes dribble away into a general amalgam of agents and conditions, reactions and counter-reactions, which brings social certainty and popularity to the concept of system. This reveals in exemplary fashion the ethical significance of the system concept: one can do something and continue doing it without having to take personal responsibility for it.131

In the medical context, if blaming the system becomes the default response, to what extent will this shelter the incompetent or poor performer? Sir Donald Irvine, president of the GMC during the turbulent times of the Bristol affair, warned against this over-emphasis on the system which may mask individual failings.132 A recent example of this followed a surgeon's conviction for manslaughter, with the judge remarking that:

It was not your fault that you were allowed to go on operating, subject to restrictions, for another two years. Much of the evidence of these events was known at the time and the balance of the evidence was easily discoverable had it occurred to anyone making elementary inquiries.133

As comments such as this become a more common reaction to error, it is worth questioning whether this drift towards blaming others and organisations risks underplaying the ethics of individual conscience.

V. CONCLUSION

Medical error is an important issue of emotional, financial, health and political significance. Nevertheless, it has historically remained closeted within the private professional domain. Although medicine's potential for harm has never been secret, public debate about the problem of error has been somewhat muted. However, a series of high-profile disasters, particularly the Bristol heart surgery affair, amplified voices of discontent. This article has traced the public outing of error and considered its implications for the medical profession. Concerns about errors and public safety fall within the broader themes of risk and trust in experts and expert systems. Exposure of errors to a critical public audience presents problems for the professional control of risk and, in particular, the risk of diminishing trust. Heightened awareness of errors and their many harms potentially undermines trust in medicine and challenges the professional dominance thesis. As our appreciation of the size of the problem has increased, so too has our understanding of the causes of error and the appropriate ways of responding. The traditional model of individual blame is increasingly challenged by a systems analysis. Scholars have long advocated the merits of broadening the investigatory lens beyond the narrow focus on individuals towards the more complex workings of institutions, environments and cultures. Encouragingly, the preference for this more mature and sophisticated approach is also reflected in the remit and recommendations formulated by a number of inquiries held into a diversity of disasters during the last decade.
Whilst this is welcome, the systems analysis approach is sometimes presented as a panacea when there are important unresolved questions about responsibility and trust which merit further research. First, are we really witnessing the emergence of more collective forms of responsibility for medical errors? Does the apparent shift in focus from individual to institutional responsibility represent rhetoric or reality? Second, to what extent does this emphasis on the system risk diluting the notion of individual professional responsibility so central to professional dominance? Third, in what ways will we be able to trust or distrust organisations and hold them to account? Given that trust is an emotional concept best suited to people, what mechanisms will regulate our trust or distrust in organisations? And, finally, how far can the systems approach to medical errors be extended? Arguably, a broad approach to the exact meaning of ‘system' will lead to uncomfortable questions, not just for the medical profession, but for the government of the day. If the new agencies established to ensure patient safety fail to reduce the problem of error, or are unable to engender the desired sense of system responsibility, perhaps we should look at the bigger system within which they operate.

* Lecturer in Law, University of Bristol. With thanks to Dave Cowan, Michael Naughton, Ken Oliphant, Celia Wells and an anonymous reviewer for their helpful comments. 1 G.B. Shaw, Doctors' Delusions, Crude Criminology and Sham Education (Constable 1931) at 117–18. 2 J.L. Berlant, Profession and Monopoly: A Study of Medicine in the United States and Great Britain (University of California 1975). 3 M.M. Rosenthal, The Incompetent Doctor: Behind Closed Doors (Open University Press 1995); L. Mulcahy and J. Allsop, Regulating Medical Work: Formal and Informal Controls (Open University Press 1996). 4 I. Illich, Limits to Medicine: Medical Nemesis—The Expropriation of Health (Penguin 1977). 5 Kennedy (Chair), The Report of the Public Inquiry into Children's Heart Surgery at the Bristol Royal Infirmary 1984–1995: Learning from Bristol (Cmnd. 5207 (I) 2001). 6 See e.g. Redfern (Chair), The Royal Liverpool Children's Inquiry Report (HMSO 2001); Department of Health, Report of a Census of Organs and Tissues Retained by Pathology Services in England (Department of Health 2005). 7 See http://www.shipman-inquiry.org.uk/. 8 T.C. Earle and G.T. Cvetkovich, Social Trust: Toward a Cosmopolitan Society (Praeger 1995) at 107. 9 E. Freidson, Profession of Medicine: A Study of the Sociology of Applied Knowledge (Harper and Row 1970). 10 E.C. Hughes, Men and Their Work (Greenwood Press 1958). 11 M.A. Paget, The Unity of Mistakes: A Phenomenological Interpretation of Medical Work (Temple University Press 1988) at 59. 12 D. Vaughan, ‘The Dark Side of Organizations: Mistake, Misconduct, and Disaster' (1999) 25 Annual Review of Sociology 271 at 284. 13 J.T. Reason, Human Error (Cambridge University Press 1990). 14 C. Vincent et al., Medical Accidents (Oxford University Press 1993); M.S. Bogner, Human Error in Medicine (Lawrence Erlbaum Associates 1994); C.A. Vincent and P. Bark, Clinical Risk Management (BMJ Publishing Group 1995); M.M. Rosenthal, L. Mulcahy and S. Lloyd-Bostock (eds), Medical Mishaps: Pieces of the Puzzle (Open University Press 1999); V.A. Sharpe and A.I. Faden, Medical Harm: Historical, Conceptual and Ethical Dimensions of Iatrogenic Illness (Cambridge University Press 1998); A. Merry and A. McCall-Smith, Errors, Medicine and the Law (Cambridge University Press 2001).
15 See O. Quick, ‘Disaster at Bristol: Explanations and Implications of a Tragedy' (1999) 21 Journal of Social Welfare and Family Law 307. 16 B.A. Turner and N.F. Pidgeon, Man-Made Disasters, 2nd edn (Butterworth-Heinemann 1997) at 19. 17 J. Green, Risk and Misfortune: The Social Construction of Accidents (UCL Press 1997) at 2. 18 L. Evans, ‘Medical Accidents: No Such Thing?' (1993) 307 British Medical Journal 1438. 19 R. Davis and B. Pless, ‘BMJ Bans "Accidents"' (2001) 322 British Medical Journal 1320. 20 P. Devlin, Samples of Lawmaking (Oxford University Press 1962) at 100. 21 As Lord Edmund-Davies remarked in Whitehouse v. Jordan [1981] 1 All E.R. 267 at 276, ‘while some such errors may be completely consistent with the due exercise of professional skill, other acts or omissions in the course of exercising "clinical judgment" may be so glaringly below proper standards as to make a finding of negligence inevitable'. 22 C. Wells, Corporations and Criminal Responsibility (Oxford University Press 2001) at 12. 23 For further elaboration on error, see J.T. Reason, supra, n. 13 at 9. 24 E.M. Schimmel, ‘The Hazards of Hospitalisation' (1964) 60 Annals of Internal Medicine 100. 25 K. Steel et al., ‘Iatrogenic Illness on a General Medical Service at a University Hospital' (1981) 304 New England Journal of Medicine 638. 26 D.H. Mills, ‘Medical Insurance Feasibility Study' (1978) 128 Western Journal of Medicine 360. 27 T.A. Brennan et al., ‘Incidence of Adverse Events and Negligence in Hospitalised Patients: Results of the Harvard Medical Practice Study I' (1991) 324 New England Journal of Medicine 370. 28 L.L. Leape, ‘Error in Medicine' in M.M. Rosenthal, L. Mulcahy and S. Lloyd-Bostock (eds), supra, n. 14 at 20–38. 29 J.H. Tanne, ‘AMA Moves to Tackle Medical Errors' (1997) 315 British Medical Journal 967. 30 See L. Kohn, J. Corrigan and M. Donaldson (eds), To Err is Human: Building a Safer Health System (National Academy Press 1999) at 1. 31 K. Walshe and Y. Buttery, ‘Measuring the Impact of Audit and Quality Improvement Activities' (1995) 2 Journal of the Association for Quality in Healthcare 138. 32 Department of Health, An Organisation with a Memory: Report of an Expert Group on Learning from Adverse Events in the NHS, Chaired by the Chief Medical Officer (HMSO 2000) at vii. 33 See C. Vincent, G. Neale and M. Woloshynowych, ‘Adverse Events in British Hospitals: Preliminary Retrospective Record Review' (2001) 322 British Medical Journal 517. 34 National Audit Office, A Safer Place for Patients: Learning to Improve Patient Safety (National Audit Office 2005). 35 M.S. Bogner, ‘Introduction' in M.S. Bogner (ed.), supra, n. 14 at 2. 36 H. van Cott, ‘Human Errors: Their Causes and Reduction' in M.S. Bogner (ed.), supra, n. 14 at 56. 37 D. Hilfiker, ‘Facing our Mistakes' (1984) 310 New England Journal of Medicine 118. 38 S. Gorovitz and A. MacIntyre, ‘Toward a Theory of Medical Fallibility' (1976) 1 Journal of Medicine and Philosophy 51. 39 A. Merry and A. McCall-Smith, supra, n. 14 at 72. 40 J. Stelling and R. Bucher, ‘Vocabularies of Realism in Professional Socialization' (1973) 7 Social Science and Medicine 661. 41 E. Freidson, Doctoring Together: A Study of Professional Social Control (Elsevier 1975). 42 C.L. Bosk, Forgive and Remember: Managing Medical Failure (University of Chicago Press 1979). 43 M. Millman, The Unkindest Cut: Life in the Backrooms of Medicine (Morrow Quill 1977). 44 M.A. Paget, supra, n. 11.
45 M.M. Rosenthal, supra, n. 3. 46 Department of Health, A First Class Service: Quality in the New NHS (Department of Health 1998). 47 Health Act 1999, s. 18. Note that the work of the Commission for Health Improvement has been taken over by the Healthcare Commission: see section II D below. 48 See V. Harpwood, ‘The Manipulation of Medical Practice' in M. Freeman and A. Lewis (eds), Current Legal Issues: Law and Medicine (Oxford University Press 2000) at 47–66, and H.T.O. Davies and R. Mannion, ‘Clinical Governance: Striking a Balance between Checking and Trusting' in P.C. Smith (ed.), Reforming Markets in Healthcare—An Economic Perspective (Open University Press 1999). 49 D. Hilfiker, ‘Facing our Mistakes' (1984) 310 New England Journal of Medicine 118–122 at 121. 50 E.C. Hughes, supra, n. 10 at 91. 51 E. Freidson, supra, n. 9 at 184. 52 See R. Porter, Blood and Guts: A Short History of Medicine (Penguin 2003) at 159. 53 D. Hilfiker, supra, n. 49; L.L. Leape, ‘A Systems Analysis Approach to Medical Error' (1997) 3 Journal of Evaluation in Clinical Practice 213; D.M. Berwick, ‘Continuous Improvement as an Ideal in Health Care' (1989) 320 New England Journal of Medicine 53. 54 A. Gawande, Complications: A Surgeon's Notes on an Imperfect Science (Profile 2003); F. Huyler, The Blood of Strangers: True Stories from the Emergency Room (Fourth Estate 1999). 55 L. Kohn, J. Corrigan and M. Donaldson (eds), supra, n. 30 at 7. 56 Ibid. at 9. 57 F. Charatan, ‘Clinton Acts to Reduce Medical Mistakes' (2000) 320 British Medical Journal 597. 58 R. Smith, ‘The Epidemiology of Malpractice' (1990) 301 British Medical Journal 621. 59 J.T. Reason, supra, n. 13, and Managing the Risks of Organisational Accidents (Ashgate 1997). 60 C. Vincent et al., supra, n. 14, and ‘Risk, Safety and the Dark Side of Quality' (1997) 314 British Medical Journal 1775. 61 House of Commons Select Committee on Health—6th Report, Procedures Related to Adverse Clinical Incidents and Outcomes in Medical Care (1999). 62 L. Ward, ‘Blair Gets Tough with Doctors' The Guardian 5.6.00. 63 L. Leape and D. Berwick, ‘Safe Health Care: Are We Up To It?' (2000) 320 British Medical Journal 725. 64 Supra, n. 5. 65 H.T.O. Davies and A.V. Shields, ‘Public Trust and Accountability for Clinical Performance: Lessons from the National Press Reportage of the Bristol Hearing' (1999) 5 Journal of Evaluation in Clinical Practice 335. 66 R. Smith, ‘All Changed, Changed Utterly' (1998) 316 British Medical Journal 1917. 67 In 1985 it acknowledged that it was ‘concerned with errors in diagnosis or treatment and with the kind of matters which give rise to action in the Civil Courts for negligence, only when the doctor's conduct in the case has involved such a disregard of his professional responsibility to patients or such a neglect of his professional duties as to raise a question of professional misconduct', having previously explicitly excluded this from its remit. General Medical Council, Professional Conduct and Discipline: Fitness to Practise (General Medical Council 1985) 10, para. 38. 68 The Act came into force on 1 July 1997. The General Medical Council (Professional Performance) Rules Order of Council 1997 (S.I. 1997 No. 1529). 69 General Medical Council, ‘Making Fitness to Practise Fit for Practice' (General Medical Council 2004). 70 http://www.npsa.nhs.uk/web/display?contentId=2656. 71 Ss 25–9 National Health Service Reform and Health Care Professions Act 2002. See also http://www.chre.org.uk/.
It was initially called the Council for the Regulation of Health Care Professionals. 72 It was established under the Health and Social Care (Community Health and Standards) Act 2003 as the Commission for Healthcare Audit and Inspection (CHAI) and takes over the work of the Commission for Health Improvement (CHI). 73 Department of Health, ‘Reconfiguring the Department of Health's Arm's Length Bodies' (Department of Health 2004). 74 ‘CMO Review of Medical Revalidation: Call for Ideas' (Department of Health 2005). 75 Department of Health, ‘Making Amends: A Consultation Paper Setting Out Proposals for Reforming the Approach to Clinical Negligence in the NHS. A Report by the Chief Medical Officer' (Department of Health 2003). 76 The Bill was introduced into the House of Lords on 12 October 2005; http://www.publications.parliament.uk/pa/ld200506/ldbills/022/2006022.htm. 77 U. Beck, Risk Society: Towards a New Modernity (Sage 1992); A. Giddens, The Consequences of Modernity (Polity 1990). 78 R. Dingwall, ‘Risk Society: The Cult of Theory and the Millennium?' (1999) 33 Social Policy and Administration 474 at 478–9. 79 N. Pidgeon, R.E. Kasperson and P. Slovic (eds), The Social Amplification of Risk (Cambridge University Press 2003). 80 R. Fox, ‘Training for Uncertainty' in R.K. Merton, G.G. Reader and P.L. Kendall (eds), The Student Physician (Harvard University Press 1957) at 207–41. 81 M.A. Paget, supra, n. 11. 82 E.S. de Beer (ed.), The Correspondence of John Locke (Clarendon Press 1976) I, 123 (letter 81). 83 D. Gambetta (ed.), Trust: Making and Breaking Cooperative Relations (Blackwell 1988), ‘Foreword'. 84 A.C. Baier, Moral Prejudices: Essays on Ethics (Harvard University Press 1994) at 98. 85 D. Gambetta (ed.), supra, n. 83. 86 A.C. Baier, supra, n. 84 at 96. 87 T.C. Earle and G.T. Cvetkovich, Social Trust: Toward a Cosmopolitan Society (Praeger 1995); E.R. DuBose, The Illusion of Trust: Toward a Medical Theological Ethics in the Postmodern Age (Kluwer Academic 1995); F. Fukuyama, Trust: The Social Virtues and the Creation of Prosperity (Free Press 1995); A.B. Seligman, The Problem of Trust (Princeton University Press 1997); A. Coulson (ed.), Trust and Contracts: Relationships in Local Government, Health and Public Services (Policy Press 1998); O. O'Neill, A Question of Trust (2002 BBC Reith Lectures, Cambridge 2002); O. O'Neill, Autonomy and Trust in Bioethics (Cambridge University Press 2002); R. Hardin, Trust and Trustworthiness (Russell Sage Foundation 2004). 88 D. Gambetta, ‘Can we Trust Trust?' in D. Gambetta (ed.), supra, n. 83 at 213–37 (emphasis in original). 89 A. Giddens, supra, n. 77 at 34. 90 See E. Freidson, supra, n. 9. 91 M.R. Haug, ‘A Re-Examination of the Hypothesis of Physician Deprofessionalisation' (1988) 66 Milbank Quarterly 48. 92 A. Giddens, supra, n. 77 at 35. 93 The Bristol heart surgery affair is a graphic example of this, where the general risk associated with the procedure was much lower than the actual personalised risk relevant to the surgeons. 94 A. Giddens, supra, n. 77 at 85 (emphasis in original). 95 Ibid. at 86. 96 Ibid. at 130. 97 A.C. Baier, supra, n. 84 at 159. 98 J. Newman, ‘The Dynamics of Trust' in A. Coulson (ed.), supra, n. 87 at 38. 99 See M. Brazier, ‘Editorial: Times of Change?' (2005) 13 Med. L. Rev. 1. 100 O. O'Neill, supra, n. 87. 101 Although, arguably, such mechanisms for openly investigating errors are required to try and rebuild trust. 102 B. Salter, Medical Regulation and Public Trust: An International Review (King's Fund 2000) at 9.
103 D.M. Berwick, supra, n. 53. 104 Reported examples include Wilsher v. Essex A.H.A. [1986] 3 All E.R. 801, C.A.; [1988] 2 W.L.R. 557, H.L.; and Bull v. Devon Area Health Authority [1993] 4 Med L.R. 117. 105 R. Ferner, ‘Medication Errors that have Led to Manslaughter Charges' (2000) 321 British Medical Journal 1212. And see R v. Misra and Srivastava [2004] E.W.C.A. Crim. 2375. 106 M. Childs, ‘Medical Manslaughter and Corporate Liability' (1999) 19 Legal Studies 316. Plans for a new offence of corporate killing remain at draft stage. 107 Kennedy (Chair), supra, n. 5 at 259 para. 19. 108 D.A. Norman, The Psychology of Everyday Things (Basic Books 1988). 109 C. Perrow, Normal Accidents: Living with High Risk Technologies (Basic Books 1984). 110 J.T. Reason, supra, n. 13. 111 O. O'Neill, A Question of Trust, supra, n. 87. 112 J.T. Reason, supra, n. 13 at 198. 113 N. McIntyre and K. Popper, ‘The Critical Attitude in Medicine: The Need for a New Ethics' (1983) 287 British Medical Journal 1919. 114 Department of Transport, MV Herald of Free Enterprise. Report of Court No. 8074. Formal Investigation (HMSO 1987) para. 14.1. 115 Cullen (Chair), The Ladbroke Grove Rail Inquiry: Part 1 Report (HMSO 2001). 116 Hidden (Chair), Investigation into the Clapham Junction Railway Accident (Cmnd. 820 1989). 117 Fennell (Chair), Investigation into the King's Cross Underground Fire (Cmnd. 499 1988). 118 Cullen (Chair), The Public Inquiry into the Piper Alpha Disaster (Cmnd. 1310 1990). 119 Phillips (Chair), The Inquiry into BSE and Variant CJD in the United Kingdom (2000). 120 Hutton (Chair), Report of the Inquiry into the Circumstances Surrounding the Death of Dr David Kelly CMG (2004). 121 As Robin Cook mockingly remarked: ‘This must be the most embarrassing failure in the history of British intelligence. Yet according to Lord Butler, no-one is to blame. Everyone behaved perfectly properly and nobody made a mistake. Poor things, they were let down by the system and institutional weaknesses.' R. Cook, ‘Britain's Worst Intelligence Failure, and Lord Butler Says No One is to Blame' The Independent 15.7.04, p. 21. 122 The Stephen Lawrence Inquiry (Cmnd. 4262-I 1999). 123 Bichard (Chair), An Independent Inquiry arising from the Soham Murders (2004). 124 Kennedy, supra, n. 5 at 1. 125 The Shipman Inquiry: First Report Volume One: Death Disguised (2002) at 14.15. 126 See http://www.nhsla.com/RiskManagement/. 127 For a discussion on the economic incentives for error reduction see A. Gray, ‘Adverse Events and the NHS: An Economic Analysis' (2003); http://www.npsa.nhs.uk. 128 Healthcare Commission, ‘Assessment for Improvement: The Annual Health Check' (Healthcare Commission 2005). 129 ‘The Management of Suspensions of Clinical Staff in NHS Hospital and Ambulance Trusts in England', Report by the Comptroller and Auditor General, H.C. 1143 2003. 130 R. Dingwall, supra, n. 78 at 478. 131 U. Beck, supra, n. 77 at 33 (emphasis in original). 132 D. Irvine, The Doctors' Tale: Professionalism and Public Trust (Radcliffe Medical Press 2003). 133 ‘Hospital Did Not Stop Killer Surgeon' The Times 24.6.04, p. 21.
TI - Outing Medical Errors: Questions of Trust and Responsibility
JF - Medical Law Review
DO - 10.1093/medlaw/fwi042
DA - 2006-04-01
UR - https://www.deepdyve.com/lp/oxford-university-press/outing-medical-errors-questions-of-trust-and-responsibility-JhDCYW850B
SP - 22
EP - 43
VL - 14
IS - 1
DP - DeepDyve
ER -