Whether it was the worryingly blatant lack of ethical thought shown by the Nazis in their concentration camp experiments, or the introduction of a new era of nuclear science after America’s 1945 bombing of Hiroshima and Nagasaki, scientific research was changing and its ethics were crying out for reform at the end of World War II. This view was shared by judges at the Nuremberg Trials, who produced the Nuremberg Code in 1947 to act as a set of international guidelines for medical research. By comparing scientific and medical ethics before and after it was drafted, it will become clear that even though the Code’s impact was minimal until well into the 1960s, medical ethics became a prevalent socio-political issue that kept its relevance throughout the Cold War. It also set in motion a progressive, linear timeline along which scientific ethics have drastically improved in regulation and implementation. The Nuremberg Code has become the cornerstone of modern medical practice, and given how the scale of nuclear science introduced a whole new set of ethical considerations, World War II must be seen as a major turning point in the history of medical ethics.
Firstly, the standards of scientific practice and ethics in 1945 must be considered to provide a starting point against which future developments can be compared. In a horribly ironic case given what occurred, Germany had arguably the clearest medical guidelines in the world by the 1940s (the term ‘medical’ will refer to studies relating to humans). These dated back to the 1900 Prussian Regulation, probably the first official code in Western medicine, which set the precedent for the requirement of informed consent. It was created by the Prussian Parliament following the work of Albert Neisser in 1898, who injected unknowing inpatients with a serum that eventually caused, not prevented, syphilis. Nontherapeutic experiments were to be prohibited if the subject had not given their “unambiguous consent”, or if the consent had not been preceded by an explanation of the possible negative consequences. Furthermore, in 1931 the Reich government issued the Regulations Concerning New Therapy and Human Experimentation, reasserting that nontherapeutic research was forbidden without informed consent. Ethical and legal regulations (especially the concept of informed consent) were clearly not new ideas in Germany when World War II began, yet without an institution to enforce them within medical circles, scientists became lethal instruments of Nazi crime. Inhabitants of concentration camps were used to investigate biological and chemical weapons, and perverse theories on eugenics and the body. These experiments ranged from the forced sterilisation and murder of the handicapped, to the prolonged submersion of live subjects in freezing water, to deliberate infection with cancer, typhus or malaria. Ulf Schmidt has argued that the decay of German medical ethics was caused by World War I and the Great Depression, as they eroded liberal values, caused a prioritisation of the nation’s survival over the individual, and led to a radicalisation of medical and political opinions on racial hygiene.
This is certainly worth considering given that the government’s 1931 regulations were probably issued to counter the emerging convergence of racial extremism and science. They failed to do so, though, and as the horrific product of this convergence emerged at the Doctors’ Trial in 1947, there was widespread public outcry that these medical professionals – trained in an advanced scientific culture to care for the sick – had instead disregarded human dignity and inflicted pain and murder on innocent people. Those in legal and public circles had presumed a traditional ethos of care lay at the heart of medical practice, but the abuse of the experimenter’s paternalistic position forced the judges at Nuremberg to produce an international code that became the beating heart of human rights in medical research.
World War II is also considered a turning point in scientific ethics, for it provided the first showcase of the atom’s destructive power. ‘Little Boy’ and ‘Fat Man’ delivered President Truman’s fateful promise of a “rain of ruin from the air, the like of which [had] never been seen before”. Public, military and scientific communities witnessed the power of the atom, which simultaneously ended the war and left Japanese survivors to deal with debilitating illnesses for the foreseeable future. A frenzy of testing ensued, changing the ethics of scientific research dramatically because many more people were at risk than in most medical investigations. Moreover, whilst ethical concerns should have been amplified further by the limited knowledge of fallout’s effects, the pressures of the Cold War prevented this from occurring. World War II marked both the lowest point in medical ethics and the most ground-breaking platform for future scientific research, and its effects on the field of science must be seen as profound, even if the ethical practice that arose was not of a sufficient standard. Most importantly, though, the blatant betrayal of national scientific guidelines by the Nazis forced the production of the Nuremberg Code. Had Nazi experiments not been so cruel, the medical world might not have seen such an effort to change its ethics for many more years.
Unfortunately, though, the Nuremberg Code failed to significantly affect medical research in the first decade or so of its life. Events in Nazi Germany had shown that without an institution to implement ethical regulations, violations could easily go unprosecuted, and the Nuremberg Code suffered a similar fate. Yet most damning was the reluctance of Western medical practitioners to recognise Nazi experiments as ‘science’, which led them to exempt themselves from the Code. In the words of Jay Katz, it was seen as a “good code for barbarians but an unnecessary code” for ordinary scientists. There has been much discourse over whether Nazi science should be labelled as ‘science’ at all, and thus whether it deserves a place in the history of medical ethics. Paul Julian Weindling endorses the view shared by Jay Katz and Henry Beecher that the extreme cruelty of the Nazi experiments justifies their separation from other medical practice. Ulf Schmidt likewise argues that Nazi science was so heavily influenced by the ideological quest for racial purity that it must be seen as an exception. Contrastingly, Robert Proctor has explored the pioneering work of Nazi scientists like Eduard Pernkopf, whose Pernkopf Atlas is now considered the bible of anatomical works. Proctor concludes that whilst it may be easy to relegate all Nazi research to ‘medicine gone mad’, we must remember that the unethical practice occurred concurrently with pioneering, ‘good’ scientific investigations. Put simply, the Nazis sought an advancement of all medical knowledge at any sacrifice. Yet for Western researchers, accepting the Code would mean accepting Nazi practice as ‘science’, and they were rightly keen to avoid any association with German study. Unfortunately, this attitude was widely shared, and the Code’s immediate impact was thus destined to be minimal at best.
As the Cold War progressed, scientific progress was considered an ‘essential key’ to national security, as Vannevar Bush had declared in 1945. Yet the increasing pressures placed on military science in this quest for progress forced the sacrifice of ethical considerations. After the Soviets captured German nerve gas plants, Britain’s state-funded Porton Down research facility tested nerve agents on armed forces volunteers.
However, staff removed any mention of possible physical discomfort from the recruitment literature, even after the tragic death of Airman Maddison from nerve gas poisoning in May 1953. Similarly, Operation Sea Spray in 1950 was another example of Western military scientists ignoring the Code. The US Army tested the susceptibility of San Francisco to bacterial warfare by secretly attacking the city with two supposedly ‘harmless’ bacteria, Serratia marcescens and Bacillus globigii. Multiple people were hospitalised with urinary tract infections, and one man died, his post-mortem urine sample containing Serratia marcescens. Although the government was acquitted of any responsibility, the mock attack remains one of the biggest violations of the Nuremberg Code, with the infected public completely unaware that the experiment was occurring.
Away from military-oriented research, the infamous Tuskegee syphilis study of 1932 to 1972 searched for the most effective treatment of venereal disease for soldiers. Whilst the men were not deliberately infected, they were deliberately not treated, even after the US Public Health Service deemed penicillin to be the most effective treatment in 1943, and this was surely a product of the underlying racism in America when the study began. Whilst a clear violation of the Code, there was a larger issue at play too, and a clear comparison can be drawn between the racist natures of both the Tuskegee study and the Nazi experiments. Furthermore, the studies were similar in that both began in the interests of national security (before unethical ideologies prevailed) and in that both withheld experimental risks from those involved. Whilst American research and Nazi murder “were set up as opposites”, the divide between ‘barbarians’ and ‘ordinary scientists’ was evidently more blurred than anticipated. The concepts embodied within the Nuremberg Code had clearly not improved medical ethics in this decade; instead, the Code had an underwhelming impact reminiscent of pre-war German regulations. With Western researchers clinging to their traditional paternalistic obligations and assumptions, scientific progression and national security were continually prioritised over the wellbeing of experimental participants amidst the Cold War.
Contrastingly, nuclear science was in its international infancy in the 1950s, with new ethical issues arising from the larger numbers of people involved in these ‘Big Science’ experiments. In truth, the horrific effects of radiation sickness, seen in media coverage of Japanese nuclear survivors, meant that the ethical conduct of nuclear testing was subject to great scrutiny before it even began. Soviet and American nuclear scientists did little to calm any fears, however, because the competitive politics of the Cold War once more forced a prioritisation of national security over the individual and thus a neglect of the Nuremberg Code. One of the starkest examples was the Castle Bravo nuclear test at Bikini Atoll in 1954, described by John Krige as “a technological triumph, but a human and public relations disaster”. When the US detonated the hydrogen bomb ‘Bravo’, both the Japanese fishermen aboard the ‘Lucky Dragon’ and the residents of the Marshall Islands’ Rongelap and Utrik Atolls were unethically exposed to dangerous levels of radioactive fallout, and they suffered from nausea, skin irritation and, in one fisherman’s case, death. The atoll populations were neither informed nor relocated prior to the detonation, yet the test went ahead at the command of Major General Percy Clarkson, who knew there was a likelihood that unfavourable winds would carry fallout to the atolls. Not only did the experiment take place without the consent of those in the area, it also went ahead despite the almost inevitable chance of disabling injury or death. Moreover, Clarkson’s reluctance to wait for more favourable winds (as there had been three days earlier) is further evidence of an arrogance in Western research which caused professionals to ignore the Nuremberg Code. Importantly, though, the US responded with reparations, health care, and increased investigation into civilian-based atomic applications in line with Eisenhower’s ‘Atoms for Peace’ speech.
Governments were becoming more concerned with public reaction to unethical scientific practice, and this was a direct result of the public nature of the Nuremberg Trials. However, the military’s prioritisation of national security in the 1950s and 60s meant that thousands of unknowing, non-consenting civilians were once more exposed to great danger.
As in medical research, violations of the Nuremberg Code in nuclear science were not confined to military-led investigations. Soviet scientists defied the Code’s concepts through the disposal of nuclear waste from their plutonium plants, as Kate Brown investigated in her book Plutopia. Inspired by America’s Hanford Plutonium Plant, leaders of the Maiak Plutonium Plant pumped low- and medium-level waste (categorised by level of radioactivity) into the Techa river, and diverted high-level waste into underground containers. When these containers inevitably filled up, the waste containing fatal levels of radiation was also disposed of into the Techa, and from 1949 to 1951 around 20% of the river consisted of plant effluent, which 124,000 people unknowingly used for drinking, washing and watering crops. In 1951, a team of Soviet radiation monitors discovered that inhabitants of Metlino, the river town closest to the plant, were radiating at levels 100 times greater than normal, and residents were swiftly relocated without explanation. Whilst they obviously knew of Maiak’s existence (and welcomed the economic opportunities it provided), this river-dependent portion of the population was not informed about the nuclear waste, which meant they were unknowingly exposing themselves to fatal radiation levels (as the atoll populations would also do following the Castle Bravo test). Yet plutonium production and atomic weapons testing could not afford to be stopped at this time, for nuclear war seemed imminent whilst the Soviets and Americans faced off in Korea and China. It remains unethical, but the decisions that were made can at least be understood in the context of the Cold War. Moreover, the responses of both sides to their tragic mistakes (public apologies and reparations in the US, and swift relocations in the Soviet Union) again showed a humility which suggests the Nuremberg Code had infiltrated scientific research ethics after all.
How sincere these apologies were is up for debate, but clearly governments were beginning to feel responsible for the protection of their people during scientific research as ethical considerations began to infiltrate society at all levels. Although the pressures of the nuclear arms race prevented the full implementation of the Nuremberg Code in science in the 1950s and 1960s, researchers and governments appeared much more aware of the issue, evidence perhaps that the Code was beginning to have an effect.
Whilst the Nuremberg Code’s immediate impact within both medical and scientific circles was far less profound than expected, its legacy has been central to the vastly improved research ethics that we know today. When the Nuremberg Trials exposed the results of improper medical conduct, a horrified public ensured that medical ethics became a much more widely discussed socio-political issue, and the birth of nuclear science saw public scrutiny widen to include ‘Big Science’ experiments. Public outcry at studies in Tuskegee, San Francisco and Bikini Atoll was matched by intra-leadership horror at the Techa river’s radiation, and governments had to scramble to save face after violating the Nuremberg Code’s ethical regulations. Whilst there had been a similar governmental reaction to the Neisser case of the late 1890s, there was comparatively little public outcry at that experiment because ethics were not yet a widely discussed issue; raising that public consciousness remains one of the Code’s greatest impacts throughout the 1950s. The Code also laid the foundations upon which future regulations could tighten the restrictions on scientific practice. The Helsinki Declaration, adopted by the World Medical Association (WMA) in 1964, was the first robust reinforcement of the Code and added the component of an independent review to experiments. The Declaration ensured there were now two separate walls of protection for research subjects: the requirement of informed consent and the independent review, which could threaten convictions for unethical conduct. Even modern classified research in the US remains subject to internal reviews and security checks on informed consent forms, proving that the concepts embodied in these regulations have indeed shaped modern research ethics. Following the Declaration came the CIOMS regulations of 1982, which enforced the need for cultural sensitivity when selecting volunteers, and then in 1996 the newly formed ICH announced goals to impose universal standards of medical ethics on experimental sponsors. Clearly, there has been a growing codification of medical regulation in the last sixty years (indeed, around a fifth of all international codes relate to human experimentation), and an increase in governmental intervention (the French national bioethical committee published the nation’s Laws of Bioethics in 1994). However, Ulrich Tröhler delivered a scathing review of medical ethics, asserting that this history remains “one of continuous abuses” despite the Nuremberg Code, which was incidentally “endorsed in very few places and had virtually no impact” on scientific practice. The Helsinki Declaration receives similar criticism, being described as “paternalistic” and in need of further amendments, a view supported by Ulf Schmidt.
Although the Helsinki Declaration did grant research professionals more power by weakening the importance of informed consent (which was now only necessary ‘if possible’), the addition of the independent review process kept researchers firmly in line. Moreover, whilst Tröhler may have seen the numerous new regulations as nothing but “emblematic symbols”, they undoubtedly maintained the public relevance of medical ethics and ensured that experimenters could no longer feign ignorance as those in mid-twentieth-century Europe had. Informed consent and human rights have been central issues in science ever since the historic Nuremberg Code which, as Evelyne Shuster concluded, “changed forever the way both physicians and the public view the proper conduct of medical research”.
In summary, the history of medical ethics in the last fifty years has not been perfect, yet it more closely resembles a progressive, linear timeline than Tröhler’s scathing description. The Code’s impact was unquestionably underwhelming during the 1950s and 1960s, but this was due to the reluctance to accept Nazi practice as ‘science’ and the intense military pressures of the early Cold War, both of which resulted in an erosion of liberal values similar to that of German science after World War I. Despite the unethical practice that resulted, the increase in public outcry and governmental apologies (caused initially by the public exposure of Nazi medical practice) shows that research ethics was becoming a more prevalent issue than ever before. Furthermore, the Helsinki Declaration revised and reinforced most of the Code’s concepts, and the ensuing codification of medical ethics ensured it remained publicly relevant and universally accepted. World War II must therefore be seen as a major turning point in the ethics of both scientific and medical research and practice. Not only did it facilitate the birth of ‘Big Science’ and the new ethical considerations that came with it, but most importantly the unethical practice of the Nazis caused the publication of the most influential code in medical history, which transformed how the issue was seen in public, medical and political circles. The horrific concentration camp experiments have ultimately improved medical ethics beyond question. The lives lost to science in concentration camps across Europe have not been for nothing after all.