Penicillin
From Wikipedia, the free encyclopedia

Penicillin core structure, where "R" is the variable group.

Penicillin (sometimes abbreviated PCN or pen) is a group of antibiotics derived from Penicillium fungi,[1] including penicillin G (intravenous use), penicillin V (oral use), procaine penicillin, and benzathine penicillin (intramuscular use). Penicillin antibiotics were among the first drugs to be effective against many previously serious diseases, such as bacterial infections caused by staphylococci and streptococci. Penicillins are still widely used today, though misuse has now made many types of bacteria resistant. All penicillins are β-lactam antibiotics and are used in the treatment of bacterial infections caused by susceptible, usually Gram-positive, organisms. Several enhanced penicillin families also exist, effective against additional bacteria: these include the antistaphylococcal penicillins, aminopenicillins and the more powerful antipseudomonal penicillins.

Medical uses

The term "penicillin" is often used generically to refer to benzylpenicillin (penicillin G, the original penicillin, discovered in 1928), procaine benzylpenicillin (procaine penicillin), benzathine benzylpenicillin (benzathine penicillin), and phenoxymethylpenicillin (penicillin V). Procaine penicillin and benzathine penicillin have the same antibacterial activity as benzylpenicillin but act for a longer period of time. Phenoxymethylpenicillin is less active against Gram-negative bacteria than benzylpenicillin.[2][3] Benzylpenicillin, procaine penicillin and benzathine penicillin are given by injection (parenterally), but phenoxymethylpenicillin is given orally.

Susceptibility

Despite the expanding number of penicillin-resistant bacteria, penicillin can still be used to treat a wide range of infections caused by susceptible bacteria, including members of the Streptococcus, Staphylococcus, Clostridium, and Listeria genera.
The following list illustrates minimum inhibitory concentration (MIC) susceptibility data for a few medically significant bacteria:[4][5]

Listeria monocytogenes: ≤ 0.06 μg/ml to 0.25 μg/ml
Neisseria meningitidis: ≤ 0.03 μg/ml to 0.5 μg/ml
Staphylococcus aureus: ≤ 0.015 μg/ml to > 32 μg/ml

Adverse effects

Main article: Penicillin drug reaction

Common adverse drug reactions (≥ 1% of patients) associated with use of the penicillins include diarrhoea, hypersensitivity, nausea, rash, neurotoxicity, urticaria, and superinfection (including candidiasis). Infrequent adverse effects (0.1–1% of patients) include fever, vomiting, erythema, dermatitis, angioedema, seizures (especially in people with epilepsy), and pseudomembranous colitis.[6]

History

Discovery

Main article: History of penicillin

Alexander Fleming, who is credited with discovering penicillin in 1928. Sample of penicillium mould presented by Alexander Fleming to Douglas Macleod, 1935

In 1897 a French physician, Ernest Duchesne, at the École du Service de Santé Militaire in Lyon, published a medical thesis entitled Contribution à l'étude de la concurrence vitale chez les micro-organismes : antagonisme entre les moisissures et les microbes (Contribution to the study of vital competition in microorganisms: antagonism between moulds and microbes), in which he specifically studied the interaction between Escherichia coli and Penicillium glaucum. He independently discovered the healing properties of P. glaucum, even curing infected guinea pigs of typhoid. His dissertation[16] was ignored by the Institut Pasteur.
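MIC figures like those above are turned into a susceptibility call by comparing the measured value against clinical breakpoints. A minimal sketch of that comparison; note that the breakpoint values used here are illustrative placeholders, not actual CLSI or EUCAST breakpoints:

```python
def interpret_mic(mic_ug_per_ml, susceptible_at_or_below, resistant_at_or_above):
    """Classify an isolate from its measured MIC (in μg/ml).

    Real breakpoints must come from a current clinical standard
    (e.g. CLSI or EUCAST); the values passed below are made up
    purely for illustration.
    """
    if mic_ug_per_ml <= susceptible_at_or_below:
        return "susceptible"
    if mic_ug_per_ml >= resistant_at_or_above:
        return "resistant"
    return "intermediate"

# Hypothetical breakpoints: S ≤ 0.12 μg/ml, R ≥ 0.25 μg/ml
print(interpret_mic(0.06, 0.12, 0.25))  # susceptible
print(interpret_mic(32.0, 0.12, 0.25))  # resistant
```

The wide S. aureus range quoted above (≤ 0.015 to > 32 μg/ml) is exactly why a single organism can span all three categories depending on the isolate.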
Although his work anticipated antibiotic-mediated therapy, and penicillin in particular, it was subsequently forgotten.[17]

The discovery of penicillin is attributed to Scottish scientist and Nobel laureate Alexander Fleming in 1928.[18] He showed that, if Penicillium rubens[19] were grown in the appropriate substrate, it would exude a substance with antibiotic properties, which he dubbed penicillin. This serendipitous observation began the modern era of antibiotic discovery. The development of penicillin for use as a medicine is attributed to the Australian Nobel laureate Howard Walter Florey, together with the German Nobel laureate Ernst Chain and the English biochemist Norman Heatley.[20]

Fleming recounted that the date of his discovery of penicillin was the morning of Friday, September 28, 1928.[21] The traditional version of this story describes the discovery as a fortuitous accident: in his laboratory in the basement of St Mary's Hospital in London (now part of Imperial College), Fleming noticed that a Petri dish containing Staphylococcus, which had been mistakenly left open, had been contaminated by blue-green mould from an open window, which formed a visible growth.[22] There was a halo of inhibited bacterial growth around the mould. Fleming concluded that the mould released a substance that repressed the growth and caused lysing of the bacteria.[20]

Scientists now suspect that Fleming's story of the initial discovery of the antibacterial properties of the penicillium mould is inaccurate. With a modern understanding of how the bacteria and the mould interact, scientists know that if bacteria had already been present on the Petri dish they would have inhibited the growth of the mould, and Fleming would not have noticed any mould on the plate at all. A more likely story is that a spore from a laboratory one floor below, run by C. J. La Touche, was transferred to Fleming's Petri dish before the bacteria were added.
At the time of the initial discovery, La Touche was working with the same mould found in Fleming's Petri dish.[22] Once Fleming made his discovery, he grew a pure culture and discovered it was a Penicillium mould, now known to be Penicillium notatum. Fleming coined the term "penicillin" to describe the filtrate of a broth culture of the Penicillium mould. Fleming asked C. J. La Touche to help identify the mould, which he incorrectly identified as Penicillium rubrum (later corrected by Charles Thom). Fleming expressed initial optimism that penicillin would be a useful disinfectant, because of its high potency and minimal toxicity in comparison to the antiseptics of the day, and noted its laboratory value in the isolation of Bacillus influenzae (now called Haemophilus influenzae).[23][22]

Fleming was a famously poor communicator and orator, which meant his findings were not initially given much attention.[22] He was unable to convince a chemist to help him extract and stabilize the antibacterial compound found in the broth filtrate. Despite this, he remained interested in the potential use of penicillin and presented a paper entitled "A Medium for the Isolation of Pfeiffer's Bacillus" to the Medical Research Club of London, which was met with little interest, and even less enthusiasm, by his peers. Had Fleming been more successful at making other scientists interested in his work, penicillin for medicinal use might have been developed years earlier.[22]

Despite the lack of interest from his fellow scientists, he did conduct several experiments on the antibiotic substance he had discovered. Most importantly, he showed that it was nontoxic, first performing toxicity tests in animals and then in humans.
His subsequent experiments on penicillin's response to heat and pH allowed Fleming to increase the stability of the compound.[23] The one test that modern scientists would find missing from his work was a test of penicillin on an infected animal, the results of which would likely have sparked great interest in penicillin and sped its development by almost a decade.[22]

Medical application

Florey (pictured), Fleming and Chain shared a Nobel Prize in 1945 for their work on penicillin.

In 1930, Cecil George Paine, a pathologist at the Royal Infirmary in Sheffield, attempted to use penicillin to treat sycosis barbae, eruptions in beard follicles, but was unsuccessful. Moving on to ophthalmia neonatorum, a gonococcal infection in infants, he achieved the first recorded cure with penicillin, on November 25, 1930. He then cured four additional patients (one adult and three infants) of eye infections, and failed to cure a fifth.[24][25][26]

In 1939, Australian scientist Howard Florey (later Baron Florey) and a team of researchers (Ernst Boris Chain, Arthur Duncan Gardner, Norman Heatley, M. Jennings, J. Orr-Ewing and G. Sanders) at the Sir William Dunn School of Pathology, University of Oxford, made progress in showing the in vivo bactericidal action of penicillin. In 1940 they showed that penicillin effectively cured bacterial infection in mice.[27][28] In 1941 they treated a policeman, Albert Alexander, who had a severe face infection; his condition improved, but then supplies of penicillin ran out and he died. Subsequently, several other patients were treated successfully.[29]

Sulfonamide (medicine)
From Wikipedia, the free encyclopedia

Sulfonamide functional group. Hydrochlorothiazide is a sulfonamide and a thiazide. Furosemide is a sulfonamide, but not a thiazide. Sulfamethoxazole is an antibacterial sulfonamide.

Sulfonamide or sulphonamide is the basis of several groups of drugs.
The original antibacterial sulfonamides (sometimes called sulfa drugs or sulpha drugs) are synthetic antimicrobial agents that contain the sulfonamide group. Some sulfonamides are devoid of antibacterial activity, e.g., the anticonvulsant sultiame. The sulfonylureas and thiazide diuretics are newer drug groups based on the antibacterial sulfonamides.[1][2]

Allergies to sulfonamides are common,[3] hence medications containing sulfonamides are prescribed carefully. It is important to distinguish between sulfa drugs and other sulfur-containing drugs and additives, such as sulfates and sulfites, which are chemically unrelated to the sulfonamide group and do not cause the same hypersensitivity reactions seen with the sulfonamides. Because sulfonamides displace bilirubin from albumin, kernicterus (brain damage due to excess bilirubin) is an important potential side effect of sulfonamide use.

Function

Antimicrobial

Main article: Dihydropteroate synthetase inhibitor

In bacteria, antibacterial sulfonamides act as competitive inhibitors of the enzyme dihydropteroate synthetase (DHPS), an enzyme involved in folate synthesis. Sulfonamides are therefore bacteriostatic: they inhibit the growth and multiplication of bacteria but do not kill them. Humans, in contrast to bacteria, acquire folate (vitamin B9) through the diet.[4]

Structural similarity between sulfonamide (left) and PABA (center) is the basis for the inhibitory activity of sulfa drugs on dihydrofolate (right) biosynthesis.

Other uses

The sulfonamide chemical moiety is also present in other medications that are not antimicrobials, including thiazide diuretics (hydrochlorothiazide, metolazone, and indapamide, among others), loop diuretics (furosemide, bumetanide, and torsemide), acetazolamide, sulfonylureas (glipizide and glyburide, among others), and some COX-2 inhibitors (e.g., celecoxib).
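The competitive inhibition of DHPS described above can be summarized in the standard Michaelis–Menten form for a competitive inhibitor; a sketch, where v is the rate of dihydropteroate synthesis, [S] the PABA concentration, [I] the sulfonamide concentration, and K_i the inhibition constant:

```latex
% Competitive inhibition (standard Michaelis-Menten form);
% [S] = PABA, [I] = sulfonamide:
v = \frac{V_{\max}\,[S]}{K_m\left(1 + \frac{[I]}{K_i}\right) + [S]}
```

Raising [I] only inflates the apparent K_m while leaving V_max unchanged, so a sufficient excess of PABA can outcompete the drug; this is consistent with sulfonamides being bacteriostatic rather than bactericidal.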
Sulfasalazine, in addition to its use as an antibiotic, is also used in the treatment of inflammatory bowel disease.

History

Sulfonamide drugs were the first antibiotics to be used systemically, and paved the way for the antibiotic revolution in medicine. The first sulfonamide, trade-named Prontosil, was a prodrug. Experiments with Prontosil began in 1932 in the laboratories of Bayer AG, at that time a component of the huge German chemical trust IG Farben. The Bayer team believed that coal-tar dyes able to bind preferentially to bacteria and parasites might be used to attack harmful organisms in the body. After years of fruitless trial-and-error work on hundreds of dyes, a team led by physician/researcher Gerhard Domagk[5] (working under the general direction of Farben executive Heinrich Hörlein) finally found one that worked: a red dye synthesized by Bayer chemist Josef Klarer that had remarkable effects on stopping some bacterial infections in mice.[6] The first official communication about the breakthrough discovery was not published until 1935, more than two years after the drug was patented by Klarer and his research partner Fritz Mietzsch.

Prontosil, as Bayer named the new drug, was the first medicine ever discovered that could effectively treat a range of bacterial infections inside the body. It had a strong protective action against infections caused by streptococci, including blood infections, childbed fever, and erysipelas, and a lesser effect on infections caused by other cocci. However, it had no effect at all in the test tube, exerting its antibacterial action only in live animals. Later, it was discovered by Bovet,[7] Federico Nitti, and Jacques and Thérèse Tréfouël, a French research team led by Ernest Fourneau at the Pasteur Institute, that the drug was metabolized into two pieces inside the body, releasing from the inactive dye portion a smaller, colorless, active compound called sulfanilamide.[8] The discovery helped establish the concept of "bioactivation" and dashed the German corporation's dreams of enormous profit: the active molecule sulfanilamide (or sulfa) had first been synthesized in 1906 and was widely used in the dye-making industry, its patent had since expired, and the drug was available to anyone.[9]

The result was a sulfa craze.[10] For several years in the late 1930s, hundreds of manufacturers produced tens of thousands of tons of myriad forms of sulfa. This, together with nonexistent testing requirements, led to the elixir sulfanilamide disaster in the fall of 1937, during which at least 100 people were poisoned with diethylene glycol. The episode led to the passage of the Federal Food, Drug, and Cosmetic Act in 1938 in the United States.

As the first and only effective antibiotic available in the years before penicillin, sulfa drugs continued to thrive through the early years of World War II.[11] They are credited with saving the lives of tens of thousands of patients, including Franklin Delano Roosevelt, Jr. (son of US President Franklin Delano Roosevelt) and Winston Churchill. Sulfa had a central role in preventing wound infections during the war. American soldiers were issued a first-aid kit containing sulfa pills and powder and were told to sprinkle it on any open wound.

The sulfanilamide compound is more active in the protonated form. The drug has very low solubility and sometimes can crystallize in the kidneys, due to its first pKa of around 10. This is a very painful experience, so patients are told to take the medication with copious amounts of water.
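The solubility argument here can be made quantitative with the Henderson–Hasselbalch relation: for an acidic group, the ionized (more water-soluble) fraction at a given pH is 1/(1 + 10^(pKa − pH)). A minimal sketch comparing a pKa near 10 with the lower pKa of later analogues; the pH and pKa values are illustrative, not measured data:

```python
def ionized_fraction(ph, pka):
    """Fraction of an acidic drug in its ionized (deprotonated,
    more water-soluble) form at a given pH, per Henderson-Hasselbalch."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

urine_ph = 6.0  # illustrative; urinary pH actually varies widely

# Sulfanilamide-like drug, pKa ~ 10: almost entirely un-ionized at
# urinary pH, hence poorly soluble and prone to crystallizing.
print(ionized_fraction(urine_ph, 10.0))  # ~0.0001

# Later analogue, pKa ~ 5.5: a large ionized fraction stays dissolved.
print(ionized_fraction(urine_ph, 5.5))
```

A pKa four units above urinary pH leaves only about 0.01% of the drug ionized, which is the quantitative core of the crystallization problem described above.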
Newer analogous compounds prevent this complication because they have a lower pKa, around 5–6,[citation needed] making them more likely to remain in a soluble form. Many thousands of molecules containing the sulfanilamide structure have been created since its discovery (by one account, over 5,400 permutations by 1945), yielding improved formulations with greater effectiveness and less toxicity. Sulfa drugs are still widely used for conditions such as acne and urinary tract infections, and are receiving renewed interest for the treatment of infections caused by bacteria resistant to other antibiotics.

World Health Organization
From Wikipedia, the free encyclopedia

Flag of the World Health Organization
Abbreviation: WHO; OMS
Formation: 7 April 1948
Type: Specialized agency of the United Nations
Legal status: Active
Headquarters: Geneva, Switzerland
Head: Margaret Chan
Parent organization: United Nations Economic and Social Council (ECOSOC)
Website: www.who.int

The World Health Organization (WHO) is a specialized agency of the United Nations (UN) that is concerned with international public health. It was established on 7 April 1948, headquartered in Geneva, Switzerland. The WHO is a member of the United Nations Development Group. Its predecessor, the Health Organization, was an agency of the League of Nations.

The constitution of the World Health Organization was signed by 61 countries on 22 July 1946, with the first meeting of the World Health Assembly finishing on 24 July 1948. The organization incorporated the Office International d'Hygiène Publique and the League of Nations Health Organization. Since its creation, it has played a leading role in the eradication of smallpox.
Its current priorities include communicable diseases, in particular HIV/AIDS, Ebola, malaria and tuberculosis; the mitigation of the effects of non-communicable diseases; sexual and reproductive health, development, and aging; nutrition, food security and healthy eating; occupational health; substance abuse; and driving the development of reporting, publications, and networking. The WHO is responsible for the World Health Report, a leading international publication on health, the worldwide World Health Survey, and World Health Day (7 April of every year). The head of WHO is Margaret Chan. The 2014/2015 proposed budget of the WHO is about US$4 billion.[1] About US$930 million is to be provided by member states, with a further US$3 billion to come from voluntary contributions.[1]

History

Establishment

During the 1945 United Nations Conference on International Organization, Dr. Szeming Sze, a delegate from China, conferred with Norwegian and Brazilian delegates on creating an international health organization under the auspices of the new United Nations. After failing to get a resolution passed on the subject, Alger Hiss, the Secretary General of the conference, recommended using a declaration to establish such an organization. Dr.
Sze and other delegates lobbied, and a declaration passed calling for an international conference on health.[2] The use of the word "world", rather than "international", emphasized the truly global nature of what the organization was seeking to achieve.[3] The constitution of the World Health Organization was signed by all 51 countries of the United Nations, and by 10 other countries, on 22 July 1946.[4] It thus became the first specialised agency of the United Nations to which every member subscribed.[5] Its constitution formally came into force on the first World Health Day on 7 April 1948, when it was ratified by the 26th member state.[6]

The first meeting of the World Health Assembly finished on 24 July 1948, having secured a budget of US$5 million (then GBP£1,250,000) for the 1949 year. Andrija Stampar was the Assembly's first president, and G. Brock Chisholm was appointed Director-General of WHO, having served as Executive Secretary during the planning stages.[3] Its first priorities were to control the spread of malaria, tuberculosis and sexually transmitted infections, and to improve maternal and child health, nutrition and environmental hygiene. Its first legislative act concerned the compilation of accurate statistics on the spread and morbidity of disease.[3] The logo of the World Health Organization features the Rod of Asclepius as a symbol for healing.[7]

Operational history

Three former directors of the Global Smallpox Eradication Programme read the news that smallpox had been globally eradicated, 1980

WHO established an epidemiological information service via telex in 1947, and by 1950 a mass tuberculosis inoculation drive (using the BCG vaccine) was under way. In 1955, the malaria eradication programme was launched, although its objectives were later altered. 1965 saw the first report on diabetes mellitus and the creation of the International Agency for Research on Cancer. WHO moved into its headquarters building in 1966.
The Expanded Programme on Immunization was started in 1974, as was the onchocerciasis control programme – an important partnership between the Food and Agriculture Organization (FAO), the United Nations Development Programme (UNDP), and the World Bank. In the following year, the Special Programme for Research and Training in Tropical Diseases was also launched. In 1976, the World Health Assembly voted to enact a resolution on Disability Prevention and Rehabilitation, with a focus on community-driven care. The first list of essential medicines was drawn up in 1977, and a year later the ambitious goal of "health for all" was declared. In 1986, WHO started its global programme on the growing problem of HIV/AIDS, followed two years later by additional attention to preventing discrimination against sufferers; UNAIDS was formed in 1996. The Global Polio Eradication Initiative was established in 1988.[8]

In 1958, Viktor Zhdanov, Deputy Minister of Health for the USSR, called on the World Health Assembly to undertake a global initiative to eradicate smallpox, resulting in Resolution WHA11.54.[9] At this point, 2 million people were dying from smallpox every year. In 1967, the World Health Organization intensified the global smallpox eradication campaign by contributing $2.4 million annually to the effort and adopted a new disease surveillance method.[10][11] The initial problem the WHO team faced was inadequate reporting of smallpox cases.
WHO established a network of consultants who assisted countries in setting up surveillance and containment activities.[12] The WHO also helped contain the last European outbreak, in Yugoslavia in 1972.[13] After over two decades of fighting smallpox, the WHO declared in 1979 that the disease had been eradicated – the first disease in history to be eliminated by human effort.[14]

In 1998, on the fiftieth anniversary of WHO's founding, WHO's Director-General highlighted gains in child survival, reduced infant mortality, raised life expectancy and reduced rates of "scourges" such as smallpox and polio. He did, however, accept that more had to be done to assist maternal health and that progress in this area had been slow.[15] Cholera and malaria have remained problems since WHO's founding, although in decline for a large part of that period.[16]

In the twenty-first century, the Stop TB Partnership was created in 2000, along with the UN's formulation of the Millennium Development Goals. The Measles Initiative was formed in 2001, and is credited with reducing global deaths from the disease by 68% by 2007.
In 2002, The Global Fund to Fight AIDS, Tuberculosis and Malaria was drawn up to improve the resources available.[8] In 2006, the organization endorsed the world's first official HIV/AIDS Toolkit for Zimbabwe, which formed the basis for a global prevention, treatment and support plan to fight the AIDS pandemic.[17]

Overall focus

The WHO's Constitution states that its objective "is the attainment by all people of the highest possible level of health".[18] WHO fulfils its objective through its functions as defined in its Constitution:
(a) to act as the directing and co-ordinating authority on international health work;
(b) to establish and maintain effective collaboration with the United Nations, specialized agencies, governmental health administrations, professional groups and such other organizations as may be deemed appropriate;
(c) to assist Governments, upon request, in strengthening health services;
(d) to furnish appropriate technical assistance and, in emergencies, necessary aid upon the request or acceptance of Governments;
(e) to provide or assist in providing, upon the request of the United Nations, health services and facilities to special groups, such as the peoples of trust territories;
(f) to establish and maintain such administrative and technical services as may be required, including epidemiological and statistical services;
(g) to stimulate and advance work to eradicate epidemic, endemic and other diseases;
(h) to promote, in co-operation with other specialized agencies where necessary, the prevention of accidental injuries;
(i) to promote, in co-operation with other specialized agencies where necessary, the improvement of nutrition, housing, sanitation, recreation, economic or working conditions and other aspects of environmental hygiene;
(j) to promote co-operation among scientific and professional groups which contribute to the advancement of health;
(k) to propose conventions, agreements and regulations, and make recommendations with respect to
international health matters and to perform such duties as may be assigned thereby to the Organization.

WHO currently defines its role in public health as follows:[19]
providing leadership on matters critical to health and engaging in partnerships where joint action is needed;
shaping the research agenda and stimulating the generation, translation and dissemination of valuable knowledge;
setting norms and standards and promoting and monitoring their implementation;
articulating ethical and evidence-based policy options;
providing technical support, catalyzing change, and building sustainable institutional capacity; and
monitoring the health situation and assessing health trends.

Communicable diseases

The 2012–2013 WHO budget identified 13 areas among which funding was distributed.[20] Two of those thirteen areas related to communicable diseases: the first, to reduce the "health, social and economic burden" of communicable diseases in general; the second, to combat HIV/AIDS, malaria and tuberculosis in particular.[20]

In terms of HIV/AIDS, WHO works within the UNAIDS network and considers it important that it works in alignment with UNAIDS objectives and strategies. It also strives to involve sectors of society other than health to help deal with the economic and social effects of the disease.[21] In line with UNAIDS, WHO has set itself the interim task, between 2009 and 2015, of reducing the number of those aged 15–24 years who are infected by 50%; reducing new HIV infections in children by 90%; and reducing HIV-related deaths by 25%.[22]

Although WHO dropped its commitment to a global malaria eradication campaign in the 1970s as too ambitious, it retains a strong commitment to malaria control. WHO's Global Malaria Programme works to keep track of malaria cases and of future problems in malaria control schemes. WHO is expected to report, likely in 2015, on whether RTS,S/AS01, currently in research, is a viable malaria vaccine.
For the time being, insecticide-treated mosquito nets and insecticide sprays are used to prevent the spread of malaria, as are antimalarial drugs – particularly for vulnerable people such as pregnant women and young children.[23]

WHO's help has contributed to a 40% fall in the number of deaths from tuberculosis between 1990 and 2010, and it claims that since 2005 over 46 million people have been treated and an estimated 7 million lives saved through practices it advocates. These include engaging national governments and their financing, early diagnosis, standardising treatment, monitoring the spread and impact of tuberculosis, and stabilising the drug supply. It has also recognised the vulnerability of victims of HIV/AIDS to tuberculosis.[24]

WHO aims to eradicate polio. It has helped reduce cases by 99% since the Global Polio Eradication Initiative was launched in 1988, which partnered WHO with Rotary International, the US Centers for Disease Control and Prevention (CDC) and the United Nations Children's Fund (UNICEF), as well as smaller organizations.
It works to immunize young children and prevent the re-emergence of cases in countries declared "polio-free".[25]

Non-communicable diseases, mental health and injuries

Another of the thirteen WHO priority areas is aimed at the prevention and reduction of "disease, disability and premature deaths from chronic noncommunicable diseases, mental disorders, violence and injuries, and visual impairment".[20][26] For example, the WHO promotes road safety as a means to reduce traffic-related injuries.[27] WHO has also worked on global initiatives in surgery, including emergency and essential surgical care,[28] trauma care,[29] and safe surgery.[30] The WHO Surgical Safety Checklist is in current use worldwide in the effort to improve patient safety.[31]

Life course and lifestyle

WHO works to "reduce morbidity and mortality and improve health during key stages of life, including pregnancy, childbirth, the neonatal period, childhood and adolescence, and improve sexual and reproductive health and promote active and healthy aging for all individuals".[20][32] It also tries to prevent or reduce risk factors for "health conditions associated with use of tobacco, alcohol, drugs and other psychoactive substances, unhealthy diets and physical inactivity and unsafe sex".[20][33][34] WHO works to improve nutrition, food safety and food security and to ensure this has a positive effect on public health and sustainable development.[20]

Emergency work

When any sort of disaster or emergency occurs, it is WHO's stated objective to reduce any consequences the event may have on world health and its social and economic implications.[20] On 5 May 2014, WHO announced that the spread of polio is a world health emergency – outbreaks of the disease in Asia, Africa and the Middle East are considered "extraordinary".[35][36] On 8 August 2014, WHO declared that the spread of Ebola is a public health emergency; an outbreak that is believed to have started in Guinea has spread to other
nearby countries such as Liberia and Sierra Leone. The situation in West Africa is considered very serious.[37]

Health policy

WHO addresses government health policy with two aims: firstly, "to address the underlying social and economic determinants of health through policies and programmes that enhance health equity and integrate pro-poor, gender-responsive, and human rights-based approaches" and secondly "to promote a healthier environment, intensify primary prevention and influence public policies in all sectors so as to address the root causes of environmental threats to health".[20]

The organization develops and promotes the use of evidence-based tools, norms and standards to support member states in informing health policy options. It oversees the implementation of the International Health Regulations, and publishes a series of medical classifications; of these, three are overarching "reference classifications": the International Statistical Classification of Diseases (ICD), the International Classification of Functioning, Disability and Health (ICF) and the International Classification of Health Interventions (ICHI).[38] Other international policy frameworks produced by WHO include the International Code of Marketing of Breast-milk Substitutes (adopted in 1981),[39] the Framework Convention on Tobacco Control (adopted in 2003)[40] and the Global Code of Practice on the International Recruitment of Health Personnel (adopted in 2010).[41]

In terms of health services, WHO looks to improve "governance, financing, staffing and management" and the availability and quality of evidence and research to guide policy making.
It also strives to "ensure improved access, quality and use of medical products and technologies".[20]

Governance and support

The remaining two of WHO's thirteen identified policy areas relate to the role of WHO itself:[20] "to provide leadership, strengthen governance and foster partnership and collaboration with countries, the United Nations system, and other stakeholders in order to fulfill the mandate of WHO in advancing the global health agenda"; and "to develop and sustain WHO as a flexible, learning organization, enabling it to carry out its mandate more efficiently and effectively".

Partnerships

The WHO, along with the World Bank, constitutes the core team responsible for administering the International Health Partnership (IHP+). The IHP+ is a group of partner governments, development agencies, civil society and others committed to improving the health of citizens in developing countries. Partners work together to put international principles for aid effectiveness and development cooperation into practice in the health sector.[42]

The organization relies on contributions from renowned scientists and professionals to inform its work, such as the WHO Expert Committee on Biological Standardization,[43] the WHO Expert Committee on Leprosy,[44] and the WHO Study Group on Interprofessional Education & Collaborative Practice.[45] WHO runs the Alliance for Health Policy and Systems Research, targeted at improving health policy and systems.[46] WHO also aims to improve access to health research and literature in developing countries, for example through the HINARI network.[47]

Public health education and action

Each year, the organization marks World Health Day and other observances focusing on a specific health promotion topic. World Health Day falls on 7 April each year, timed to match the anniversary of WHO's founding.
Recent themes have been vector-borne diseases (2014), healthy ageing (2012) and drug resistance (2011).[48] The other official global public health campaigns marked by WHO are World Tuberculosis Day, World Immunization Week, World Malaria Day, World No Tobacco Day, World Blood Donor Day, World Hepatitis Day, and World AIDS Day. As part of the United Nations, the World Health Organization supports work towards the Millennium Development Goals.[49] Of the eight Millennium Development Goals, three - reducing child mortality by two-thirds, reducing maternal deaths by three-quarters, and halting and beginning to reduce the spread of HIV/AIDS - relate directly to WHO's scope; the other five inter-relate and have an impact on world health.[50]

Structure

The World Health Organization is a member of the United Nations Development Group.[67]

Membership

Countries by World Health Organization membership status

As of 2013, the WHO has 194 member states: all Member States of the United Nations except Liechtenstein, as well as the Cook Islands and Niue.[68] (A state becomes a full member of WHO by ratifying the treaty known as the Constitution of the World Health Organization.) As of 2013, it also had two associate members, Puerto Rico and Tokelau.[69] Several other entities have been granted observer status. Palestine is an observer as a "national liberation movement" recognised by the League of Arab States under United Nations Resolution 3118. The Holy See also attends as an observer, as does the Order of Malta.[70] In 2010, Taiwan was invited under the name of "Chinese Taipei".[71] WHO Member States appoint delegations to the World Health Assembly, WHO's supreme decision-making body.
All UN Member States are eligible for WHO membership, and, according to the WHO web site, "other countries may be admitted as members when their application has been approved by a simple majority vote of the World Health Assembly".[68] In addition, the UN observer organizations International Committee of the Red Cross and International Federation of Red Cross and Red Crescent Societies have entered into "official relations" with WHO and are invited as observers. In the World Health Assembly they are seated alongside the other NGOs.[70]

International Federation of Red Cross and Red Crescent Societies (IFRC)

History

The Formation of the IFRC

The International Federation of Red Cross and Red Crescent Societies (IFRC) was founded in 1919 in Paris in the aftermath of World War I. The war had shown a need for close cooperation between Red Cross Societies, which, through their humanitarian activities on behalf of prisoners of war and combatants, had attracted millions of volunteers and built a large body of expertise. A devastated Europe could not afford to lose such a resource. It was Henry Davison, president of the American Red Cross War Committee, who proposed forming a federation of these National Societies. An international medical conference initiated by Davison resulted in the birth of the League of Red Cross Societies, which was renamed in October 1983 the League of Red Cross and Red Crescent Societies, and then in November 1991 became the International Federation of Red Cross and Red Crescent Societies. The first objective of the IFRC was to improve the health of people in countries that had suffered greatly during the four years of war. Its goals were "to strengthen and unite, for health activities, already-existing Red Cross Societies and to promote the creation of new Societies". There were five founding member Societies: Britain, France, Italy, Japan and the United States.
This number has grown over the years and there are now 189 recognized National Societies - one in almost every country in the world.

The Birth of an Idea

The Red Cross idea was born in 1859, when Henry Dunant, a young Swiss man, came upon the scene of a bloody battle in Solferino, Italy, between the armies of imperial Austria and the Franco-Sardinian alliance. Some 40,000 men lay dead or dying on the battlefield and the wounded were lacking medical attention. Dunant organized local people to bind the soldiers' wounds and to feed and comfort them. On his return, he called for the creation of national relief societies to assist those wounded in war, and pointed the way to the future Geneva Conventions. "Would there not be some means, during a period of peace and calm, of forming relief societies whose object would be to have the wounded cared for in time of war by enthusiastic, devoted volunteers, fully qualified for the task?" he wrote. The Red Cross was born in 1863 when five Geneva men, including Dunant, set up the International Committee for Relief to the Wounded, later to become the International Committee of the Red Cross. Its emblem was a red cross on a white background: the inverse of the Swiss flag. The following year, 12 governments adopted the first Geneva Convention, a milestone in the history of humanity, offering care for the wounded and defining medical services as "neutral" on the battlefield.

90 years of improving the lives of the most vulnerable

The idea of pooling the skills and resources of Red Cross Societies to provide humanitarian assistance in peacetime, and not just to prepare for relief in times of war, goes back to the founder of the Movement, Geneva businessman Henry Dunant.

Henry Dunant - the destiny of the Red Cross

Jean-Henry Dunant was born on 8 May 1828 in Geneva to a middle-class Calvinist family.
His early initiatives included participating in the creation of the Young Men's Christian Association (YMCA) in 1852 and the World Alliance of YMCAs in 1855. Further details on the history of the International Red Cross and Red Crescent Movement can be found on the Movement's own web site.

Our vision and mission

The International Federation of Red Cross and Red Crescent Societies (IFRC) is the world's largest humanitarian organization, providing assistance without discrimination as to nationality, race, religious beliefs, class or political opinions. Founded in 1919, the IFRC comprises 189 member Red Cross and Red Crescent National Societies, a secretariat in Geneva and more than 60 delegations strategically located to support activities around the world. There are more societies in formation. The Red Crescent is used in place of the Red Cross in many Islamic countries. The IFRC vision: to inspire, encourage, facilitate and promote at all times all forms of humanitarian activities by National Societies, with a view to preventing and alleviating human suffering, and thereby contributing to the maintenance and promotion of human dignity and peace in the world.

The role of the IFRC

The IFRC carries out relief operations to assist victims of disasters, and combines this with development work to strengthen the capacities of its member National Societies. The IFRC's work focuses on four core areas: promoting humanitarian values, disaster response, disaster preparedness, and health and community care. Further details of this work can be found in the What we do section. The unique network of National Societies - which cover almost every country in the world - is the IFRC's principal strength. Cooperation between National Societies gives the IFRC greater potential to develop capacities and assist those most in need. At a local level, the network enables the IFRC to reach individual communities.
The role of the secretariat in Geneva is to coordinate and mobilize relief assistance for international emergencies, promote cooperation between National Societies and represent these National Societies in the international field. The role of the field delegations is to assist and advise National Societies with relief operations and development programmes, and encourage regional cooperation. The IFRC, together with National Societies and the International Committee of the Red Cross, makes up the International Red Cross and Red Crescent Movement.

History of poliomyelitis

From Wikipedia, the free encyclopedia

An Egyptian stele thought to represent a polio victim. 18th Dynasty (1403-1365 BC).

Main article: Poliomyelitis

The history of poliomyelitis (polio) infections extends into prehistory. Although major polio epidemics were unknown before the 20th century,[1] the disease has caused paralysis and death for much of human history. Over millennia, polio survived quietly as an endemic pathogen until the 1900s, when major epidemics began to occur in Europe;[1] soon after, widespread epidemics appeared in the United States. By 1910, frequent epidemics became regular events throughout the developed world, primarily in cities during the summer months. At its peak in the 1940s and 1950s, polio would paralyze or kill over half a million people worldwide every year.[2] The fear and the collective response to these epidemics would give rise to extraordinary public reaction and mobilization, spurring the development of new methods to prevent and treat the disease and revolutionizing medical philanthropy. Although the development of two polio vaccines has eradicated poliomyelitis in all but four countries, the legacy of poliomyelitis remains, in the development of modern rehabilitation therapy and in the rise of disability rights movements worldwide.
Early history

Ancient Egyptian paintings and carvings depict otherwise healthy people with withered limbs, and children walking with canes at a young age.[3] It is theorized that the Roman Emperor Claudius was stricken as a child, and this caused him to walk with a limp for the rest of his life.[4] Perhaps the earliest recorded case of poliomyelitis is that of Sir Walter Scott. In 1773 Scott was said to have developed "a severe teething fever which deprived him of the power of his right leg."[5] At the time, polio was not known to medicine. A retrospective diagnosis of polio is considered to be strong due to the detailed account Scott later made,[6] and the resultant lameness of his right leg had an important effect on his life and writing.[7] The symptoms of poliomyelitis have been described by many names. In the early nineteenth century the disease was known variously as Dental Paralysis, Infantile Spinal Paralysis, Essential Paralysis of Children, Regressive Paralysis, Myelitis of the Anterior Horns, Tephromyelitis (from the Greek tephros, meaning "ash-gray") and Paralysis of the Morning.[8] In 1789 the first clinical description of poliomyelitis was provided by the British physician Michael Underwood, who referred to polio as "a debility of the lower extremities".[9] The first medical report on poliomyelitis was by Jakob Heine, in 1840; he called the disease Lähmungszustände der unteren Extremitäten ("paralytic conditions of the lower extremities").[10] Karl Oskar Medin was the first to empirically study a poliomyelitis epidemic, in 1890.[11] This work, and the prior classification by Heine, led to the disease being known as Heine-Medin disease.

Epidemics

Major polio epidemics were unknown before the 20th century; localized paralytic polio epidemics began to appear in Europe and the United States around 1900.[1] The first report of multiple polio cases was published in 1843 and described an 1841 outbreak in Louisiana. A fifty-year gap occurs before the next U.S.
report—a cluster of 26 cases in Boston in 1893.[1] The first recognized U.S. polio epidemic occurred the following year in Vermont, with 132 total cases (18 deaths), including several cases in adults.[11] Numerous epidemics of varying magnitude began to appear throughout the country; by 1907 approximately 2,500 cases of poliomyelitis were reported in New York City.[12]

This cardboard placard was placed in windows of residences where patients were quarantined due to poliomyelitis. Violating the quarantine order or removing the placard was punishable by a fine of up to US$100 in 1909.

On Saturday, June 17, 1916, an official announcement of the existence of an epidemic polio infection was made in Brooklyn, New York. That year, there were over 27,000 cases and more than 6,000 deaths due to polio in the United States, with over 2,000 deaths in New York City alone.[13] The names and addresses of individuals with confirmed polio cases were published daily in the press, their houses were identified with placards, and their families were quarantined.[14] Dr. Hiram M. Hiller, Jr., was one of the physicians in several cities who realized what they were dealing with, but the nature of the disease remained largely a mystery.
The 1916 epidemic caused widespread panic, and thousands fled the city to nearby mountain resorts; movie theaters were closed, meetings were canceled, public gatherings were almost nonexistent, and children were warned not to drink from water fountains and told to avoid amusement parks, swimming pools, and beaches.[13] From 1916 onward, a polio epidemic appeared each summer in at least one part of the country, with the most serious occurring in the 1940s and 1950s.[1] In the epidemic of 1949, 2,720 deaths from the disease occurred in the United States and 42,173 cases were reported; Canada and the United Kingdom were also affected.[15][16] Prior to the 20th century, polio infections were rarely seen in infants before 6 months of age; most cases occurred in children 6 months to 4 years of age.[17] Young children who contract polio generally suffer only mild symptoms, but as a result they become permanently immune to the disease.[18] In developed countries during the late 19th and early 20th centuries, improvements were being made in community sanitation, including improved sewage disposal and clean water supplies. Better hygiene meant that infants and young children had fewer opportunities to encounter and develop immunity to polio.
Exposure to poliovirus was therefore delayed until late childhood or adult life, when it was more likely to take the paralytic form.[17] In children, paralysis due to polio occurs in 1 in 1,000 cases, while in adults, paralysis occurs in 1 in 75 cases.[19] By 1950, the peak age incidence of paralytic poliomyelitis in the United States had shifted from infants to children aged 5 to 9 years; about one-third of the cases were reported in persons over 15 years of age.[20] Accordingly, the rate of paralysis and death due to polio infection also increased during this time.[1] In the United States, the 1952 polio epidemic would be the worst outbreak in the nation's history, and is credited with heightening parents' fears of the disease and focusing public awareness on the need for a vaccine.[21] Of the 57,628 cases reported that year, 3,145 died and 21,269 were left with mild to disabling paralysis.[21][22]

Historical treatments

In the early 20th century—in the absence of proven treatments—a number of odd and potentially dangerous polio treatments were suggested. In John Haven Emerson's A Monograph on the Epidemic of Poliomyelitis (Infantile Paralysis) in New York City in 1916,[23] one suggested remedy reads: "Give oxygen through the lower extremities, by positive electricity. Frequent baths using almond meal, or oxidising the water. Applications of poultices of Roman chamomile, slippery elm, arnica, mustard, cantharis, amygdalae dulcis oil, and of special merit, spikenard oil and Xanthoxolinum. Internally use caffeine, Fl. Kola, dry muriate of quinine, elixir of cinchone, radium water, chloride of gold, liquor calcis and wine of pepsin."[24] Following the 1916 epidemics, and having experienced little success in treating polio patients, researchers set out to find new and better treatments for the disease. Between 1917 and the early 1950s several therapies were explored in an effort to prevent deformities, including hydrotherapy and electrotherapy.
In 1935 Claus Jungeblut reported that vitamin C treatment enhanced resistance to poliomyelitis in monkeys.[25] However, follow-up experiments reported by Albert Sabin and by Jungeblut himself were unable to confirm the initially promising results.[26][27] Later, Fred Klenner published his own clinical experience with vitamin C in the treatment of polio;[28][29][30][31] however, his work was not well received and no large clinical trials were ever performed. Surgical treatments such as nerve grafting, tendon lengthening, tendon transfers, and limb lengthening and shortening were used extensively during this time.[32][33] Patients with residual paralysis were treated with braces and taught to compensate for lost function with the help of calipers, crutches and wheelchairs. The use of devices such as rigid braces and body casts, which tended to cause muscle atrophy due to the limited movement of the user, was also touted as an effective treatment.[34] Massage and passive motion exercises were also used to treat polio victims.[33] Most of these treatments proved to be of little therapeutic value; however, several effective supportive measures for the treatment of polio did emerge during these decades, including the iron lung, an anti-polio antibody serum, and a treatment regimen developed by Sister Elizabeth Kenny.[35]

Iron lung

This iron lung was donated to the CDC by the family of Mr. Barton Hebert of Covington, Louisiana, who had used the device from the late 1950s until his death in 2003.

The first iron lung used in the treatment of polio victims was invented by Philip Drinker, Louis Agassiz Shaw, and James Wilson at Harvard, and tested on October 12, 1928 at Children's Hospital, Boston.[36] The original Drinker iron lung was powered by an electric motor attached to two vacuum cleaners, and worked by changing the pressure inside the machine. When the pressure is lowered, the chest cavity expands, trying to fill this partial vacuum.
When the pressure is raised, the chest cavity contracts. This expansion and contraction mimics the physiology of normal breathing. The design of the iron lung was subsequently improved by using a bellows attached directly to the machine, and John Haven Emerson modified the design to make production less expensive.[36] The Emerson Iron Lung was produced until 1970.[37] Other respiratory aids, such as the "rocking bed", were used in patients with less critical breathing difficulties.[32] During the polio epidemics, the iron lung saved many thousands of lives, but the machine was large, cumbersome and very expensive:[38] in the 1930s, an iron lung cost about $1,500 - about the same price as the average home.[39] The cost of running the machine was also prohibitive, as patients were encased in the metal chambers for months, years and sometimes for life:[37] even with an iron lung, the fatality rate for patients with bulbar polio exceeded 90%.[40] These drawbacks led to the development of more modern positive-pressure ventilators and the use of positive-pressure ventilation by tracheostomy.
Positive-pressure ventilators reduced mortality in bulbar patients from 90% to 20%.[41] In the Copenhagen epidemic of 1952, large numbers of patients were ventilated by hand ("bagged") by medical students and anyone else on hand, because of the large number of bulbar polio patients and the small number of ventilators available.[42]

Passive immunotherapy

In 1950 William Hammon at the University of Pittsburgh isolated serum, containing antibodies against poliovirus, from the blood of polio survivors.[35] The serum, Hammon believed, would prevent the spread of polio and reduce the severity of disease in polio patients.[43] Between September 1951 and July 1952, nearly 55,000 children were involved in a clinical trial of the anti-polio serum.[44] The results of the trial were promising; the serum was shown to be about 80% effective in preventing the development of paralytic poliomyelitis, and protection was shown to last for 5 weeks if given under tightly controlled circumstances.[45] The serum was also shown to reduce the severity of the disease in patients who developed polio.[35] The large-scale use of antibody serum to prevent and treat polio had a number of drawbacks, however: the immunity provided by the serum did not last long, the protection offered by the antibody was incomplete, re-injection was required during each epidemic outbreak, and the optimal time frame for administration was unknown.[43] The antibody serum was widely administered, but obtaining it was an expensive and time-consuming process, and the focus of the medical community soon shifted to the development of a polio vaccine.[46]

Kenny regimen

Early management practices for paralyzed muscles emphasized the need to rest the affected muscles and suggested that the application of splints would prevent the tightening of muscle, tendons, ligaments, or skin that would otherwise prevent normal movement.
Many paralyzed polio patients lay in plaster body casts for months at a time. This prolonged casting often resulted in atrophy of both affected and unaffected muscles.[3] In 1940, Sister Elizabeth Kenny, an Australian bush nurse, arrived in North America and challenged this approach to treatment. In treating polio cases in rural Australia between 1928 and 1940, Kenny had developed a form of physical therapy that - instead of immobilizing afflicted limbs - aimed to relieve pain and muscle spasm through the use of hot, moist packs, and advocated early activity and exercise to maximize the strength of unaffected muscle fibers and promote the neuroplastic recruitment of remaining nerve cells that had not been killed by the virus.[34] Sister Kenny later settled in Minnesota, where she established the Sister Kenny Rehabilitation Institute, beginning a world-wide crusade to advocate her system of treatment. Slowly, Kenny's ideas won acceptance, and by the mid-20th century they had become the hallmark of the treatment of paralytic polio.[32] In combination with antispasmodic medications to reduce muscular contractions, Kenny's therapy is still used in the treatment of paralytic poliomyelitis.

Vaccine development

Main article: Polio vaccine

People in Columbus, Georgia awaiting polio vaccination during the early days of the National Polio Immunization Program.

In 1935 Maurice Brodie, a research assistant at New York University, attempted to produce a polio vaccine, procured from virus in ground-up monkey spinal cords and killed by formaldehyde. Brodie first tested the vaccine on himself and several of his assistants. He then gave the vaccine to three thousand children.
Many developed allergic reactions, but none of the children developed immunity to polio.[47] During the late 1940s and early 1950s, a research group headed by John Enders at the Boston Children's Hospital successfully cultivated the poliovirus in human tissue. This significant breakthrough ultimately allowed for the development of the polio vaccines. Enders and his colleagues, Thomas H. Weller and Frederick C. Robbins, were recognized for their labors with the Nobel Prize in 1954.[48] Two vaccines are used throughout the world to combat polio. The first was developed by Jonas Salk, first tested in 1952, and announced to the world by Salk on April 12, 1955.[46] The Salk vaccine, or inactivated poliovirus vaccine (IPV), consists of an injected dose of killed poliovirus. In 1954, the vaccine was tested for its ability to prevent polio; the field trials involving the Salk vaccine would grow to be the largest medical experiment in history. Immediately following licensing, vaccination campaigns were launched; by 1957, following mass immunizations promoted by the March of Dimes, the annual number of polio cases in the United States had been dramatically reduced, from a peak of nearly 58,000 cases to just 5,600 cases.[11] Eight years after Salk's success, Albert Sabin developed an oral polio vaccine (OPV) using live but weakened (attenuated) virus.[49] Human trials of Sabin's vaccine began in 1957 and it was licensed in 1962. Following the development of the oral polio vaccine, a second wave of mass immunizations would lead to a further decline in the number of cases: by 1961, only 161 cases were recorded in the United States.[50] The last cases of paralytic poliomyelitis caused by endemic transmission of poliovirus in the United States were in 1979, when an outbreak occurred among the Amish in several Midwestern states.[51]

Legacy

Early in the 20th century polio would become the world's most feared disease.
The disease hit without warning, tended to strike white, affluent individuals, and required long quarantine periods during which parents were separated from children; it was impossible to tell who would get the disease and who would be spared.[11] The consequences of the disease left polio victims marked for life, leaving behind vivid images of wheelchairs, crutches, leg braces, breathing devices, and deformed limbs. However, polio changed not only the lives of those who survived it, but also effected profound cultural changes: the emergence of grassroots fund-raising campaigns that would revolutionize medical philanthropy, the rise of rehabilitation therapy and, through campaigns for the social and civil rights of the disabled, the modern disability rights movement, which polio survivors helped to spur. In addition, the occurrence of polio epidemics led to a number of public health innovations, one of the most widespread being the proliferation of "no spitting" ordinances in the United States.

Rehabilitation therapy

A physical therapist assists two polio-stricken children while they exercise their lower limbs.

Prior to the polio scares of the 20th century, most rehabilitation therapy was focused on treating injured soldiers returning from war.
The crippling effects of polio led to heightened awareness and public support of physical rehabilitation, and in response a number of rehabilitation centers specifically aimed at treating polio patients were opened, with the task of restoring and building the remaining strength of polio victims and teaching new, compensatory skills to large numbers of newly paralyzed individuals.[38] In 1926, Franklin Roosevelt, convinced of the benefits of hydrotherapy, bought a resort at Warm Springs, Georgia, where he founded the first modern rehabilitation center for the treatment of polio patients, which still operates as the Roosevelt Warm Springs Institute for Rehabilitation.[56] The cost of polio rehabilitation was often more than the average family could afford, and more than 80% of the nation's polio patients would receive funding through the March of Dimes.[53] Some families also received support through philanthropic organizations such as the Ancient Arabic Order of the Nobles of the Mystic Shrine fraternity, which in 1919 established a network of pediatric hospitals, the Shriners Hospitals for Children, to provide care free of charge for children with polio.[57]

Disability rights movement

As thousands of polio survivors with varying degrees of paralysis left the rehabilitation hospitals and went home, to school and to work, many were frustrated by the lack of accessibility and the discrimination they experienced in their communities. In the early 20th century the use of a wheelchair at home or out in public was a daunting prospect, as no public transportation system accommodated wheelchairs and most public buildings, including schools, were inaccessible to those with disabilities. Many children left disabled by polio were forced to attend separate institutions for "crippled children" or had to be carried up and down stairs.[56] As people who had been paralyzed by polio matured, they began to demand the right to participate in the mainstream of society.
Polio survivors were often in the forefront of the disability rights movement that emerged in the United States during the 1970s, and pushed for legislation such as the Rehabilitation Act of 1973, which protected qualified individuals from discrimination based on their disability, and the Americans with Disabilities Act of 1990.[56][58] Other political movements led by polio survivors include the Independent Living and Universal Design movements of the 1960s and 1970s.[59] Polio survivors are one of the largest disabled groups in the world. The World Health Organization estimates that there are 10 to 20 million polio survivors worldwide.[60] In 1977, the National Health Interview Survey reported that there were 254,000 persons living in the United States who had been paralyzed by polio.[61] According to local polio support groups and doctors, some 40,000 polio survivors with varying degrees of paralysis live in Germany, 30,000 in Japan, 24,000 in France, 16,000 in Australia, 12,000 in Canada and 12,000 in the United Kingdom.[60]

Sigmund Freud

From Wikipedia, the free encyclopedia

"Freud" redirects here. For other uses, see Freud (disambiguation).

Sigmund Freud (photograph by Max Halberstadt, 1921)
Born: Sigismund Schlomo Freud, 6 May 1856, Freiberg in Mähren, Moravia, Austrian Empire (now Příbor, Czech Republic)
Died: 23 September 1939 (aged 83), London, England
Nationality: Austrian
Fields: Neurology, psychotherapy, psychoanalysis
Institutions: University of Vienna
Alma mater: University of Vienna (MD, 1881)
Academic advisors: Franz Brentano, Ernst Brücke, Carl Claus
Known for: Psychoanalysis
Notable awards: Goethe Prize (1930); Foreign Member of the Royal Society[1]
Spouse: Martha Bernays (m. 1886–1939, his death)

Sigmund Freud (/frɔɪd/;[2] German pronunciation: [ˈziːkmʊnt ˈfʁɔʏ̯t]; born Sigismund Schlomo Freud; 6 May 1856 – 23 September 1939) was an Austrian neurologist, psychologist and philosopher, now known as the father of psychoanalysis.
Freud qualified as a doctor of medicine at the University of Vienna in 1881,[3] and then carried out research into cerebral palsy, aphasia and microscopic neuroanatomy at the Vienna General Hospital.[4] Upon completing his habilitation in 1885, he was appointed a docent in neuropathology in the same year, and became an affiliated professor (professor extraordinarius) in 1902.[5][6] In creating psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst,[7] Freud developed therapeutic techniques such as the use of free association, and discovered transference, establishing its central role in the analytic process. Freud's redefinition of sexuality to include its infantile forms led him to formulate the Oedipus complex as the central tenet of psychoanalytical theory. His analysis of dreams as wish-fulfillments provided him with models for the clinical analysis of symptom formation and the mechanisms of repression, as well as for the elaboration of his theory of the unconscious as an agency disruptive of conscious states of mind.[8] Freud postulated the existence of libido, an energy with which mental processes and structures are invested and which generates erotic attachments, and a death drive, the source of repetition, hate, aggression and neurotic guilt.[9] In his later work Freud developed a wide-ranging interpretation and critique of religion and culture. Psychoanalysis remains influential within psychotherapy, within some areas of psychiatry, and across the humanities. As such, it continues to generate extensive and highly contested debate with regard to its therapeutic efficacy, its scientific status, and whether it advances or is detrimental to the feminist cause.[10] Nonetheless, Freud's work has suffused contemporary Western thought and popular culture. In the words of W. H.
Auden's poetic tribute, by the time of Freud's death in 1939 he had become "a whole climate of opinion / under whom we conduct our different lives".[11]

Biography

Development of psychoanalysis

André Brouillet's 1887 A Clinical Lesson at the Salpêtrière, depicting a Charcot demonstration. Freud had a lithograph of this painting placed over the couch in his consulting rooms.[34]

In October 1885, Freud went to Paris on a fellowship to study with Jean-Martin Charcot, a renowned neurologist who was conducting scientific research into hypnosis. He was later to recall the experience of this stay as catalytic in turning him toward the practice of medical psychopathology and away from a less financially promising career in neurology research.[35] Charcot specialized in the study of hysteria and susceptibility to hypnosis, which he frequently demonstrated with patients on stage in front of an audience. Once he had set up in private practice in 1886, Freud began using hypnosis in his clinical work. He adopted the approach of his friend and collaborator, Josef Breuer, in a use of hypnosis which was different from the French methods he had studied in that it did not use suggestion. The treatment of one particular patient of Breuer's proved to be transformative for Freud's clinical practice. Described as "Anna O.", she was invited to talk about her symptoms while under hypnosis (she would coin the phrase "talking cure" for her treatment). In the course of talking in this way, her symptoms became reduced in severity as she retrieved memories of traumatic incidents associated with their onset. This led Freud to eventually establish in the course of his clinical practice that a more consistent and effective pattern of symptom relief could be achieved, without recourse to hypnosis, by encouraging patients to talk freely about whatever ideas or memories occurred to them.
In addition to this procedure, which he called "free association", Freud found that patients' dreams could be fruitfully analyzed to reveal the complex structuring of unconscious material and to demonstrate the psychic action of repression which underlay symptom formation. By 1896, Freud had abandoned hypnosis and was using the term "psychoanalysis" to refer to his new clinical method and the theories on which it was based.[36] Freud's development of these new theories took place during a period in which he experienced heart irregularities, disturbing dreams and periods of depression, a "neurasthenia" which he linked to the death of his father in 1896[37] and which prompted a "self-analysis" of his own dreams and memories of childhood. His explorations of his feelings of hostility to his father and rivalrous jealousy over his mother's affections led him to a fundamental revision of his theory of the origin of the neuroses.
On the basis of his early clinical work, Freud had postulated that unconscious memories of sexual molestation in early childhood were a necessary precondition for the psychoneuroses (hysteria and obsessional neurosis), a formulation now known as Freud's seduction theory.[38] In the light of his self-analysis, Freud abandoned the theory that every neurosis can be traced back to the effects of infantile sexual abuse, now arguing that infantile sexual scenarios still had a causative function, but it did not matter whether they were real or imagined and that in either case they became pathogenic only when acting as repressed memories.[39] This transition from the theory of infantile sexual trauma as a general explanation of how all neuroses originate to one that presupposes an autonomous infantile sexuality provided the basis for Freud's subsequent formulation of the theory of the Oedipus complex.[40] Freud described the evolution of his clinical method and set out his theory of the psychogenetic origins of hysteria, demonstrated in a
number of case histories, in Studies on Hysteria published in 1895 (co-authored with Josef Breuer). In 1899 he published The Interpretation of Dreams in which, following a critical review of existing theory, Freud gives detailed interpretations of his own and his patients' dreams in terms of wish-fulfillments made subject to the repression and censorship of the "dream work". He then sets out the theoretical model of mental structure (the unconscious, pre-conscious and conscious) on which this account is based. An abridged version, On Dreams, was published in 1901. In works which would win him a more general readership, Freud applied his theories outside the clinical setting in The Psychopathology of Everyday Life (1901) and Jokes and their Relation to the Unconscious (1905).[41] In Three Essays on the Theory of Sexuality, published in 1905, Freud elaborates his theory of infantile sexuality, describing its "polymorphous perverse" forms and the functioning of the "drives", to which it gives rise, in the formation of sexual identity.[42] The same year he published 'Fragment of an Analysis of a Case of Hysteria (Dora)' which became one of his more famous and controversial case studies.[43]
Ideas
Early work
Freud began his study of medicine at the University of Vienna in 1873.[93] He took almost nine years to complete his studies, due to his interest in neurophysiological research, specifically investigation of the sexual anatomy of eels and the physiology of the fish nervous system, and because of his interest in studying philosophy with Franz Brentano. He entered private practice in neurology for financial reasons, receiving his M.D. degree in 1881 at the age of 25.[94] Amongst his principal concerns in the 1880s was the anatomy of the brain, specifically the medulla oblongata.
He intervened in the important debates about aphasia with his monograph of 1891, Zur Auffassung der Aphasien, in which he coined the term agnosia and counselled against a too locationist view of the explanation of neurological deficits. Like his contemporary Eugen Bleuler, he emphasized brain function rather than brain structure. Freud was also an early researcher in the field of cerebral palsy, which was then known as "cerebral paralysis". He published several medical papers on the topic, and showed that the disease existed long before other researchers of the period began to notice and study it. He also suggested that William Little, the man who first identified cerebral palsy, was wrong about lack of oxygen during birth being a cause. Instead, he suggested that complications in birth were only a symptom. Freud hoped that his research would provide a solid scientific basis for his therapeutic technique. The goal of Freudian therapy, or psychoanalysis, was to bring repressed thoughts and feelings into consciousness in order to free the patient from suffering repetitive distorted emotions. Classically, the bringing of unconscious thoughts and feelings to consciousness is brought about by encouraging a patient to talk about dreams and engage in free association, in which patients report their thoughts without reservation and make no attempt to concentrate while doing so.[95] Another important element of psychoanalysis is transference, the process by which patients displace onto their analysts feelings and ideas which derive from previous figures in their lives. Transference was first seen as a regrettable phenomenon that interfered with the recovery of repressed memories and disturbed patients' objectivity, but by 1912, Freud had come to see it as an essential part of the therapeutic process.[96] The origin of Freud's early work with psychoanalysis can be linked to Josef Breuer.
Freud credited Breuer with opening the way to the discovery of the psychoanalytical method by his treatment of the case of Anna O. In November 1880, Breuer was called in to treat a highly intelligent 21-year-old woman (Bertha Pappenheim) for a persistent cough that he diagnosed as hysterical. He found that while nursing her dying father, she had developed a number of transitory symptoms, including visual disorders and paralysis and contractures of limbs, which he also diagnosed as hysterical. Breuer began to see his patient almost every day as the symptoms increased and became more persistent, and observed that she entered states of absence. He found that when, with his encouragement, she told fantasy stories in her evening states of absence her condition improved, and most of her symptoms had disappeared by April 1881. Following the death of her father in that month her condition deteriorated again. Breuer recorded that some of the symptoms eventually remitted spontaneously, and that full recovery was achieved by inducing her to recall events that had precipitated the occurrence of a specific symptom.[97] In the years immediately following Breuer's treatment, Anna O. spent three short periods in sanatoria with the diagnosis "hysteria" with "somatic symptoms",[98] and some authors have challenged Breuer's published account of a cure.[99][100][101] Richard Skues rejects this interpretation, which he sees as stemming from both Freudian and anti-psychoanalytical revisionism, that regards both Breuer's narrative of the case as unreliable and his treatment of Anna O. as a failure.[102]
The Unconscious
Main article: Unconscious mind
The concept of the unconscious was central to Freud's account of the mind. Freud believed that while poets and thinkers had long known of the existence of the unconscious, he had ensured that it received scientific recognition in the field of psychology. The concept made an informal appearance in Freud's writings.
The unconscious was first introduced in connection with the phenomenon of repression, to explain what happens to ideas that are repressed. Freud stated explicitly that the concept of the unconscious was based on the theory of repression. He postulated a cycle in which ideas are repressed, but remain in the mind, removed from consciousness yet operative, then reappear in consciousness under certain circumstances. The postulate was based upon the investigation of cases of traumatic hysteria, which revealed cases where the behavior of patients could not be explained without reference to ideas or thoughts of which they had no awareness. This fact, combined with the observation that such behavior could be artificially induced by hypnosis, in which ideas were inserted into people's minds, suggested that ideas were operative in the original cases, even though their subjects knew nothing of them. Freud, like Josef Breuer, found the hypothesis that hysterical manifestations were generated by ideas to be not only warranted, but given in observation. Disagreement between them arose when they attempted to give causal explanations of their data: Breuer favored a hypothesis of hypnoid states, while Freud postulated the mechanism of defense. Richard Wollheim comments that given the close correspondence between hysteria and the results of hypnosis, Breuer's hypothesis appears more plausible, and that it is only when repression is taken into account that Freud's hypothesis becomes preferable.[119] Freud originally allowed that repression might be a conscious process, but by the time he wrote his second paper on the "Neuro-Psychoses of Defence" (1896), he apparently believed that repression, which he referred to as "the psychical mechanism of (unconscious) defense", occurred on an unconscious level.
Freud further developed his theories about the unconscious in The Interpretation of Dreams (1899) and in Jokes and Their Relation to the Unconscious (1905), where he dealt with condensation and displacement as inherent characteristics of unconscious mental activity. Freud presented his first systematic statement of his hypotheses about unconscious mental processes in 1912, in response to an invitation from the London Society of Psychical Research to contribute to its Proceedings. In 1915, Freud expanded that statement into a more ambitious metapsychological paper, entitled "The Unconscious". In both these papers, when Freud tried to distinguish between his conception of the unconscious and those that predated psychoanalysis, he found it in his postulation of ideas that are simultaneously latent and operative.[119]
Dreams
Main article: Dream
Freud believed that the function of dreams is to preserve sleep by representing as fulfilled those wishes that would otherwise awaken the dreamer.[120] In Freud's theory dreams are instigated by the daily occurrences and thoughts of everyday life. His claim that they function as wish fulfillments is based on an account of the "dreamwork" in terms of a transformation of "secondary process" thought, governed by the rules of language and the reality principle, into the "primary process" of unconscious thought governed by the pleasure principle, wish gratification and the repressed sexual scenarios of childhood.[121] In order to preserve sleep the dreamwork disguises the repressed or "latent" content of the dream in an interplay of words and images which Freud describes in terms of condensation, displacement and distortion. This produces the "manifest content" of the dream as recounted in the dream narrative. For Freud an unpleasant manifest content may still represent the fulfilment of a wish on the level of the latent content.
In the clinical setting Freud encouraged free association to the dream's manifest content in order to facilitate access to its latent content. Freud believed interpreting dreams in this way could provide important insights into the formation of neurotic symptoms and contribute to the mitigation of their pathological effects.[122]
Id, ego and super-ego
Main article: Id, ego and super-ego
Freud proposed that the human psyche could be divided into three parts: Id, ego and super-ego. Freud discussed this model in the 1920 essay Beyond the Pleasure Principle, and fully elaborated upon it in The Ego and the Id (1923), in which he developed it as an alternative to his previous topographic schema (i.e., conscious, unconscious and preconscious). The id is the completely unconscious, impulsive, childlike portion of the psyche that operates on the "pleasure principle" and is the source of basic impulses and drives; it seeks immediate pleasure and gratification.[127] Freud acknowledged that his use of the term Id (das Es, "the It") derives from the writings of Georg Groddeck.[128] The super-ego is the moral component of the psyche, which takes no account of special circumstances in which the morally right thing may not be right for a given situation. The rational ego attempts to exact a balance between the impractical hedonism of the id and the equally impractical moralism of the super-ego; it is the part of the psyche that is usually reflected most directly in a person's actions. When overburdened or threatened by its tasks, it may employ defence mechanisms including denial, repression, undoing, rationalization, and displacement. This concept is usually represented by the "Iceberg Model".[129] This model represents the roles the Id, Ego, and Super Ego play in relation to conscious and unconscious thought.
Freud compared the relationship between the ego and the id to that between a charioteer and his horses: the horses provide the energy and drive, while the charioteer provides direction.[127]
Life and death drives
Main articles: Libido and Death drive
Freud believed that people are driven by two conflicting central desires: the life drive (libido or Eros) (survival, propagation, hunger, thirst, and sex) and the death drive. The death drive was also termed "Thanatos", although Freud did not use that term; "Thanatos" was introduced in this context by Paul Federn.[130] Freud hypothesized that libido is a form of mental energy with which processes, structures and object-representations are invested.[131] Prior to the war, Freud believed, fiction had constituted a different mode of relation to death, a place of compensation in which "the condition for reconciling ourselves to death is fulfilled, namely, if beneath all vicissitudes of life a permanent life still remains to us".[132] In Beyond the Pleasure Principle, Freud inferred the existence of the death instinct. Its premise was a regulatory principle that has been described as "the principle of psychic inertia", "the Nirvana principle", and "the conservatism of instinct". Its background was Freud's earlier Project for a Scientific Psychology, where he had defined the principle governing the mental apparatus as its tendency to divest itself of quantity or to reduce tension to zero. Freud had been obliged to abandon that definition, since it proved adequate only to the most rudimentary kinds of mental functioning, and replaced the idea that the apparatus tends toward a level of zero tension with the idea that it tends toward a minimum level of tension.[133] Freud in effect readopted the original definition in Beyond the Pleasure Principle, this time applying it to a different principle.
He asserted that on certain occasions the mind acts as though it could eliminate tension entirely, or in effect reduce itself to a state of extinction; his key evidence for this was the existence of the compulsion to repeat. Examples of such repetition included the dream life of traumatic neurotics and children's play. In the phenomenon of repetition, Freud saw a psychic trend to work over earlier impressions, to master them and derive pleasure from them, a trend that was prior to the pleasure principle but not opposed to it. In addition to that trend, there was also a principle at work that was opposed to, and thus "beyond" the pleasure principle. If repetition is a necessary element in the binding of energy or adaptation, when carried to inordinate lengths it becomes a means of abandoning adaptations and reinstating earlier or less evolved psychic positions. By combining this idea with the hypothesis that all repetition is a form of discharge, Freud reached the conclusion that the compulsion to repeat is an effort to restore a state that is both historically primitive and marked by the total draining of energy: death.[133]
Legacy
Psychotherapy
Though not the first methodology in the practice of individual verbal psychotherapy,[150] Freud's psychoanalytic system came to dominate the field from early in the twentieth century, forming the basis for many later variants. While these systems have adopted different theories and techniques, all have followed Freud by attempting to effect behavioral change through having patients talk about their difficulties.[7] Psychoanalysis itself has, according to psychoanalyst Joel Kovel, declined as a distinct therapeutic practice, despite its pervasive influence on psychotherapy.[151]
Science
Research projects designed to test Freud's theories empirically have led to a vast literature on the topic.[156] Seymour Fisher and Roger P. Greenberg concluded in 1977 that some of Freud's concepts were supported by empirical evidence.
Their analysis of research literature supported Freud's concepts of oral and anal personality constellations, his account of the role of Oedipal factors in certain aspects of male personality functioning, his formulations about the relatively greater concern about loss of love in women's as compared to men's personality economy, and his views about the instigating effects of homosexual anxieties on the formation of paranoid delusions. They also found limited and equivocal support for Freud's theories about the development of homosexuality. They found that several of Freud's other theories, including his portrayal of dreams as primarily containers of secret, unconscious wishes, as well as some of his views about the psychodynamics of women, were either not supported or contradicted by research. Reviewing the issues again in 1996, they concluded that much experimental data relevant to Freud's work exists, and supports some of his major ideas and theories.[157] Fisher and Greenberg's similar conclusions in their more extensive earlier volume on experimental studies[158] have been strongly criticised for alleged methodological deficiencies by Paul Kline, who writes that they "accept results at their face value with almost no consideration of methodological adequacy",[159] and by Edward Erwin.[160]
Philosophy
Psychoanalysis has been interpreted as both radical and conservative. By the 1940s, it had come to be seen as conservative by the European and American intellectual community. Critics outside the psychoanalytic movement, whether on the political left or right, saw Freud as a conservative. Fromm had argued that several aspects of psychoanalytic theory served the interests of political reaction in his The Fear of Freedom (1942), an assessment confirmed by sympathetic writers on the right.
Philip Rieff's Freud: The Mind of the Moralist (1959) portrayed Freud as a man who urged men to make the best of an inevitably unhappy fate, and admirable for that reason. Three books published in the 1950s challenged the then prevailing interpretation of Freud as a conservative: Herbert Marcuse's Eros and Civilization (1955), Lionel Trilling's Freud and the Crisis of Our Culture, and Norman O. Brown's Life Against Death (1959).[182] Eros and Civilization helped make the idea that Freud and Marx were addressing similar questions from different perspectives credible to the left. Marcuse criticized neo-Freudian revisionism for discarding seemingly pessimistic theories such as the death instinct, arguing that they could be turned in a utopian direction. Freud's theories also influenced the Frankfurt School and critical theory as a whole.[183] Freud has been compared to Marx by Reich, who saw Freud's importance for psychiatry as parallel to that of Marx for economics,[184] and by Paul Robinson, who sees Freud as a revolutionary whose contributions to twentieth century thought are comparable in importance to Marx's contributions to nineteenth century thought.[185] Fromm calls Freud, Marx and Einstein the "architects of the modern age", but rejects the idea that Marx and Freud were equally significant, arguing that Marx was both far more historically important and a finer thinker. Fromm nevertheless credits Freud with permanently changing the way human nature is understood.[186] Gilles Deleuze and Félix Guattari write in Anti-Oedipus (1972) that psychoanalysis resembles the Russian Revolution in that it became corrupted almost from the beginning. They believe this began with Freud's development of the theory of the Oedipus complex, which they see as idealist.[187] Jean-Paul Sartre critiques Freud's theory of the unconscious in Being and Nothingness, claiming that consciousness is essentially self-conscious. 
Sartre also attempts to adapt some of Freud's ideas to his own account of human life, and thereby develop an "existential psychoanalysis" in which causal categories are replaced by teleological categories.[188] Maurice Merleau-Ponty considers Freud to be one of the anticipators of phenomenology,[189] while Theodor W. Adorno considers Edmund Husserl, the founder of phenomenology, to be Freud's philosophical opposite, writing that Husserl's polemic against psychologism could have been directed against psychoanalysis.[190] Paul Ricœur sees Freud as a master of the "school of suspicion", alongside Marx and Nietzsche.[191] Ricœur and Jürgen Habermas have helped create a "hermeneutic version of Freud", one which "claimed him as the most significant progenitor of the shift from an objectifying, empiricist understanding of the human realm to one stressing subjectivity and interpretation."[192] Louis Althusser drew on Freud's concept of overdetermination for his reinterpretation of Marx's Capital.[193] Jean-François Lyotard developed a theory of the unconscious that reverses Freud's account of the dream-work: for Lyotard, the unconscious is a force whose intensity is manifest via disfiguration rather than condensation.[194] Jacques Derrida finds Freud to be both a late figure in the history of western metaphysics and, with Nietzsche and Heidegger, a precursor of his own brand of radicalism.[195] Several scholars see Freud as parallel to Plato, writing that they hold nearly the same theory of dreams and have similar theories of the tripartite structure of the human soul or personality, even if the hierarchy between the parts of the soul is almost reversed.[196][197] Ernest Gellner argues that Freud's theories are an inversion of Plato's. Whereas Plato saw a hierarchy inherent in the nature of reality, and relied upon it to validate norms, Freud was a naturalist who could not follow such an approach.
Both men's theories drew a parallel between the structure of the human mind and that of society, but while Plato wanted to strengthen the super-ego, which corresponded to the aristocracy, Freud wanted to strengthen the ego, which corresponded to the middle class.[198] Michel Foucault writes that Plato and Freud meant different things when they claimed that dreams fulfill desires, since the meaning of a statement depends on its relation to other propositions.[199] Paul Vitz compares Freudian psychoanalysis to Thomism, noting St. Thomas's belief in the existence of an "unconscious consciousness" and his "frequent use of the word and concept 'libido' - sometimes in a more specific sense than Freud, but always in a manner in agreement with the Freudian use." Vitz suggests that Freud may have been unaware that his theory of the unconscious was reminiscent of Aquinas.[27] Bernard Williams writes that there has been hope that some psychoanalytical theories may "support some ethical conception as a necessary part of human happiness", but that in some cases the theories appear to support such hopes because they themselves involve ethical thought. In his view, while such theories may be better as channels of individual help because of their ethical basis, it disqualifies them from providing a basis for ethics.[200]
Escape from Nazism
In 1930 Freud was awarded the Goethe Prize in recognition of his contributions to psychology and to German literary culture. In January 1933, the Nazis took control of Germany, and Freud's books were prominent among those they burned and destroyed. Freud quipped: "What progress we are making. In the Middle Ages they would have burned me.
Now, they are content with burning my books."[79] Freud continued to maintain his optimistic underestimation of the growing Nazi threat and remained determined to stay in Vienna, even following the Anschluss of 13 March 1938 in which Nazi Germany annexed Austria, and the outbursts of violent anti-Semitism that ensued.[80] Ernest Jones, the then president of the International Psychoanalytic Association (IPA), flew into Vienna from London via Prague on 15 March determined to get Freud to change his mind and seek exile in Britain. This prospect and the shock of the detention and interrogation of Anna Freud by the Gestapo finally convinced Freud it was time to leave Austria.[80] Jones left for London the following week with a list provided by Freud of the party of émigrés for whom immigration permits would be required. Back in London, Jones used his personal acquaintance with the Home Secretary, Sir Samuel Hoare, to expedite the granting of permits. There were seventeen in all and work permits were provided where relevant. Jones also used his influence in scientific circles, persuading the president of the Royal Society, Sir William Bragg, to write to the Foreign Secretary Lord Halifax, requesting to good effect that diplomatic pressure be applied in Berlin and Vienna on Freud's behalf. Freud also had support from American diplomats, notably his ex-patient and American ambassador to France, William Bullitt.[81] The departure from Vienna began in stages throughout April and May 1938. Freud's grandson Ernst Halberstadt and Freud's son Martin's wife and children left for Paris in April.
Freud's sister-in-law, Minna Bernays, left for London on 5 May, Martin Freud the following week and Freud's daughter Mathilde and her husband, Robert Hollitscher, on 24 May.[82] By the end of the month, arrangements for Freud's own departure for London had become stalled, mired in a legally tortuous and financially extortionate process of negotiation with the Nazi authorities. The Nazi-appointed Kommissar put in charge of his assets and those of the IPA proved to be sympathetic to Freud's plight. Anton Sauerwald had studied chemistry at Vienna University under Professor Josef Herzig, an old friend of Freud's, and evidently retained, notwithstanding his Nazi Party allegiance, a respect for Freud's professional standing. Expected to disclose details of all Freud's bank accounts to his superiors and to follow their instructions to destroy the historic library of books housed in the offices of the IPA, in the event Sauerwald did neither, removing evidence of Freud's foreign bank accounts to his own safe-keeping and arranging the storage of the IPA library in the Austrian National Library where they remained until the end of the war.[83] Though Sauerwald's intervention lessened the financial burden of the "flight" tax on Freud's declared assets, other substantial charges were levied in relation to the debts of the IPA and the valuable collection of antiquities Freud possessed. Unable to access his own accounts, Freud turned to Princess Marie Bonaparte, the most eminent and wealthy of his French followers, who had travelled to Vienna to offer her support and it was she who made the necessary funds available.[84] This allowed Sauerwald to sign the necessary exit visas for Freud, his wife Martha and daughter Anna. They left Vienna on the Orient Express on 4 June, accompanied by their household staff and a doctor, arriving in Paris the following day where they stayed as guests of Princess Bonaparte before travelling overnight to London arriving at Victoria Station on 6 June.
Many famous names were soon to call on Freud to pay their respects, notably Salvador Dalí, Stefan Zweig, Leonard Woolf, Virginia Woolf and H.G. Wells. Representatives of the Royal Society called with the Society's Charter for Freud to sign himself into membership. Princess Bonaparte arrived towards the end of June to discuss the fate of Freud's four elderly sisters left behind in Vienna. Her subsequent attempts to get them exit visas failed and they would all die in Nazi concentration camps.[85] In early 1939 Anton Sauerwald arrived to see Freud, ostensibly to discuss matters relating to the assets of the IPA. He was able to do Freud one last favour. He returned to Vienna to drive Freud's Viennese cancer specialist, Hans Pichler, to London to operate on the worsening condition of Freud's cancerous jaw.[86] Sauerwald was tried and imprisoned in 1945 by an Austrian court for his activities as a Nazi Party official. Responding to a plea from his wife, Anna Freud wrote to confirm that Sauerwald "used his office as our appointed commissar in such a manner as to protect my father". Her intervention helped secure his release from jail in 1947.[87] In the Freuds' new home – 20 Maresfield Gardens, Hampstead, North London – Freud's Vienna consulting room was recreated in faithful detail. He continued to see patients there until the terminal stages of his illness. He also worked on his last books, Moses and Monotheism, published in German in 1938 and in English the following year[88] and the uncompleted Outline of Psychoanalysis which was published posthumously.
Carl Jung
From Wikipedia, the free encyclopedia
Carl Gustav Jung (Jung in 1910)
Born: 26 July 1875, Kesswil, Thurgau, Switzerland
Died: 6 June 1961 (aged 85), Küsnacht, Zürich, Switzerland
Residence: Switzerland
Citizenship: Swiss
Nationality: Swiss
Fields: Psychiatry, psychology, psychotherapy, analytical psychology
Institutions: Burghölzli, Swiss Army (as a commissioned officer in World War I)
Alma mater: University of Basel
Doctoral advisor: Eugen Bleuler
Known for: Analytical psychology, typology, the collective unconscious, the psychoanalytical complex, the archetype, anima and animus, synchronicity
Influences: Eugen Bleuler, Freud, Nietzsche,[1] Schopenhauer[1]
Influenced: Joseph Campbell, Hermann Hesse, Erich Neumann, Ross Nichols
Spouse: Emma Jung
Carl Gustav Jung (/jʊŋ/; German: [ˈkarl ˈɡʊstaf jʊŋ]; 26 July 1875 – 6 June 1961), often referred to as C. G. Jung, was a Swiss psychiatrist and psychotherapist who founded analytical psychology.[2] His work has been influential not only in psychiatry but also in philosophy, anthropology, archaeology, literature, and religious studies. He was a prolific writer, though many of his works were not published until after his death. The central concept of analytical psychology is individuation—the psychological process of integrating the opposites, including the conscious with the unconscious, while still maintaining their relative autonomy.[3] Jung considered individuation to be the central process of human development.[4] Jung created some of the best known psychological concepts, including the archetype, the collective unconscious, the complex, and extraversion and introversion. The Myers-Briggs Type Indicator (MBTI), a popular psychometric instrument, and the concepts of socionics were developed from Jung's theory of psychological types.
Jung saw the human psyche as "by nature religious"[5] and made this religiousness the focus of his explorations.[6] Jung is one of the best known contemporary contributors to dream analysis and symbolization. Though he was a practising clinician and considered himself to be a scientist,[7] much of his life's work was spent exploring tangential areas such as Eastern and Western philosophy, alchemy, astrology, and sociology, as well as literature and the arts. Jung's interest in philosophy and the occult led many to view him as a mystic, although his ambition was to be seen as a man of science.[7] His influence on popular psychology, the "psychologization of religion",[8] spirituality and the New Age movement has been immense.[9]
University studies and early career
Jung did not plan to study psychiatry, since it was not considered prestigious at the time. But, studying a psychiatric textbook, he became very excited when he discovered that psychoses are personality diseases. His interest was immediately captured—it combined the biological and the spiritual, exactly what he was searching for.[20] In 1895 Jung began studying medicine at the University of Basel. In 1900 Jung began working at the Burghölzli psychiatric hospital in Zürich with Eugen Bleuler. Bleuler was already in communication with the Austrian neurologist Sigmund Freud. Jung's dissertation, published in 1903, was titled On the Psychology and Pathology of So-Called Occult Phenomena. In 1906 he published Studies in Word Association, and later sent a copy of this book to Freud. Eventually a close friendship and a strong professional association developed between the elder Freud and Jung, which left a sizeable trove of correspondence. For six years they cooperated in their work.[21] In 1912, however, Jung published Wandlungen und Symbole der Libido (known in English as Psychology of the Unconscious), which made manifest the developing theoretical divergence between the two.
Consequently, their personal and professional relationship fractured, each stating that the other was unable to admit he could possibly be wrong. After the culminating break in 1913, Jung went through a difficult and pivotal psychological transformation, exacerbated by the outbreak of the First World War. Henri Ellenberger called Jung's intense experience a "creative illness" and compared it to Freud's own period of what he called neurasthenia and hysteria.

Wartime army service

During World War I Jung was drafted as an army doctor and soon made commandant of an internment camp for British officers and soldiers (Swiss neutrality obliged the Swiss to intern personnel from either side of the conflict who crossed their frontier to evade capture). Jung worked to improve the conditions of soldiers stranded in neutral territory and encouraged them to attend university courses.[22]

Meeting Freud

Jung was thirty when he sent his Studies in Word Association to Sigmund Freud in Vienna in 1906. The two men met for the first time the following year, and Jung recalled the discussion between himself and Freud as interminable: they talked almost unceasingly for thirteen hours.[25] Six months later the then 50-year-old Freud sent a collection of his latest published essays to Jung in Zurich. This marked the beginning of an intense correspondence and collaboration that lasted six years and ended in May 1913.[citation needed] At this time Jung resigned as chairman of the International Psychoanalytical Association, to which he had been elected with Freud's support. Jung and Freud influenced each other during the intellectually formative years of Jung's life. Freud called Jung "his adopted eldest son, his crown prince and successor". In 1906 psychology as a science was still in its early stages.
Jung, who had become interested in psychiatry as a student by reading Psychopathia Sexualis by Richard von Krafft-Ebing, a professor in Vienna, by then worked as a doctor under the psychiatrist Eugen Bleuler at Burghölzli. He became familiar with Freud's idea of the unconscious through reading Freud's The Interpretation of Dreams (1899) and became a proponent of the new "psycho-analysis". At the time, Freud needed collaborators and pupils to validate and spread his ideas. Burghölzli was a renowned psychiatric clinic in Zurich, and Jung's research had already gained him international recognition.

Jung de-emphasized the importance of sexual development and focused on the collective unconscious: the part of the unconscious that contains memories and ideas that he believed were inherited from ancestors. While he did think that libido was an important source for personal growth, unlike Freud, Jung did not believe that libido alone was responsible for the formation of the core personality.[26]

Last meetings with Freud

In November 1912, Jung and Freud met in Munich for a meeting among prominent colleagues to discuss psychoanalytical journals.[28] At a talk about a new psychoanalytic essay on Amenhotep IV, Jung expressed his views on how it related to actual conflicts in the psychoanalytic movement. While Jung spoke, Freud suddenly fainted and Jung carried him to a couch.

Jung and Freud met personally for the last time in September 1913, at the Fourth International Psychoanalytical Congress in Munich. Jung gave a talk on psychological types, the introverted and extraverted type in analytical psychology. This constituted the introduction of some of the key concepts which came to distinguish Jung's work from Freud's over the next half century.

Midlife isolation

It was the publication of Jung's book Psychology of the Unconscious in 1912 that led to the break with Freud.
Letters they exchanged show Freud's refusal to consider Jung's ideas. This rejection caused what Jung described in his autobiography Memories, Dreams, Reflections as a "resounding censure". Everyone he knew dropped away except for two of his colleagues. Jung described his book as "... an attempt, only partially successful, to create a wider setting for medical psychology and to bring the whole of the psychic phenomena within its purview". (The book was later revised and retitled Symbols of Transformation in 1952.)

Red Book

In 1913, at the age of thirty-eight, Jung experienced a horrible "confrontation with the unconscious". He saw visions and heard voices. He worried at times that he was "menaced by a psychosis" or was "doing a schizophrenia". He decided that it was valuable experience and, in private, he induced hallucinations or, in his words, "active imaginations". He recorded everything he felt in small journals. Jung began to transcribe his notes into a large red leather-bound book, on which he worked intermittently for sixteen years.[13]

Jung left no posthumous instructions about the final disposition of what he called the "Red Book". His family eventually moved it into a bank vault in 1984. Sonu Shamdasani, a historian from London, spent three years trying to persuade Jung's heirs to have it published, but they declined every inquiry. As of mid-September 2009, fewer than two dozen people had seen it. Ulrich Hoerni, Jung's grandson who manages the Jung archives, decided to publish it to raise the additional funds needed when the Philemon Foundation was founded.[13] In 2007, two technicians for DigitalFusion, working with the publisher, W. W. Norton & Company, scanned the manuscript with a 10,200-pixel scanner. It was published on 7 October 2009, in German with a "separate English translation along with Shamdasani's introduction and footnotes" at the back of the book, according to Sara Corbett for The New York Times.
She wrote, "The book is bombastic, baroque and like so much else about Carl Jung, a willful oddity, synched with an antediluvian and mystical reality."[13] The Rubin Museum of Art in New York City displayed the original Red Book journal, as well as some of Jung's original small journals, from 7 October 2009, to 15 February 2010.[30] According to them, "During the period in which he worked on this book Jung developed his principal theories of archetypes, collective unconscious, and the process of individuation." Two-thirds of the pages bear Jung's illuminations of the text.[30] Theories His theories include: The concept of introversion and extraversion (although he did not define these terms as they are popularly defined today).[39] The concept of the complex, a grouping of interrelated unconscious elements. The concept of the collective unconscious, the primordial realm of archetypes, which manifests in all people. Synchronicity as a mode of relationship that is not causal, an idea that has influenced Wolfgang Pauli (with whom he developed the notion of unus mundus in connection with the notion of non-locality) and some other physicists.[40] Introversion and extraversion In Jung’s Psychological Types, he theorizes that each person falls into one of two categories, the introvert and the extravert. These two psychological types Jung compares to the ancient archetypes, Apollo and Dionysus. The introvert is likened with Apollo, who shines light on understanding. The introvert is focused on the internal world of reflection, dreaming and vision. Thoughtful and insightful, the introvert can sometime be uninterested in joining the activities of others. Page 46 of 52 The extravert is associated with Dionysus, interested in joining the activities of the world. The extravert is focused on the outside world of objects, sensory perception and action. 
Energetic and lively, the extravert may lose their sense of self in the intoxication of Dionysian pursuits.[41]

Divergence from Freud

Jung's primary disagreement with Freud stemmed from their differing concepts of the unconscious.[42] Jung saw Freud's theory of the unconscious as incomplete and unnecessarily negative. According to Jung, Freud conceived the unconscious solely as a repository of repressed emotions and desires. Jung agreed with Freud's model of the unconscious, what Jung called the "personal unconscious", but he also proposed the existence of a second, deeper form of the unconscious underlying the personal one. This was the collective unconscious, where the archetypes themselves resided, represented in mythology by a lake or other body of water, and in some cases a jug or other container. Freud had actually mentioned a collective level of psychic functioning but saw it primarily as an appendix to the rest of the psyche.

Individuation

Jung considered individuation, a psychological process of integrating the opposites, including the conscious with the unconscious, while still maintaining their relative autonomy, necessary for a person to become whole.[4] Individuation is a process of transformation whereby the personal and collective unconscious is brought into consciousness (by means of dreams, active imagination or free association, to take some examples) to be assimilated into the whole personality. It is a completely natural process necessary for the integration of the psyche to take place.[43] Besides achieving physical and mental health,[43] people who have advanced towards individuation tend to be harmonious, mature and responsible.
They embody humane values such as freedom and justice and have a good understanding of the workings of human nature and the universe.[4]

Persona

In his psychological theory – which is not necessarily linked to a particular theory of social structure – the persona appears as a consciously created personality or identity fashioned out of part of the collective psyche through socialization, acculturation and experience.[44] Jung applied the term persona explicitly because, in Latin, it means both personality and the masks worn by Roman actors of the classical period, expressive of the individual roles played. The persona, he argues, is a mask for the "collective psyche", a mask that 'pretends' individuality, so that both self and others believe in that identity, even if it is really no more than a well-played role through which the collective psyche is expressed. Jung regarded the persona-mask as a complicated system which mediates between individual consciousness and the social community: it is "a compromise between the individual and society as to what a man should appear to be".[45] But he also makes it quite explicit that it is, in substance, a character mask in the classical sense known to theatre, with its double function: both intended to make a certain impression on others and to hide (part of) the true nature of the individual.[46] The therapist then aims to assist the individuation process through which the client (re-)gains his "own self" – by liberating the self both from the deceptive cover of the persona and from the power of unconscious impulses.
Jung's theory has become enormously influential in management theory: not just because managers and executives have to create an appropriate "management persona" (a corporate mask) and a persuasive identity,[47] but also because they have to evaluate what sort of people the workers are, in order to manage them (for example, using personality tests and peer reviews).[48]

Spirituality

Jung's work on himself and his patients convinced him that life has a spiritual purpose beyond material goals. Our main task, he believed, is to discover and fulfill our deep innate potential. Based on his study of Christianity, Hinduism, Buddhism, Gnosticism, Taoism, and other traditions, Jung believed that this journey of transformation, which he called individuation, is at the mystical heart of all religions. It is a journey to meet the self and at the same time to meet the Divine. Unlike Freud's objectivist worldview, Jung's pantheism may have led him to believe that spiritual experience was essential to our well-being, as he specifically identifies individual human life with the universe as a whole.[49][50] Jung's ideas on religion provided a counterbalance to the Freudian scepticism of religion. Jung's idea of religion as a practical road to individuation has remained quite popular, and is still treated in modern textbooks on the psychology of religion, though his ideas have also been criticized.[51]

Alchemy

Jung's work and writings from the 1940s onwards focused on alchemy. In 1944 Jung published Psychology and Alchemy, in which he analyzed the alchemical symbols and showed a direct relationship to the psychoanalytical process.[b] He argued that the alchemical process was the transformation of the impure soul (lead) to the perfected soul (gold), and a metaphor for the individuation process.[20] In 1963 Mysterium Coniunctionis first appeared in English as part of The Collected Works of C. G. Jung.
Mysterium Coniunctionis was Jung's last book; it focused on the "Mysterium Coniunctionis" archetype, known as the sacred marriage between sun and moon. Jung argued that the stages of the alchemists (the blackening, the whitening, the reddening and the yellowing) could be taken as symbolic of individuation, his favourite term for personal growth.

Alcoholics Anonymous

Jung recommended spirituality as a cure for alcoholism, and he is considered to have had an indirect role in establishing Alcoholics Anonymous.[52] Jung once treated an American patient (Rowland Hazard III) suffering from chronic alcoholism. After working with the patient for some time and achieving no significant progress, Jung told the man that his alcoholic condition was near to hopeless, save only the possibility of a spiritual experience. Jung noted that occasionally such experiences had been known to reform alcoholics where all else had failed.

Hazard took Jung's advice seriously and set about seeking a personal spiritual experience. He returned home to the United States and joined a First-Century Christian evangelical movement known as the Oxford Group (later known as Moral Re-Armament). He also told other alcoholics what Jung had told him about the importance of a spiritual experience. One of the alcoholics he brought into the Oxford Group was Ebby Thacher, a long-time friend and drinking buddy of Bill Wilson, later co-founder of Alcoholics Anonymous (AA). Thacher told Wilson about the Oxford Group, and through them Wilson became aware of Hazard's experience with Jung. The influence of Jung thus indirectly found its way into the formation of Alcoholics Anonymous, the original twelve-step program, and from there into the whole twelve-step recovery movement, although AA as a whole is not Jungian and Jung had no role in the formation of that approach or the twelve steps.
The above claims are documented in the letters of Jung and Bill Wilson, excerpts of which can be found in Pass It On, published by Alcoholics Anonymous.[53] Although the detail of this story is disputed by some historians, Jung himself discussed an Oxford Group member, who may have been the same person, in talks given around 1940. The remarks were distributed privately in transcript form, from shorthand taken by an attendee (Jung reportedly approved the transcript), and later recorded in Volume 18 of his Collected Works, The Symbolic Life: "For instance, when a member of the Oxford Group comes to me in order to get treatment, I say, 'You are in the Oxford Group; so long as you are there, you settle your affair with the Oxford Group. I can't do it better than Jesus.'" Jung goes on to state that he has seen similar cures among Roman Catholics.[54]

Art therapy

Jung proposed that art can be used to alleviate or contain feelings of trauma, fear, or anxiety and also to repair, restore and heal.[16] In his work with patients and in his own personal explorations, Jung wrote that art expression and images found in dreams could be helpful in recovering from trauma and emotional distress. He often drew, painted, or made objects and constructions at times of emotional distress, which he recognized as more than recreational.[16]

Political views

Views on the state

Jung stressed the importance of individual rights in a person's relation to the state and society.
He saw that the state was treated as "a quasi-animate personality from whom everything is expected" but that this personality was "only camouflage for those individuals who know how to manipulate it",[55] and referred to the state as a form of slavery.[56][57][58][59] He also thought that the state "swallowed up [people's] religious forces",[60] and therefore that the state had "taken the place of God", making it comparable to a religion in which "state slavery is a form of worship".[58] Jung observed that "stage acts of [the] state" are comparable to religious displays: "Brass bands, flags, banners, parades and monster demonstrations are no different in principle from ecclesiastical processions, cannonades and fire to scare off demons".[61] From Jung's perspective, this replacement of God with the state in a mass society led to the dislocation of the religious drive and resulted in the same fanaticism as the church-states of the Dark Ages, wherein the more the state is 'worshipped', the more freedom and morality are suppressed;[62] this ultimately leaves the individual psychically undeveloped with extreme feelings of marginalization.[63]

Germany, 1933 to 1939

Jung had many friends and respected colleagues who were Jewish, and he maintained relations with them through the 1930s, when anti-Semitism in Germany and other European nations was on the rise. However, until 1939 he also maintained professional relations with psychotherapists in Germany who had declared their support for the Nazi regime, and there were allegations that he himself was a Nazi sympathizer. In 1933, after the Nazis gained power in Germany, Jung took part in the restructuring of the General Medical Society for Psychotherapy (Allgemeine Ärztliche Gesellschaft für Psychotherapie), a German-based professional body with an international membership. The society was reorganized into two distinct bodies:
1. A strictly German body, the Deutsche Allgemeine Ärztliche Gesellschaft für Psychotherapie, led by Matthias Göring, an Adlerian psychotherapist[64] and a cousin of the prominent Nazi Hermann Göring;
2. The International General Medical Society for Psychotherapy, led by Jung.

The German body was to be affiliated to the international society, as were new national societies being set up in Switzerland and elsewhere.[65] The International Society's constitution permitted individual doctors to join it directly, rather than through one of the national affiliated societies, a provision to which Jung drew attention in a circular in 1934.[66] This implied that German Jewish doctors could maintain their professional status as individual members of the international body, even though they were excluded from the German affiliate, as well as from other German medical societies operating under the Nazis.[67]

As leader of the international body, Jung assumed overall responsibility for its publication, the Zentralblatt für Psychotherapie. In 1933, this journal published a statement endorsing Nazi positions[68] and Hitler's book Mein Kampf.[69] In 1934, Jung wrote in a Swiss publication, the Neue Zürcher Zeitung, that he experienced "great surprise and disappointment"[70] when the Zentralblatt associated his name with the pro-Nazi statement. Jung went on to say "the main point is to get a young and insecure science into a place of safety during an earthquake".[71] He did not end his relationship with the Zentralblatt at this time, but he did arrange the appointment of a new managing editor, Carl Alfred Meier of Switzerland.
For the next few years, the Zentralblatt under Jung and Meier maintained a position distinct from that of the Nazis, in that it continued to acknowledge contributions of Jewish doctors to psychotherapy.[72] In the face of energetic German attempts to Nazify the international body, Jung resigned from its presidency in 1939,[72] the year the Second World War started.

Response to Nazism

Jung's interest in European mythology and folk psychology led to accusations of Nazi sympathies, since the Nazis shared the same interest.[73][74][75] He became, however, aware of the negative impact of these similarities:

Jung clearly identifies himself with the spirit of German Volkstumsbewegung throughout this period and well into the 1920s and 1930s, until the horrors of Nazism finally compelled him to reframe these neopagan metaphors in a negative light in his 1936 essay on Wotan.[76]

There are writings showing that Jung's sympathies were against, rather than for, Nazism.[c] In his 1936 essay "Wotan", Jung described the influence of Hitler on Germany: "one man who is obviously 'possessed' has infected a whole nation to such an extent that everything is set in motion and has started rolling on its course towards perdition."[77][78] Jung would later say:

Hitler seemed like the 'double' of a real person, as if Hitler the man might be hiding inside like an appendix, and deliberately so concealed in order not to disturb the mechanism ... You know you could never talk to this man; because there is nobody there ... It is not an individual; it is an entire nation.[79]

In an interview with Carol Baumann in 1948, Jung denied rumors regarding any sympathy for the Nazi movement, saying:

It must be clear to anyone who has read any of my books that I have never been a Nazi sympathizer and I never have been anti-Semitic, and no amount of misquotation, mistranslation, or rearrangement of what I have written can alter the record of my true point of view.
Nearly every one of these passages has been tampered with, either by malice or by ignorance. Furthermore, my friendly relations with a large group of Jewish colleagues and patients over a period of many years in itself disproves the charge of anti-Semitism.[80][d]

Evidence contrary to Jung's denials has been adduced with reference to his writings, correspondence and public utterances of the 1930s.[81] His remarks on the superiority of the "Aryan unconscious" and the "corrosive character" of Freud's "Jewish gospel" have been cited as evidence of an anti-Semitism "fundamental to the structure of Jung's thought".[82]