Bad Gays: Issues with HIV activism in gay historiography
"The dignified homosexual feels ashamed of every queer who flaunts his faggotry, making the dignified homosexual's stigma more justifiable in the eyes of straights… Pin it on those who deserve it: sex addicts, people with HIV, anyone who magnetises the stigma you can't shake." — Michael Warner (1999)[1]

Before the release of It's a Sin, writer Russell T Davies was interviewed in the Sunday Times [2] about the show and his thoughts on HIV, both now and in the 1980s. Although described as "the happiest man [the journalist] had ever met", his tone gains an "edge" when contemplating men his age contracting the virus in 2021: he is "furious" with them, "staggered" as to how they could do such a thing, and harshly underlines that there is "no excuse" to have "unsafe sex" in this day and age. This makes sense under Davies' logic. Later in the interview he pays homage to antiretroviral drugs (used to treat HIV) and how he believes they "transformed public attitudes to homosexuality" because HIV had "legitimated" homophobia: "Everything people said about you became true in the shape of a virus. We became a disease, we became ugly, we became wrong. Dangerous and dirty."

This language is sweeping: "we" as an entire community became diseased, "we" as gay men were all seen as dangerous and dirty. There is no individual, only a vector of disease. From this perspective, gay unity against HIV transmission was necessary for survival: the mainstream population indiscriminately persecuted all gay men regardless of their individual status, so eliminating the virus was viewed as a step towards delegitimising homophobia. Davies' anger towards those having "unsafe" sex in 2021 stems from his belief in self-policing against HIV transmission. Those who don't partake in such policing threaten all gay men and lay open the entire community to attacks that gay men are inherently promiscuous, hedonistic airheads who infect all they touch. Their presence destabilises an equilibrium built on shared fears of transmission.

This narrative of the "good" versus the "bad" homosexual needs questioning, especially during LGBT history month when narratives of heteronormative integration are pushed as the peak of gay equality. It obfuscates the humanity of the gay movement, the splintering and infighting, and most of all the radicalism. Michael Warner's "dignified" homosexual is thus overrepresented in recent gay history, while HIV-positive activists of the 1980s and 1990s have been used to "magnetise" homophobia away from the assimilated, healthy gay history everyone is taught. Unfortunately, this history was not as simple as some would like it to be. Early HIV treatments such as azidothymidine (AZT) were contentious and often fatal – it was not a narrative of steady improvement with universal support, and the medicalisation of HIV was not universally accepted by a homogeneous gay community.

It is important to briefly outline the contentious beginnings of AZT. Originally an abandoned cancer treatment, AZT was proposed as a potential HIV treatment by Wellcome – a large pharmaceutical company with a UK-based trust dedicated to medical research – in 1984. After being fast-tracked through FDA approval in a record 20 months, AZT was presented in 1987 as an effective HIV treatment, claiming to reduce mortality. Wellcome benefited enormously from this breakthrough. In 1987 the company's share price jumped from 73.5p to 374.5p upon news that AZT would be priced extremely high, at about $188 per 100 capsules for American patients.
By 1992 sales of the drug topped $1.4 billion, with around 180,000 patients on the treatment worldwide.[3] However, it slowly began to leak that Wellcome's 1987 study used to prove AZT's efficacy was unsound: patients had pooled tablets, the trials became unblinded halfway through, and patients on AZT had received more medical treatments than those on the placebo.[4] This was insufficient evidence for what was being claimed as the golden ticket of HIV medication.

That belief was further shattered by the Anglo-French Concorde study, the largest study of AZT to date with 1,749 patients.[5] In 1993 it reported that across three years 79 individuals died in the AZT group compared to 67 in the placebo group, with those on AZT suffering far worse side effects. The drug is extremely toxic, causing cell depletion in bone marrow, meaning patients could need frequent blood transfusions to survive, and its general side effects greatly mimicked the symptoms of advancing AIDS. Whilst the difference in deaths was small, the Concorde study showed that at best AZT had no positive effect, and at worst it was accelerating mortality rates. It proved to many that AZT was doing far more harm than good to HIV-positive individuals, yet Wellcome maintained it was still effective as an AIDS treatment. A spokesperson stated "We agreed to disagree [with the study]. There are a lot of HIV-positive patients who are being told the drug now doesn't work. That isn't acceptable."[6] Whilst this can be interpreted as not wanting to let HIV-positive patients down, AZT's profitability for Wellcome cannot be ignored. They wanted to protect their biggest source of revenue.

Wellcome's controversial promotion of AZT is what caused some in the UK to question its links with the Terrence Higgins Trust. The Trust is often posited as a hero in recent gay history. Being Britain's most prominent HIV/AIDS charity, it provided information on HIV transmission for schools and individuals, it sat on AIDS research panels and it met with government health departments. Its CEO Nick Partridge cemented links with Wellcome, in 1993 securing £5,000 annual donations, use of Wellcome's printing presses and access to their research labs. In particular, Wellcome funded several HIV education pamphlets, including one with over nine pages about the benefits of AZT and only one page on alternative treatments. These leaflets were branded by Wellcome and stated "Wellcome is a pharmaceutical company with over 40 years of experience in developing antiviral drugs. They produce and develop AZT – the first drug known to be effective against HIV."[7] In the wake of the Concorde study several of these pamphlets were withdrawn, but both organisations maintained the effectiveness of AZT in AIDS patients, despite Partridge admitting the "limited" nature of the treatment and stating that adequate treatments could be "many, many years" away.[8]

Introducing: Gays Against Genocide. The name stemmed from their belief that the Terrence Higgins Trust's association with Wellcome "sacrificed" gay men for profit. GaG was a grassroots movement of HIV-positive men who in 1993 undertook a mass flyposter campaign around London denouncing Nick Partridge and the Trust's connection to Wellcome.[9] Their posters across London revealed a fissure within the capital's gay community: the Trust was denounced as an "AIDS Gestapo" coercing vulnerable and scared people into taking AZT.
They claimed responsibility for a paint bomb attack on the Terrence Higgins Trust building, they picketed Great Ormond Street Hospital over its use of AZT on HIV-positive babies, and after the Concorde study was released in April 1993 they protested outside the Trust's headquarters for weeks demanding Nick Partridge's resignation.[10] Partridge was their main target, accused of "pimping a poison" and being personally responsible for the avoidable deaths of hundreds of people. After being accused of sending Partridge a guillotine in the post, they stated "it was in fact a novelty penis-chopping gimmick from an Amsterdam porn shop, and not a sinister threat."

One of their posters details a moment of police intervention: on 28th April 1993 the police were called by the Trust's Head of Personnel over "GAG displaying a blow up doll with a necklace of AZT capsules, wearing a sash that says 'Miss AZT'". Three protestors were arrested, later stating that police officers "threatened to 'knock [them] out'". Whilst this is only one side of the story, the clash exposes the divergence between radical and mainstream gay activism: the overt campiness of a blow-up doll with an AZT necklace diametrically opposes an institution that feels secure enough to use the police against other gay men. The good homosexual feels secure in using an aggressive police force against disruptive members of their own community; the bad homosexual is justifiably punished. Radical versus assimilationist, mainstream versus fringe, queer versus homosexual.

In the rare instances that GaG are mentioned by their contemporaries or historians they are condemned as fringe extremists who risked compromising all progress in gay rights; Nick Partridge described them as "New Age flat earther[s] who have a naive hope that Holland and Barrett will produce a herbal tea that is effective against HIV."[11] A recent post on Dean Street Aids Chronicles stated anti-maskers had "the same violent paranoia of Gays Against Genocide."[12] Their reputation is one of aggressive anti-science akin to COVID deniers. Whilst I do not endorse all their actions, it is my contention that this reputation stems not so much from their rejection of AZT as from their open critique of HIV treatment and policing. The toxicity of AZT is now widely accepted,[13] yet the actions of GaG are still seen as unacceptable and their reputation is still one of fringe subversives. It is the same energy that fuelled Russell T Davies' anger towards unsafe sex that drives this reputation: they are seen as threats to the continued liberation of the gay community, a nasty underbelly that threatened to expose the entire community to homophobic attacks. On this view the gay community had to be homogeneous, and heterogeneous elements such as GaG were potential threats to its integrity.

The critics of GaG and similar groups tend to overlook the human element of their concerns. Listening to the testimony of those affected reintroduces complexity to this narrative; the Wellcome open forum on AZT in 1993 is particularly insightful.[14] Despite one of the panel denouncing critics as "crazy people", many stood up to critique the continued prescription of AZT; one man screamed "this is my life and I have waited ten years"; another was "screamed at for being rude" after "hoping to see Wellcome at the Nuremberg trials" for their "scam" drug. Most poignantly, an anonymous individual wrote on a question card: "I have been HIV positive for six years, taking no treatment.
All my friends who took AZT are dead. How do you explain that?" The panel had no explanation. Similarly, GaG founder Michael Cottrell stated how upon walking into the Trust "the first thing they did was hand us booklets saying arrange your affairs and make a will."[15] He describes their actions as feeling "spat" on, as "lighted cigarettes" being thrown at HIV patients telling them to "go fucking die."

This is why painting GaG as simply the actions of unscientific conspiracy theorists troubles me. It was the anger of gay men who felt unrepresented, who held legitimate fears for their own mortality and felt visceral frustration towards healthcare providers who seemed to flippantly change their advice. Not acknowledging this fails both them as individuals and gay history as a whole. The climate in which GaG worked perpetuated HIV as a death sentence, and the Don't Die of Ignorance (1987) government advertising campaign constantly reinforced this belief.[16] John Hurt's foreboding voiceover states how HIV spreads through sex with an "infected person" and footage pans to "AIDS" chiselled onto a tombstone. Public attitudes towards the disease sentenced the HIV-positive to death upon diagnosis. In this climate of fear AZT was posited as the cure – a cure soon snatched away and labelled as more toxic than the disease itself. Likening GaG's anger to that of "new age flat-earthers" does incredible injustice to the complexities of their beliefs, and they have as much right to fair historiographical inclusion as those they opposed.

For this to occur historians must distance themselves from narratives of good and bad homosexuals. There are none. In this instance, GaG was scared, belittled, reactionary and unapologetically loud; the Trust followed current science and promoted treatment through its corporate sponsor, Wellcome. Elevating one above the other does not do justice to the complexities of modern gay history, and creating dichotomous narratives of good triumphing against bad in the battle against HIV benefits no one. This is not an attack on Russell T Davies, or those who hold similar views. It is an observation that modern gay culture must move beyond viewing HIV-positive activism as an inherently threatening act. Unquestioned statements such as "[AZT] transformed public attitudes to homosexuality" define gay history in terms of heterosexual acceptance.[17] Narratives of unity were created out of fear of heterosexual rejection, and remain because they present uniform narratives of steadily increasing assimilation into heterosexual culture. We deserve more than a history defined in these terms.

Eliott Rose is currently completing a BA in History at the University of Oxford (Regent's Park College) and will go on to undertake an MA there next year.

Notes:

[1] Michael Warner, The Trouble with Normal: Sex, Politics, and the Ethics of Queer Life (Harvard, 1999).

[2] Decca Aitkenhead, 'Russell T Davies on It's A Sin: "Aids was like everything people said about you became true in the shape of a virus"', The Times (17th January 2021) https://www.thetimes.co.uk/article/its-a-sin-russell-t-davies-hiv-aids-rmgnhc8x7

[3] SPIN (October 1993), p. 96. https://books.google.co.uk/books?id=fsi_VCMy0tQC&lpg=PA116&dq=%22gays%20against%20genocide%22&pg=PA96#v=twopage&q&f=false

[4] John Lauritsen, Poison by Prescription: The AZT Story (New York, 1990).

[5] Concorde Coordinating Committee, 'Concorde: MRC/ANRS randomised double-blind controlled trial of immediate and deferred zidovudine in symptom-free HIV infection', Lancet (April 1994), pp. 871-81.

[6] Steve Connor, 'Setback for Aids research as AZT drug fails in tests: Treatment fails to protect healthy HIV-positive people against developing the disease, three-year study shows', Independent (1st April 1993) https://www.independent.co.uk/news/setback-for-aids-research-as-azt-drug-fails-in-tests-treatment-fails-to-protect-healthy-hivpositive-people-against-developing-the-disease-threeyear-study-shows-1452737.html

[7] Catherine Pepinster, 'AIDS trust rethink on AZT: Catherine Pepinster on why the Terrence Higgins Trust is changing its drug advice', Time Out (5-12th May 1993), p. 12.

[8] Connor, 'Setback for AIDS research', Independent (1st April 1993) https://www.independent.co.uk/news/setback-for-aids-research-as-azt-drug-fails-in-tests-treatment-fails-to-protect-healthy-hivpositive-people-against-developing-the-disease-threeyear-study-shows-1452737.html

[9] Gays Against Genocide (GAG), THT: The AIDS Gestapo (1993).

[10] 'Bombing Claim', Independent (17th May 1994) https://www.independent.co.uk/life-style/bombing-claim-1436678.html

[11] SPIN (October 1993), p. 96. https://books.google.co.uk/books?id=fsi_VCMy0tQC&lpg=PA116&dq=%22gays%20against%20genocide%22&pg=PA96#v=twopage&q&f=false

[12] Now inaccessible. Previously: https://dean-street-aids-chronicles.com/home/testimonial-wall/

[13] 'AIDS And The AZT Scandal: SPIN's 1989 Feature, "Sins Of Omission"', SPIN (5th October 2015) https://www.spin.com/2015/10/aids-and-the-azt-scandal-spin-1989-feature-sins-of-omission/#:~:text=The%20toxic%20effects%20of%20AZT,to%20be%20taken%20off%20it

[14] SPIN (October 1993), p. 96. https://books.google.co.uk/books?id=fsi_VCMy0tQC&lpg=PA116&dq=%22gays%20against%20genocide%22&pg=PA96#v=twopage&q&f=false

[15] SPIN (October 1993), p. 116. https://books.google.co.uk/books?id=fsi_VCMy0tQC&lpg=PA116&dq=%22gays%20against%20genocide%22&pg=PA116#v=twopage&q&f=false

[16] AIDS: Monolith (1987) https://www.youtube.com/watch?v=iroty5zwOVw

[17] Aitkenhead, 'Russell T Davies on It's A Sin', The Times (17th January 2021) https://www.thetimes.co.uk/article/its-a-sin-russell-t-davies-hiv-aids-rmgnhc8x7
Fanaticism, Sensationalism and Obsession during the Lunacy Panics of 19th Century England
Abstract

Victorian society has often been thought of as one of upstanding morals, rigid etiquette and sombre tones. Yet the lunacy panics of the nineteenth century, in which a wave of mass hysteria swept the nation that at any moment one might be swept off the streets, or even from the comfort of one's own house, and thrown into a lunatic asylum, stand in perfect opposition to this fabricated imagining of how Victorian society must have behaved. It was a fanatic fear that entrenched itself in the mindset of every Victorian individual and was further perpetuated by an unending wave of sensationalist literature and journalism. The obsession with not 'seeing' madness and the fear of having one's own sanity questioned planted itself in the Victorian subconscious. In many ways, the lunacy panics can be understood as an epidemic that nearly turned the entire English population mad over their fear of madness. This essay will hence distil this fanatic obsession into its most intrinsic forms, exploring the cultural underpinnings of this phenomenon and examining the literary use of sensationalism and fear-mongering to further spearhead the hysteria of the lunacy panics.

Introduction

Madness and its manifestations have always been an uncomfortable, bordering on untouchable, topic for the general public to reconcile with. Yet in no period of history can this cultural anxiety be more astutely observed than in Victorian England. During this period, lunacy and madness pushed themselves to the forefront of the Victorian consciousness, culminating in what is now known as the 'lunacy panics.' Victorian England found itself infected with a wave of uncontrollable hysteria and paranoia that, at any one point in time, one might be falsely accused of insanity and condemned into the obscure depths of the asylum. It is no mere exaggeration that the prospect of wrongful confinement was deemed a fate worse than death by the Victorians. To have one's reputation damaged, one's sanity impugned and one's dignity reduced by confinement within the reductive box of 'lunatic' or 'maniac' was seen as the worst tragedy an individual might be subjected to. Yet as in contemporary society, it is often tales of tragedy and morbidity that enjoy the most attention and favour amongst the media and its consumers. The newfound accessibility of print media and the rise of literacy levels in the nineteenth century meant that wrongful confinement stories were not simply relegated to drawing room gossip but instead propelled into an unprecedented and unfathomable court of public opinion.[1] The sheer magnitude of media coverage regarding wrongful confinement allowed an initial small nervousness to spiral into a full-blown epidemic of mass hysteria and fanatic obsession. Newspapers and tabloids gossiped constantly about the dangers of wrongful confinement, stoking that fear ever higher. Sensational fiction novels were published in rapid succession, their authors understanding this new niche market of public infatuation with the horrors of wrongful confinement. Any individual with means who found themselves experiencing the trauma of wrongful confinement was almost possessed by external forces of need and demand to publish their narrative for reader consumption. Yet the question remains of what caused this obsessive hysteria in Victorian society regarding the concept of wrongful confinement.
The lunacy panics were not a wholly unprecedented phenomenon but can rather be understood as the accumulation of years of attempting to grapple with the moral and ethical complexities of madness, as well as the negative stigma and stereotypes attached to its conception. Some would argue that the main cause of this hysteria was representative of an underlying real problem in the asylum system and the so-called 'trade in lunacy.' Wrongful confinement narratives have two main characteristics: the person is presented as unquestionably sane, and they are a member of the middle or upper class. There was an inherent belief that private asylums were ripe with corruption and that asylum doctors, spurred on by greedy, malicious family members, would confine any sane individual for a share of the profit.[2] It is no wonder that this belief whipped the Victorian public into a state of uncontrollable frenzy: it had all the hallmarks of a cause célèbre of intrigue, corruption, betrayal and the ultimate tragic ending.[3] It hardly mattered that private asylums had initially been established to cradle the propriety and respectability of the upper classes by offering a private and discreet sanctuary away from the public eye; they had been transformed in the popular consciousness into places of irrefutable horror for the same clientele they were built for.[4]

However, the purpose of this essay is not to examine the credibility of this abhorrent belief of private asylums entrapping wealthy patients for monetary greed, but rather to examine the surrounding cultural and literary pillars upon which the lunacy panics rested. As Michel Foucault summarised, 'language is the first and last structure of madness.'[5] This essay will therefore focus on the language used and manipulated by first-hand narratives, sensationalist journalism and fiction to exacerbate the existing paranoia regarding wrongful confinement. In large part, this obsessive fear-mongering stems from one underlying cultural anxiety: the public's inability to grapple with the concept of intangible madness. By that I mean that the literary media found purchase on the platform of wrongful confinement by selectively choosing to manipulate the greatest fear surrounding it: that there was no discernible physical difference between sanity and insanity and thus anyone could theoretically be accused of madness. The concept of 'seeing' madness had for decades been the only thing necessary to draw a line between the 'other' of lunatics and the 'us' of good, proper society. Yet, as will be discussed, this theory soon began to fray at the edges amid a growing dissonance of psychiatric pessimism and distrust. This cultural anxiety of not 'seeing' madness, of it now being a hidden, undetectable virus that might infect anyone on a whim, became the intrinsic point of concern for any rational Victorian individual.[6] As such, the intangibility of madness became the foundational cornerstone which the media could freely manipulate for their own agenda. Therefore, the following chapters will carefully examine and analyse the literary accounts of wrongful confinement, breaking them down into their basic forms of unregulated fear-mongering and sensationalism enabled by the Victorian obsession with 'seeing' madness.

I. 'Seeing' Madness

Undeniably the most critical aspect of the contentious relationship between madness and its surrounding cultural understanding is that of its representation. The Victorian obsession with 'seeing' madness was not something unprecedented or unfounded but rather the apotheosis of centuries of cultural anxiety regarding the manifestation of madness. Madness in and of itself is something filled with complexities and misunderstandings, and the only conceivable way it could be brought and integrated into popular culture was for it to be reduced down to its basest form. Society is a fickle construction, and the unimaginable unknown that madness presented was inherently far too much for it to accept without some tangible form through which it could grasp the concept. If medical bodily diseases presented themselves in a physical manner, it would hence follow that madness could be discerned from that same physicality. As such, for much of early modern history, a clear distinction was made between the mad and the sane. As early as the thirteenth century, madness was represented in art and literature in a way that made it innately obvious that it was something 'other' and could be perceived as such.[7] There was an overarching consensus before the nineteenth century that madness was something incredibly tangible and physical. Artistic renditions of the mad ensured that the message that the mad were deviants and 'others' to society, that their madness could be discerned from their outward appearance, was hammered home into the popular consciousness.[8] Madness was a manifestation of the bestial side of humanity and, as such, reduced the mad back down to their primordial selves. In order for society to rationalise madness, and hence its innate fear of it, it needed to separate madness completely from humanity. Furthermore, reducing the mad to mere beasts meant that their subsequent treatment and the stigma surrounding them were seen as not only rational but morally justified.[9]

Figure One below is an exemplary imagining of what nineteenth century artists, and indeed the general public who consumed their work, believed madness to look like. In Figure One, we can see the blatant biases and stereotypes madness was assigned. Madness was a complete divergence from humanity: it was viewed from an almost zoo-like perspective, with the insane being mere animals to observe for the entertainment of the public. It was a spectacle which the sane could observe, comfortable and reassured that a tangible line divided them from the 'others.' As such, it was common practice for large gatherings to flock to public asylums, such as the infamous Bethlem Hospital, to gawk at their patients.[10] Doing so fulfilled the latent voyeuristic desire of morbid curiosity whilst serving as an almost prophetic warning to the visitor of what they might succumb to. The image of the Bedlamites, so clearly distinct in their rags and physicality, the very epitome of madness, was fast becoming the stock image of what madness must look like.[11] Ergo, there was little for the public to worry about if they were so evidently on the other side of the line of physical madness. Yet what becomes evident throughout the nineteenth century is that the line between the physicality of sanity and insanity begins to blur, and a new grey borderland of what madness looks like opens up, an area with which the population became severely uncomfortable, as it meant that they could now be falsely accused of insanity.
Perhaps one of the most fundamental factors that precipitated the lunacy panics' hysteria over the inability to recognise madness was the newly emerged field of psychiatry and the societal demands placed upon it, most integrally the demand for absolute clarity on what madness entailed. As the lunacy industry began to be industrialised and commercialised, with the establishment of numerous private and public asylums, the number of alleged lunatics sky-rocketed.[12] While this phenomenon could be attributed to several factors, or indeed could have been artificially inflated by the notion that the mad must no longer be confined in the domestic sphere, to nineteenth century society it seemed obvious that there was a very real and dangerous threat of contagion. The population began to worry that the growth meant that they themselves were now more susceptible to madness, that it lurked under the city like a virus.[13] As a report from the Commissioners in Lunacy stated, 'the opinion generally entertained was that the community are more subject than formerly to attacks of insanity.'[14] With the seemingly obscure and idiopathic disease of lunacy looming on the horizon, paranoia swept English society, specifically over its inability to perceive madness.

As anxiety increased over the unseen 'other' who threatened to completely dismantle functioning society, the general population had but one choice: to implicitly trust the doctors who were the self-proclaimed experts in 'seeing' madness. Asylum doctors were seen as the very pressure point on which Victorian psychiatry rested: they were trusted implicitly to know what every variant of madness entailed, how best to treat it and how to finally cure it. As psychiatrists maintained, it was only they who could properly and accurately detect latent lunacy, and only they who could hence purge it from society. If not for them, 'incipient lunatics' would 'pass about the world with a clean bill of mental health,' and society would all but be ruined.[15] As such, the fulcrum of the popular imaginings of what madness looked like was placed firmly in the hands of asylum doctors. In order to grapple with this monumental cultural pressure, asylum doctors devoted themselves to determining the physical characteristics of madness and representing them. The theory of physiognomy, that physical characteristics are indicative of certain genera of madness, began to emerge in the medical discourse as a solution to the newfound fear of madness as a latent and chronic disease infecting the nation. Asylum doctors theorised that madness could be distinguished based on certain physical characteristics, such as the shape of the nose, the size of the head, the positioning of the spine and even skin colour.[16] Doctors such as Cesare Lombroso fuelled this obsession by publishing works that theorised criminal madness could be discerned based on simple anomalies in the patient's skeleton and face.[17] Manuals that classified a host of mental disorders based on their outward appearances were published for other doctors to examine, such as that of Duncan and Millard of the Eastern Counties Asylum in 1866.[18] This fanaticism with classifying madness on arbitrary physical characteristics had been created as a remedy for the societal anxiety over the perceived infection of lunacy, yet it proved only to further the obsession with visualising madness.
By positioning the asylum doctors as the only ones with expertise enough to differentiate sanity from insanity, psychiatry created a sense of extreme restlessness among the general public over the unknowability of the perceived differences.[19] This public distrust of psychiatric evaluation to distinguish sanity from madness can be seen expressed in the following quote from recovered alcoholic William Griggs in 1832: "Our learned medical men…assert most positively that almost every person afflicted with mental derangement presents a new case, but do not tell us by what means they discover a person to be of unsound mind."[20] It appears that Victorian society was fundamentally aware that the arbitrary differences between sanity and insanity as lectured by asylum doctors were losing credibility, yet seemed unable or unwilling to sunder itself from the familiar comfort of 'seeing' madness. However, when that visualisation could not be made, the consequences only compounded Victorian anxiety. For instance, John Thomas Perceval was an Englishman who was confined in a private asylum in the 1830s after the traumatic death of his father led to a brief psychotic break. Although his initial confinement was sound, Perceval soon found himself recovered from any mental deficiency and as such logically expected his subsequent release. Yet the asylum doctors disagreed and continued to confine him. Finding himself confined in an asylum surrounded by the 'true' incurable madmen, Perceval descended into the fanatic obsession with visualising a physical difference between himself and his surroundings. He began to carry a small pocket mirror which he would continually take out to see if his face had unknowingly morphed into that of a madman.[21]

Furthermore, the use of photography from the 1850s onwards further enhanced the obsession with visualising lunacy. Unlike in previous years, when representation had relied on the artistic embellishments of painters or writers, lunacy could now theoretically be depicted as accurately as possible. Devoid of the dramatics, society was confronted with lunacy that looked as innately human as they were. As Hugh Diamond of the Surrey Lunatic Asylum described in 1856, photography acted as a mirror for lunacy through which it could finally be freed from the 'painful caricaturing which so disfigures almost all the published pictures of the insane.'[22] This seemingly pure objectiveness of photography garnered much contemporary medical support, and photographs of lunatics were heavily incorporated into medical textbooks.[23] However, to declare photographs of lunatics as distinctly objective would be of the highest ignorance. While they could not physically distort the facial shape or bodily form into something primitive, as artistic licence had done in previous years, they were still focused on producing a very mechanical and biased image of what madness looked like. Diamond, in particular, would pose his subjects and exercised considerable influence over what they wore, how their hair was styled and any props they would use, such as giving the religious monomaniac a cross.[24] Oftentimes, female patients were deliberately styled to represent common cultural understandings of what madness looked like, such as the popular image of Shakespeare's mad Ophelia, as shown in Figure Two below. Despite Diamond proclaiming that his photographs were truly objective accounts of what lunacy looked like, they were artificial and manufactured.
What is further notable is that asylum doctors continued to use these manufactured images almost as a manual with which to diagnose mental illness in further patients. Prominent psychiatrist John Conolly published a case study on Diamond's photographs, in which he commented that a subject's donning of a bonnet indicated her full recovery, despite it being a staged ensemble.[25] It seemed that although madness had now been spared the artistic licence that had painted it as something brutish and animalistic, it was still subject to the ingrained notion that it could be visualised. What this entrenched obsession with visualising madness inevitably created was the perfect cornerstone for fear-mongering sensationalism to rest upon. Despite the logical awareness that there was no tangible physical difference between the sane and the insane, or at the very least that such a visualisation was so minute and circumstantial that it could only be diagnosed on the arbitrary whims of psychiatric alienists, the public seemed resolute in their obsession that there must be a divide. By reassuring themselves of that imaginary divide, Victorian society could hence soothe themselves that the devastation of madness would never fall upon themselves.[26] Yet what the print media decided was that instead of reassuring the public of the boundaries of physical madness, they could instead push the envelope, so to speak, on the horrors of that boundary dissolving. If they were not blatantly fear-mongering, as will be examined in certain texts and articles later on, they still fell prey to the ingrained cultural narrative of the visualisation of madness, ultimately culminating in epidemic spikes of hysteria and paranoia surrounding wrongful confinement.

II. The Undeniably 'Sane'

The cultural paranoia regarding wrongful confinement did not stem from nowhere; it was instead instilled by two key factors, the dilemma of not being able to 'see' madness and the skilful fear-mongering utilised by the sensationalist press, both of which were in and of themselves ignited by prolific instances of actual wrongful confinement. While we have discussed the precursor of the lunacy panics, the rising wave of cultural anxiety regarding the inability to ascertain madness, it is now imperative to examine the real-life instances of this inability. In the latter half of the nineteenth century, a few notorious cases of wrongful confinement appeared. Wise's Inconvenient People is perhaps the most detailed piece of work centred around the victims of wrongful confinement, in which she estimates that there were 28 alleged cases of wrongful confinement throughout the nineteenth century.[27] For the purposes of this brief enquiry into the hysteria of the lunacy panics, only three notable cases will be discussed. What is important to note is that these cases have one thing in common: the alleged victims of wrongful incarceration are all middle to upper class. Now, whilst one may look at this as indisputable proof that there was a deeper conspiracy ongoing of rapacious family members and corrupt asylum doctors, this is not the direction this essay will be taking.
As McCandless highlighted, it was not the fact that the upper classes were more innately prone to being misdiagnosed as insane, for whatever ulterior motive, but rather that they had the money and the means to publish and argue their cases.[28] Georgina Weldon, herself a victim of wrongful confinement and a character that will be examined thoroughly shortly, stated that if she had 'been quite a poor woman, unable to pay printers…I should have been quite ruined long ago.'[29] In fact, people who were declared insane who had monetary assets or property were automatically entitled to a court inquisition to determine 1) the legitimacy of their insanity and 2) if they were legally, by the jury and judge, and medically, by two qualified physicians, deemed insane, whether their assets would hence be distributed to their next of kin.[30] Theoretically, the upper classes would be least likely to be wrongfully declared insane based on the sheer number of legal and public obstacles that needed to be faced. If anything, the bourgeoisie were almost hesitant to confront the publicity of madness, as to do so would bring sure ruin and scandal upon one's own family and name.[31] The private madhouse had been conceived in the Georgian period precisely to bring discretion and anonymity to the madness prevalent in the upper classes.[32] Yet in the latter half of the nineteenth century, madness was pushed to the very forefront of the public eye. It was inherently impossible for any person of relative wealth or class to be secretly enclosed in an asylum without a formal, and very publicised, inquisition. Even if no inquisition was held, which often happened when the accused lunatic was released almost instantaneously from the asylum, the victim could then bring their traumatic ordeal to the press or publish it themselves.

These autobiographical accounts of wrongful confinement are an intriguing dive into the mindset of a Victorian individual faced with perhaps the most insidious of brands to hold: that of being, at one point in time, thought of as insane. These accounts oftentimes stood as warnings of the perceived corruption and danger associated with the private warehousing of the insane, yet while they often advocated for tighter restrictions on the certification of insanity, paradoxically they also never once denied that asylums were needed to confine the 'true' lunatics. To harken back to the previous chapter, these autobiographical accounts could not escape the hypocrisy of championing personal liberty against wrongful confinement whilst simultaneously insisting that asylums were needed to protect society against the dangers of lunacy.[33] As such, the true purpose of these accounts was not reform advocacy or an injunction against wrongful confinement, but rather to provide undeniable, legitimate and unquestionable proof of the author's sanity. By constructing their own narratives of their confinement, they were hence able to save themselves from the brutal scrutiny and judgement the public had towards anyone they believed to be 'other'.[34] As asylum doctor Noble highlighted, often the victims of wrongful confinement were never really wrongfully confined but rather possessed an inability or unwillingness to recognise their own mental deviance, such that they would 'content that there had been no insanity.'[35] Take for instance the case of Herman Charles Merivale, who was confined in a mental asylum following a depressive break.
He went on to publish a record of his time spent in the Ticehurst asylum in 1879, yet intriguingly the title of his account is My Experiences in a Lunatic Asylum by a Sane Patient.[36] From the very outset of his account, Merivale finds it imperative to declare to the Victorian public that at no point was he actually suffering from psychosis. There are further discrepancies in his account that conceal his potential latent lunacy, including the exclusion of his suicide attempt as well as the embellishment that he was released on the good, sound advice of two certified doctors rather than the reality, which was his mother personally ordering his release.[37] As such, these narratives can hardly be trusted as authentic and objective accounts of wrongful confinement: they will always appear with the underlying cultural bias that insanity is out of the question for the author.

Perhaps one of the most exemplary narratives that underscores this deeply entrenched bias regarding lunacy and the question of wrongful confinement is that of Georgina Weldon. Weldon had escaped and hidden from a lunacy order in April 1878 that was signed at the behest of her estranged husband, Harry Weldon, who had grown tired of paying £1,000 a year to fund his wife's obsession with her orphanage and child choir, as well as rumoured indecent personal relationships with other men and women, which had brought considerable shame onto his family name.[38] Despite never having actually been confined to an asylum, Weldon published an account of her experience in fighting the tyrannical conspiracy of wrongful certification and went on to win a litany of legal cases against her perpetrators, the asylum doctors, all of which were highly covered in the Victorian press.[39] Upon an examination of her account, it is distinctly evident that Weldon was simultaneously a victim as well as a perpetrator of the paranoid fears regarding lunacy. Her account opens with an unusual self-introduction as a woman who dislikes 'long dresses and very full skirts,' who 'wore [her] hair short' and who was the very opposite of the young, vain women who amused themselves by 'making [themselves] look conspicuous.'[40] Here, it can be determined that Weldon is unequivocally asserting her sanity. By describing her physical characteristics, she is subtly suggesting that there is nothing that could possibly be deemed an indicator of potential latent insanity. She further states that if she had fallen into the plot carefully constructed by her husband and the asylum doctors of being confined to an asylum, she would have been 'driven mad in an hour.'[41] The most imperative fact Weldon asserts in her narrative is that of her unquestionable sanity. While she stipulates in her opening paragraph that the reader is under no obligation to believe her story without necessary proof or evidence, Weldon is determined to rid herself of any and all association with lunacy.

Weldon's account is further significant in regard to its sensational impact on society and its deliberate use of fear-mongering. Weldon was a highly infamous and popular figure, especially in the Victorian press.
She was reported to be somewhat of a celebrity within the lunacy discourse, reputed to have command and sway over as many newspaper columns as a cabinet minister.[42] Her publicised case in How I Escaped the Mad-Doctors, as well as the ensuing cases against her husband and the doctors who certified her lunacy order, propelled her into the pantheon of popularity, and Weldon continuously wielded this to her advantage. She was able to sign numerous lucrative brand deals, such as an advertisement for Pears' Soap. She was distinctly aware of her potential influence over Victorian society, stating openly in her pamphlet that 'To be accused of "insanity" is, I really believe, a royal road to popularity.'[43] The inherent taboo nature of insanity made it almost ambivalently popular with the Victorian public, yet the only way to avoid scandal and degradation was to legitimise and assert one's own sanity.[44] What this fundamentally signifies is that the Victorians were keenly aware of the scandalous popularity lunacy issues generated and how that popularity could be utilised to further one's own personal or even a wider societal agenda.

Ergo, these personal narratives of wrongful confinement cannot be fully appreciated without first understanding the underlying biases and stereotypes they were built on. These accounts were not personal mementos; they catered and pandered to the public perception of lunacy and its appearances. Even if they declared themselves to be a rallying cry for asylum and certification reform, they proved only to further the mass hysteria and paranoia regarding wrongful confinement. As Weldon writes:

"The object of this Lecture, therefore, is not as some people say, to cater for the public's pity or sympathy for myself, but it is written and read for the purpose of rousing their indignation, their righteous wrath, and to force Parliament to amend state of things which is so monstrous that it seems fabulous….on behalf of many thousand victims, now lingering in these horrible dens among idiots, raving maniacs and deranged simpletons, of which the sight for half an hour only is enough to drive one out of one's mind."[45]

This statement can be regarded as being in perfect accordance with the underlying issues of the publicity surrounding wrongful confinement: Weldon admits her writing is specifically designed for the purpose of heightening the reader's emotions, possibly infringing upon the ethicality of fear-mongering, whilst contradicting herself by calling for Parliamentary reform yet implying that asylums are still deemed necessary to house these 'raving maniacs.' Wrongful confinement is not a story about compassion or empathy regarding asylum reform but rather a fanatic obsession amongst the sane to create a superimposed image of the insane and the asylum as something depraved and abhorrent. These 'survivor' narratives can therefore be largely considered sensationalised and dramatic accounts of the horror of one's sanity being aligned with these hyperbolic imaginings of insanity. Nevertheless, there is, in fact, one account that does not seem to exude this biased narrative of sanity being impeached by wrongful confinement: that of John Thomas Perceval.
In stark contrast to Merivale's title deliberately prefacing his sanity, Perceval's title reads A Narrative of the Treatment Experienced by a Gentleman, during a State of Mental Derangement.[46] Perceval does not hide that, although he is a 'gentleman', he indeed suffered a mental breakdown and was thus justly confined in an asylum. Instead, the nature of his issues with asylums stems from the denial of his freedom and his entrapment within the confines of the asylum walls after he deemed himself fully recovered. Perceval advocated for stronger and stricter managerial oversight of asylums and their patients so that those who recovered their sanity could be rightfully released to society. The sheer bravery of publishing an account of a brief dip into mental illness during a time of heightened fear and stigma surrounding lunacy is remarkable. That is not to say, however, that Perceval was immune to the aforementioned stigma; he instead faced considerable backlash upon publication. He had first published the account anonymously in 1838, which went largely unnoticed, yet his revised second volume, published two years later and featuring his name, was relentlessly torn apart by the print media, with scathing criticisms that Perceval had not taken proper care to distinguish himself from the 'lunatics of inferior rank.'[47] It seemed that while Perceval was willing to admit his brief descent into mental destitution, the public was not. Society could still not grapple with the fact that someone would willingly and knowingly admit to succumbing to mental illness. Contemporary asylum doctor Granville best showcases the scale of the reaction to Perceval's account when he states that 'it is inconceivable that a man of position and culture would allow his family to have any connection with an asylum.'[48] To even have one's name associated with lunacy was guaranteed to make one a social pariah, yet Perceval stands against the common grain of wrongful confinement narratives by openly embracing his lapse of mental fortitude.

While the autobiographical accounts of wrongful confinement offer a rare and personal insight into the atmosphere surrounding the lunacy panics, one must be especially cautious not to fall prey to the ingrained biases and subjectivity evident in these accounts. In large part, these accounts bear all the hallmarks of the causes and concerns of the lunacy hysteria. They pander to the public with a carefully constructed image of their undeniable sanity being questioned by either indolent doctors or evil family members, so as to avoid any possible defamation from being remotely associated with lunacy. These accounts were written primarily for public digestion, creating a controlled narrative that would exonerate the writer from any taint that the mere mention of insanity brought. Yet in doing so, they furthered the existing paranoia and fanaticism encompassing the issue of wrongful confinement, for the average Victorian reader was unable to separate the truth from the constructed. Furthermore, these accounts proved that there was a ravenous demand amongst the Victorian public for cautionary tales of accused insanity, horror and woe. What this created was a platform from which sensationalised fiction and journalism could then leap, further distorting the boundaries of reality and fiction and fuelling the fanatic obsession with the ever-looming threat of wrongful confinement.

III. 'The Ghosts of Newspaper Writings'

The emergence of the autobiographical accounts of wrongful confinement created a marketable atmosphere in which writers were now keenly aware of what the rapacious Victorian public so desperately consumed. The 'reality' of these accounts helped bring a perceived sense of credibility to the fiction of sensational novels whilst allowing the author the creative freedom to embellish the details to heighten the drama and horror. The later nineteenth century saw the epidemic of the lunacy panics culminate in the emergence of sensational fiction dealing exclusively with madness and the notoriety associated with wrongful confinement. Most prominently, we see two main sensational works of fiction that perfectly encapsulate the hysteria regarding lunacy: Wilkie Collins' 1859 The Woman in White and Charles Reade's 1863 Hard Cash. These authors were able to capitalise on the popular cultural anxiety encompassing madness and wrongful confinement in order to push their works to the forefront of Victorian public discourse and consumption. By utilising the common cultural knowledge of what madness entailed, which had been supplied by the idea of 'seeing' madness and only further enhanced by publicised personal narratives, authors were able to create a sense of familiarity whilst simultaneously drawing on an imagined sense of fantasy and horror.[49] It was just real enough to be believed, as it had proven to happen to figures such as Weldon and Merivale, but the added touch of sensationalism propelled the work into the stratosphere of horror and fear. It was not only literary fiction that capitalised on this market; vapid tabloid journalism also deliberately manipulated this cultural paranoia, specifically targeting the unease regarding 'seeing' madness. As such, the Victorian public found themselves in an unending cyclical relationship with madness and literature: they had their concerns tangibly expressed in the writing, while that same writing was also fuelling and supplying those concerns. If it can be declared that the fear of not recognising madness and the publicised cases of wrongful confinement ignited the lunacy panics, then sensationalist novels and journalism were the fuel that kept them going.

Firstly, let us examine the parallels between fiction and reality evident in these sensational works and how they were utilised to amplify fear-mongering. The wave of publicised first-hand narratives of wrongful confinement meant that the Victorian public became fanatical about drawing parallels between reality and fiction, often blurring the two together so they became indistinguishable. Sensational fiction required a measure of exaggerated horror or drama, and the exploration of madness fulfilled all these requirements.[50] Yet the Victorian population, bombarded with narrative accounts, naturally assumed that hyperbolic sensationalism was not a mere work of fiction but rather indicative of the actuality of the dangers presented by wrongful confinement. It did not help matters that sensational fiction often borrowed storylines from actual cases, further confusing the line between expressive horror and reality.
For instance, Collins' The Woman in White can be seen as loosely based on an infamous French case of wrongful confinement in 1787.[51] Reade's Hard Cash further utilised real cases to bring a sense of credibility to his work, oftentimes directly citing infamous cases in his novel, thus greatly inflating the impact of its horror.[52] Furthermore, this was not a one-sided relationship between reality and fiction. Whilst sensational fiction borrowed heavily from reality, influenced by the heavily publicised narrative accounts, it was in turn referenced in official documents, legislation and newspaper commentary as the objective mirror of the threat of wrongful confinement.[53] Therefore, the horror tropes used in sensational fiction were often regarded as fact, only exacerbating the cultural tension existent during the lunacy panics.

A predominating theme in sensation novels and journalism revolving around madness, one that further stoked the fanatical hysteria during the lunacy panics, relates back to the earlier chapter on 'seeing' madness. As has been previously established, both the Victorian public and the pioneers of psychiatry, the asylum doctors themselves, seemed to have an almost frenzied obsession with identifying the visual cues of madness in a lunatic's appearance. The inability to do so created a paramount sense of fear and anxiety. This obsession stemming from the inability to visualise madness meant that either the sane were being denied their liberty, dignity and respectability by being wrongfully confined in asylums, or that the mad were an unseen, uncontained, festering disease of Victorian society. Victorian journalists and authors latched onto this controversial topic, pushing a very deliberate narrative on madness onto the public and using the horror of insanity to garner publicity and popularity.[54] The horror of not 'seeing' madness was the perfect fuel to propel an author into the public discourse. In The Woman in White, Collins plays upon this specific trope in the very opening chapters of his novel by having the protagonist, Walter, come across a lone woman one night on his way home. Upon meeting her, Walter notes that 'there was nothing wild, nothing immodest in her manner,' thus planting the seed in the reader's mind that this is a perfectly normal individual.[55] Yet upon his departure from this woman, later known as Anne Catherick, he is informed by two police officers that she is an escaped lunatic. Walter's statement following this dramatic revelation perfectly encapsulates the trepidation regarding insanity existent in Victorian society: "What had I done? Assisted the victim of the most horrible of all false imprisonments to escape, or cast loose on the wide world of London an unfortunate creature whose actions it was my duty, and every man's duty, mercifully to control?"[56] It is important to note Collins' use of the word 'creature' here to describe the ascertained lunatic. If Anne is a victim of wrongful confinement, then she is human, an individual to be greatly sympathised with. However, if she is indeed insane, then she is transformed back into that preconceived notion of the bestial lunatic, something 'other' that is far removed from humanity.
This dichotomous dilemma that Walter undergoes is a direct representation of the fanatic obsession amongst Victorian society, and Collins continues to target that anxiety throughout the novel, culminating in the sane character of Laura Fairlie being falsely assumed to be the insane Anne and confined against her will. As such, the truly popular sensational fiction regarding lunacy thrived in its infamy by manipulating the two branches of anxiety amongst Victorian society: the fear of being wrongfully deemed insane, and the sister fear of not recognising a madman and allowing lunacy to run rampant, undetected, through the streets. By incorporating both of these fears into their fiction, authors were able to heighten the sense of horror and psychological torment evident in their writings.

Sensational fiction was not the only media that blatantly engaged in this level of fear-mongering: journalism also became the predominant expression of this fear and obsession. Journalists heavily exploited the theme of seeing madness in their articles, often visiting asylums and writing of the unseen appearance of lunacy. An illustration of this obvious manipulation of cultural obsession can be observed in an article written by Charles Davies in 1875 regarding a visit he paid to the Hanwell asylum during its Christmastime ball. He wrote of a specific lunatic he encountered during his visit: "He looked, I thought, quite as sane as myself, and played magnificently; but I was informed by the possibly prejudiced officials that he had his occasional weaknesses."[57] This startled remark at the seemingly bizarre normalcy of lunatic patients can be considered both a product of the paranoia regarding unseen lunacy and a promotion and exacerbation of that same fear. The use of 'possibly prejudiced officials' further suggests to the reader that the asylum keepers themselves could not be trusted with the task of identifying madness. It seems that Davies, in his complete inability to process that the 'raving lunatics' he expected to see looked like himself instead, calls into question the credibility of asylum doctors and their self-proclaimed expertise in madness. Perhaps, Davies suggests, this man is not insane at all, for if he were, he would not look as sane as Davies undeniably was. While this may be construed as a newfound aspect of humanising the insane, the underlying tonal shift of these articles makes it evidently clear that their agenda was not to bring advocacy or compassion to the certified insane, but rather to sow the seeds of dissonance and obsession between the public and the looming threat of madness. This cultivation of the obsession and fear surrounding the lunacy panics can be seen when Davies concludes his essay with the question that 'would haunt me all the way home was, which are the sane people, and which are the lunatics?'[58] This closing reflection on asylums and their inhabitants served as an almost guaranteed propellant into the pantheon of Victorian gossip, as it directly targeted the obsession with visualising madness. Another journalist, Andrew Wynter, reported of his visit to the very same asylum that he could see 'No disorder, nor anything that would indicate that the company were lunatics.'[59] In a way, this format of journalism can almost be classified as psychological horror: by tormenting their readers with the ever-finer line between insanity and sanity, it appealed directly to the deepest inner obsession and fear of the Victorian public.
Reade’s Hard Cash can be seen to further this psychological torment in the concluding sentence of the chapter in which the protagonist, Alfred, is seized and wrongfully confined in an asylum: "Pray think of it for yourselves, men and women, if you have not sworn never to think over a novel. Think of it for your own sake: Alfred’s turn to-day, it may be yours to-morrow."[60] Reade here is directly challenging the reader to face the fate deemed almost worse than death: to be wrongfully declared insane and subsequently confined. These novels and articles were not simply reporting on the phenomenon of wrongful confinement: they were deliberately targeting and manipulating the underlying cultural anxiety of being branded as ‘mad’. Such brazen tactics are a clear marker of fear-mongering. This influence of sensationalist novels and journalism in stoking the Victorian public’s obsession with the perceived threat of wrongful confinement did not go unnoticed by the practitioners of psychiatry. Many mad-doctors published their own works, condemning the media for deliberately frenzying the Victorian public and thus, in actuality, deterring improvements and developments in the asylum system.[61] If the Victorian public was infected with an obsession over the possibility of falling victim to wrongful confinement, then the asylum doctors were infected with a similar obsession with disproving this fear through an all-out media war. The editor of the Journal of Mental Science, John Charles Bucknill, wrote in 1858 condemning the deliberate fear-mongering evident in novels and newspapers, stating that ‘the sane people confined in lunatic asylums…are ghosts of newspaper writing.’[62] He further accused authors and journalists of being ‘panic-mongers’ who deceived and misled the ‘too credulous public.’[63] Bucknill was not isolated in his scathing criticism of the fanatical alarmism evident in the print media. Asylum doctor George H. Savage wrote that sensationalism was deliberately creating a ‘state of panic’ in public opinion surrounding lunacy, with the gullible public all too ‘easily and falsely…led’ by the melodrama evident in the media.[64] Yet in the face of the overwhelming saturation of sensationalism and the inherent biases it perpetuated, asylum doctors found their voices lost in the swarm. Little could be done to stem the epidemic of Victorian hysteria. Victorian society had been all but consumed by the obsession of the lunacy panics, and until the 1890 Lunacy Act introduced clear, inescapable safeguards in the asylum system to prevent wrongful confinement, little rationality was evident. Ironically, in its obsession with seeing madness and preventing the alleged threat of wrongful confinement, society had almost driven itself mad.
Conclusion
The impact the lunacy panics had on society was profound, both in their immediate short-term effects and in their lingering presence in our own contemporary society. Most immediately, the lunacy panics had an extremely detrimental effect on the development and acceptance of psychiatry. The sheer horror of potentially sane individuals being condemned to asylums meant that subsequent asylum legislation and reform catered towards reforming the certification process and establishing managerial oversight and inquisition to ensure no one was wrongfully confined, rather than towards reforming the actual treatment of the insane. 
The lunacy panics themselves were only salved by the 1890 Lunacy Act, which stipulated that anyone who wished to have a patient confined privately was required to undergo a full and thorough examination and inquisition, with the absolute requirement that the certification of lunacy be proven beyond a reasonable doubt.[65] The strict enforcement of these regulations proved to the masses that the probability of wrongful confinement was now far more remote than the popular imagination had feared. Yet the same care or consideration was not given to those who were deemed credibly insane, as we can see in the continued deterioration of asylum care and treatment in the twentieth century. As such, hypocritically, the issue of lunacy transformed into an issue of sanity. The obsession with seeing madness, of seeing the ‘other’, had made the mentally unwell and deficient effectively invisible in the asylum discourse. Advocacy figures such as Perceval, who had been scathingly honest about his brief mental lapse, were side-lined in favour of the more palatable narrative of the innocently wronged individual whose unimpeachable sanity had been questioned, such as that of Weldon or Merivale. Sensationalised fiction and vapid journalism only served to enhance this focus on the discourse of wrongful confinement rather than delve into the much more controversial, and seemingly improbable, notion of humanising the mentally unwell. It seemed more palatable for Victorian society to fabricate and exaggerate conspiracies of wrongful confinement rather than to confront the fact that there were no inherent distinguishing characteristics between themselves and the ‘raving maniacs’ they were so afraid of. The lunacy panics, like most epidemics, only lasted a few years before fading into the popular subconscious and historical obscurity. That is not to say that the harmful stereotypes and stigma perpetuated by these panics did not remain entrenched in societal culture, but rather that the paranoia and hysteria eventually died down. Victorian society was so vapid and obsessive about popular culture and trends that its short attention span meant that, from its inception, the wrongful confinement scare had a set expiry date.[66] The oversaturation of wrongful confinement in the media meant there was an inherent limit on how much one individual could consume before eventually becoming bored with the same tired tropes. As such, by the end of the 1890s, wrongful confinement stories, placated by the legislation of the 1890 Lunacy Act, were no longer highly coveted in the media. They had fully run their course in the popular environment of gossip and scandal and no longer carried the ‘shock factor’ necessary to entertain the ravenous Victorian public. Thus, while the obsessive flame of the lunacy panics burnt bright in the nineteenth century, it eventually extinguished itself, fading into historical obscurity. Sarah Brady has just completed a BSc in History and International Relations at University College Dublin. Notes: [1] Peter McCandless, ‘Liberty and Lunacy: The Victorians and Wrongful Confinement,’ Journal of Social History, Vol. 11, No. 3, (1978), p. 366. [2] Barbara Fass Leavy, ‘Wilkie Collins’s Cinderella: The History of Psychology and The Woman in White,’ Dickens Studies Annual, Vol. 10 (1982), p. 94. 
[3] Joshua John Schwiesco, ‘’Religious Fanaticism’ and Wrongful Confinement in Victorian England: The Affair of Louisa Nottidge,’ Social History of Medicine: The Journal of the Society for the Social History of Medicine, Vol. 9, Issue 2 (1996), p. 167. [4] Andrew Scull, Madness and Civilisation: A Cultural History of Insanity from the Bible to Freud, from the Madhouse to Modern Medicine, (London, 2015), p. 134. [5] Michel Foucault, Madness and Civilisation: A History of Insanity in the Age of Reason, (New York, 1965), p. 100. [6] Ida Macalpine and Richard Hunter, George III and the Mad-Business, (London, 1969), p. 277. [7] Sander L. Gilman, Disease and Representation: Images of Illness from Madness to AIDS, (London, 1988), p. 19 [8] Simon Cross, ‘Laughing at Lunacy: Othering and Comic Ambiguity in Popular Humour about Mental Distress,’ Social Semiotics, Vol. 23, No. 1, (2013), p. 2. [9] Jennifer Eisenhauer, ‘A Visual Culture of Stigma: Critically Examining Representations of Mental Illness,’ Art Education, Vol. 61, No. 5 (2008), p. 15. [10] Laura R. Kremmel, ‘The Asylum’ in The Palgrave Handbook of Contemporary Gothic, ed. Clive Bloom (New York, 2020), p. 451. [11] Simon Cross, Mediating Madness: Mental Distress and Cultural Representation, (New York, 2010), p. 51. [12] Jane Shepherd, ‘“I am Very Glad and Cheered when I hear the Flute,”: The Treatment of Criminal Lunatics in Late Victorian Broadmoor,’ Medical History, Vol. 60, No. 4, p. 484 [13] Andrew Scull, The Most Solitary Afflictions: Madness and Society in Britain 1700-1900, (New Haven, 1993), p. 110. [14] Commissioners in Lunacy, 15th Annual Report, (London, 1861), p. 84. [15] Elaine Showalter, The Female Malady: Women, Madness and English Culture 1830-1980, (London, 1987), p. 105. [16] Eisenhauer, ‘A Visual Culture of Stigma,’ p. 16. [17] Susanna Bennett, ‘Representation and Manifestations of Madness in Victorian Fiction,’ Published Dissertation, The University of Waikato, (Hamilton, 2015), p. 39. [18] Mark Jackson, ‘Images of Deviance: Visual Representation of Mental Defectives in Early Twentieth Century Medical Texts,’ The British Journal for the History of Science, Vol. 28, No. 3, (1995), p. 322. [19] Cara Dobbing and Alannah Tomkins, ‘Sexual Abuse by Superintending Staff in the Nineteenth Century Lunatic Asylum: Medical Practice, Complaint and Risk,’ History of Psychiatry, Vol. 32, No. 1, (2020), p. 75. [20] William Griggs, Lunacy Versus Liberty: A Letter to the Lord Chancellor, on the Defective State of Law, as Regards Insane Persons, and Private Asylum for Their Reception: With Remarks, Original and Select, including the Author’s Own Case and Other’s, (London, 1832), p. 5. [21] Sarah Wise, Inconvenient People: Lunacy, Liberty and the Mad-Doctors in Victorian England, (London, 2012), p. 60. [22] Cross, Mediating Madness, p. 58. [23] Jackson, ‘Images of Deviance’, p. 322. [24] Showalter, The Female Malady, p. 87. [25] Sharrona Pearl, ‘Through a Mediated Mirror: The Photographic Physiognomy of Dr Hugh Welch Diamond,’ History of Photography, Vol. 33, No. 3, (2009), p. 298. [26] Cross, Mediating Madness, p. 131. [27] Wise, Inconvenient People, pp. 395-399. [28] Peter McCandless, ‘Dangerous to Themselves and Others: The Victorian Debate over the Prevention of Wrongful Confinement,’ Journal of British Studies, (1983), Vol. 23, No. 1, p. 95. [29] Georgina Weldon, How I Escaped the Mad Doctors, (London, 1879), p. 22. [30] McCandless, ‘Dangerous to Themselves’, p. 85. 
[31] Marie Mulvey-Roberts, ‘Fame, Notoriety and Madness: Edward Bulwer-Lytton Paying the Price of Greatness,’ Critical Survey, Vol. 13, No. 2, (2001), p. 123. [32] Scull, The Most Solitary of Afflictions, p. 5. [33] McCandless, ‘Liberty and Lunacy’, p. 367. [34] R.A Houston, ‘Rights and Wrongs in the Confinement of the Mentally Incapable in Eighteenth Century Scotland,’ Continuity and Change, Vol. 18, No. 3, (2003), p. 374. [35] David Noble, ‘On Certain Residual Prejudices of the Convalescent and the Recovered Insane,’ Asylum Journal of Mental Science, Vol. 3 (22), (1857), p. 433. [36] Cristina Hanganu-Bresch and Carol Berkenkotter, ‘Narrative Survival: Personal and Institutional Accounts of Asylum Confinement,’ Literature and Medicine, Vol. 30, No. 1, (2012), p. 26. [37] Hanganu-Bresch and Berkenkotter, ‘Narrative Survival,’ p. 27. [38] Wise, Inconvenient People, p. 251. [39] Wise, Inconvenient People, p. 353. [40] Weldon, How I Escaped the Mad-Doctors, p. 5. [41] Weldon, How I Escaped the Mad-Doctors, p. 17. [42] Mary Madden, ‘Stories about a Storyteller: Reading the Radical in Scenes from the ‘Disastrous’ Life of Georgina Weldon,’ Women’s History Review, Vol. 15, No. 2, (2006), p. 215. [43] Weldon, How I Escaped the Mad-Doctors, p. 21. [44] Mulvey-Roberts, p. 123. [45] Weldon, How I Escaped the Mad-Doctors, p. 20. [46] Thomas Szasz, The Age of Madness: The History of Involuntary Mental Hospitalisation Presented in Selected Texts, (London, 1973), p. 89. [47] Wise, Inconvenient People, p. 63. [48] Showalter, The Female Malady, p. 104. [49] Ray Nairn, Sara Coverdale, John F. Coverdale, ‘A Framework for Understanding Media Depictions of Mental Illness,’ Academic Psychiatry, Vol. 35, No. 3, (2011), p. 204. [50] Helen Small, Love’s Madness: Medicine, the Novel and Female Insanity 1800-1865, (New York, 1996), p. 182. [51] Cristina Hanganu-Bresch and Carol Berkenkotter, Diagnosing Madness: The Discursive Construction of the Psychiatric Patient, 1850-1920, (Colombia, 2019), p. 37. [52] Christine L. Krueger, ‘Agency, Equity, Publicity: Compos Mentis in Charles Reade’s Hard Cash and Lunacy Commission Reports,’ Nineteenth Century Literature Criticism, Vol. 275, (2013), p. 185. [53] Wise, Inconvenient People, p. 199. [54] Small, Love’s Madness, pp. 182-5. [55] Wilkie Collins, The Woman in White, (Surrey, 2016: first published in 1860), p. 24. [56] Collins, The Woman in White, p. 31. [57] Charles Davies, Mystic London, (London, 1875), p. 40. [58] Davies, Mystic London, p. 51. [59] Andrew Wynter, ‘Lunatic Asylums,’ Quarterly Review 101, (1857), p. 375. [60] Charles Reade, Hard Cash, (San Jose, 2017: originally published in 1863), p. 160. [61] Anne Grisby, ‘Charles Reade’s Hard Cash: Lunacy Reform Through Sensationalism,’ Dickens Studies Annual, Vol. 25, (1996), p. 150. [62] John Charles Bucknill, ‘The Newspaper Attack on Private Lunatic Asylums,’ Journal of Mental Science, Vol. 5, Vol. 27 (1858), p. 146. [63] Bucknill, ‘The Newspaper Attack,’ p. 153. [64] George H. Savage, ‘Our Duties in Reference to the Signing of Lunacy Certificates,’ British Medical Journal, Vol. 1, Issue 1266, (1885), p. 692. [65] Wise, Inconvenient People, p. 376. [66] McCandless, ‘Liberty and Lunacy,’ p. 368.
- The Political Uses of Art Patronage in the Timurid and Safavid Empires
The Timurid and Safavid empires were vast and in perennial fluctuation. This meant that the leaders of these polities required highly effective tactics to exercise control over large swathes of land and propagate their unique ideologies to the substantial population. While most rulers liberally employed the military to ensure stability, they also opted to use several soft power techniques, the most prevalent of which was engagement in a broad programme of art patronage. Between the late fourteenth and seventeenth centuries, art not only served an aesthetic purpose, but was also frequently politicised. Many historians, such as Thomas Lentz and Chad Kia, argue that it is vital to consider artworks created during these centuries within their historical context and that they must be viewed from both stylistic and socio-political perspectives to develop a full understanding of their motifs. From Timur to Shah Abbas, rulers and elites used a combination of architecture, literature, and painting to bolster legitimacy, intimidate internal and foreign opponents by displaying the empire’s strength, illuminate the government’s religious ideology, and manipulate symbols from pre-Islamic and Islamic history to propagate conceptions of ideal kingship that were then applied to contemporary rulers. The way these mediums were utilised differed slightly due to religious and governmental differences between the Timurids and Safavids, but generally, artistic production was controlled by the government and served a political agenda. The two dynasties developed new styles of art that helped rulers and, to some extent, officials “articulate their monarchical claims, religious commitments and personal glory.” [1] The Timurids patronised art to magnify their achievements, combine recently adopted Perso-Islamic literary and visual traditions with Timurid culture, and codify images that were utilised to express government ideologies. Architecture was one of the mediums used to demonstrate the empire’s power. It had several advantages over painting and literature, the most significant being that it could reach an elite audience but also could be understood by the illiterate laity. Timurid leaders knew the “psychological dimension of lavish public architecture,” and therefore placed great importance on construction.[2] Timur commissioned many structures that stood out due to their size, as they were larger than anything that had been built previously.[3] For example, after conquering Delhi in 1398, he returned to Samarqand and commissioned the immense Bibi-Khanym Mosque as a monument to his successful conquest.[4] The leader patronised many additional structures in the capital, including a series of grandiose palaces and madrasas, to create unforgettable symbols of his far-reaching power that could be seen by both the citizenry and his foreign counterparts, or their liaisons, when they visited the city. Ruy Gonzalez de Clavijo, who had unprecedented access to Timur’s court, writes that the ruler “had such a strong desire to ennoble the city that he brought captives to increase its population, especially all those who were skilful at any art.”[5] The use of architecture to display strength continued after Timur’s death in 1405, and the medium’s use was soon expanded to be a way for kings and elites to illustrate their piety. 
For example, in an attempt to appease the Muslim population of his empire and show that he was willing to uphold shari’a law as well as yasa, Shahrukh engaged in a large amount of generous religious patronage, sponsoring many mosques during the course of his reign.[6] Patronage of literature was also very common in the Timurid world, especially in historiography and poetry. Historical writing and genealogies were used as legitimisation devices, with writers formulating versions of the past tailored to contemporary rulers’ needs. By the time of Shahrukh’s reign, Perso-Islamic styles of governance and art were dominant, but Turkic influences were still prevalent in some elite circles.[7] Timurid leaders patronised literature to maintain their Turko-Mongol heritage alongside the Islamic culture that they had adopted after conquering the Iranian plateau. For example, Shahrukh’s commissions show his interest in emphasising the dynasty’s Chinggisid heritage. The ruler ordered the creation of the Majma al-tawarikh of Hafiz-i Abru, which was a continuation of the Jami al-tawarikh and connected Timur’s tribe to the lineage of Chinggis Khan.[8] In this volume, the Timurids are portrayed as upholders of the Turko-Mongol tradition of their ancestors and as valid heirs to the Islamic kings, thereby justifying Shahrukh’s rule to the two main groups that comprised the government. Poetry was also patronised by Timur and his successors to keep the dynasty’s Turko-Mongol heritage alive despite increased Islamisation. Timur and Husayn Bayqara encouraged the composition of both Persian and Turkish literature, thus contributing to a blending of the two cultures that would please the empire’s diverse population.[9] Clavijo observes that “ameers, in the courts of the Timouride princes, while they studied the literature of Persia, did not neglect the poetry of their native Toorki.”[10] The preservation of these original cultural elements was a way to evoke the “glory days” in the dynasty’s history.[11] All of this literary patronage was part of a project of legitimisation in which Timurid powerholders needed to participate to effectively manage their territory. Historiography and poetry were heavily patronised by rulers to gain the support of the conquered population while still retaining the backing of the original Turkic groups that brought the Timurids to power. Paintings were commissioned to spread a certain image of kingship and to show a ruler’s vision for the state. This type of art was, like literature, often meant for the elite rather than the laity, with the exception of Timur’s wall paintings in the palaces of the Samarqand gardens. Ahmed ibn Arabshah, while not a very reliable source on other aspects of Timur’s reign due to his personal dislike of the ruler, does give a vivid description of these artworks; “he had depicted his assemblies and his own likeness, now smiling, now austere, and representations of his battles and sieges…in India, Dasht, and Persia and how he gained victory…”[12] The laity and the elite were able to view them as they walked through the gardens, and would have had the opportunity to stare in awe at this public assertion of strength commemorating Timur’s various conquests and illustrating his power.[13] On a smaller scale, book painters worked to propagate the ruling class’s religious beliefs. 
Timur established the kitabkhana, or royal library, which formulated a series of symbols that were then repeatedly employed to create an aesthetic through which the government ideology could be articulated.[14] Using what Chad Kia calls “figure-types”—small figures in the background of paintings that seem out of place or unnecessary—these royally-sponsored artists included references to Sufism in their works. The image of “Majnun on Layla’s Tomb,” commissioned by Husayn Bayqara, features characters associated with Sufism, and was meant to propagate the faith and encourage belief.[15] The 1487 Mantiq al-tayr, also completed during Bayqara’s reign, features Sufi symbols and representations of its practices illustrated by figure-types; the originality of this iconography being placed alongside Attar’s writing suggests that the patron had influence over the content of the manuscript’s illustrations, and decided to politicise the works of art.[16] During the reign of the Timurids, a plethora of paintings and book-art was produced that allowed rulers to aggrandise themselves and spread the government’s religious precepts to the general population and the elites. While these items were, of course, aesthetically beautiful, most also had a hidden political agenda. The Safavid programme of art patronage mirrored that of the Timurids, although naturally they sometimes had to make alterations to the Timurid model to fit contemporary needs. Architecture was employed in a similar way; to create an illusion of power that was accessible to both lay and elite audiences. Tahmasp emphasised his position as the rightful shah through this medium after wresting control of the government from the Qizilbash. He constructed many splendid buildings, such as the palaces at Qazvin, all of which were meant to show royal precedence over the Turkic tribes and other potential rivals.[17] Abbas I followed suit during his reign, constructing numerous palaces in cities such as Isfahan and Kashan.[18] Like Tahmasp, Abbas required ways to display his power to potential internal enemies, specifically the recently subdued Qizilbash amirs, and to outside powers such as the Ottomans. Some of his grandest projects were located in Isfahan, the new capital on which he lavished funds in order to signal the return of Safavid strength after the civil war of 1576 to 1590.[19]Architecture was also utilised to promote Shi’ism, and patronising a religious building allowed a ruler or a member of the elite to show piety. Tahmasp ordered the construction of many mosques and shrines during his reign, including a particularly large building in Ardabil that was built in the late 1530s.[20] Abbas I continued this trend; Iskandar Beg Munshi writes in his History of Shah Abbas that in “most provinces of the empire, he left monuments such as mosques, theological seminaries, pious foundations…” and many shrines dedicated to the Imams.[21] All of these new religious structures were meant to encourage an influx of visitors, and an increase in conversions. Additionally, the scale of the buildings shows a concentrated effort by these two shahs to rearticulate Safavid religious authority along Shi’i lines after presenting themselves as spiritual heads of the Safaviyya Sufi order became impossible.[22] Literature, more specifically poetry, was another type of art that was used to propagate Shi’ism. 
Tahmasp encouraged the composition of religious poetry; Munshi writes that the ruler once said he was “not willing to allow poets to pollute their tongues with praises of me; let them write eulogies of Ali and the other infallible Imams.”[23]Whether this is true or not, exhortations of the Imams became very common during the Safavid period as leaders sought to show their commitment to the religion by commissioning them instead of odes to their own achievements. This prevalence can also be explained by the fact that during Tahmasp’s reign, patronage of the medium started to be monopolised by theologians, and writers were granted little freedom of expression by the late sixteenth century.[24]This shows that most poetry written during the sixteenth and seventeenth centuries in Iran had an inherent religious slant, as the views of its principal sponsors certainly affected its content. Patrons of literature who were religious figures or government officials usually possessed a political agenda when they commissioned poetry, due to their position in society and goal to spread Shi’ism across the empire. As during the reign of the Timurids, historical writing of the Safavid period was often politicised, with leaders patronising the genre to legitimise their rule. Through these works, they were able to present their ancestors as Twelver Shi’is and fabricate connections between early Safavid leaders and the Timurids. The Shi’isation of the history of the Safaviyya order was of great importance to the shahs, as the early Safavid shaykhs were not Shi’i nor descended from the Imams, but kings like Tahmasp and Abbas wished to be able to make these legitimising claims due to contemporary political realities. They commissioned chroniclers to alter their dynasty’s history, mainly through imitative writing; a practice which involved basing a work on a previously published book, and then modifying parts of the original narrative to suit the patron’s specific requests.[25] Amir Mahmud’s texts and Munshi’s History, written during the reigns of Tahmasp and Abbas respectively, refer to connections between the Imams and the Safavid shahs to position these two rulers as descendants of defenders of the faith.[26] Mahmud changes the traditional historical narrative, portraying Shaykh Safi and Isma’il as devout Shi’is who ruled on behalf of the hidden Imam; Ismail had “arisen from the horizon of the progeny of that [Safi’s] heaven of the imamate,” and he had “come into existence through the spiritual assistance of that manifestation of magnanimity…” [27] Descriptions such as these allowed Safavid shahs to gain more authority in the eyes of the newly converted Shi’ite elites, and provided legitimisation through fabricated religious connections. Historiography was also employed to link the Safavids genealogically to one of the great dynasties of the Iranian plateau; the Timurids. After the civil war, Abbas required a new form of legitimisation besides religious justification of rule, and so he began to patronise texts that emphasised Timurid ties and appealed to the Qizilbash’s Turkic background. The shah commissioned Qazi Ahmed to write genealogical works and altered histories, the most influential of which is the Khulasat al-tawarikh, to draw connections between Timur and the leaders of the Safaviyya order.[28] Munshi also writes about Timurid associations in his History, another of Abbas’s commissioned works. 
The author describes several meetings between Timur and Sadr al-Din Musa, and claims that the former predicted the rise of the dynasty.[29] While these histories played an important role in the Safavid rulers’ programme of legitimisation, the literary works that were the most significant politically were Ferdowsi’s Shahnameh and Nizami’s Khamsa. Kishwar Rizvi hypothesises that the representation of kingship portrayed in these two manuscripts combines Shi’i iconography with historical Iranian ideas of authority to create a paradigm of power that would be accepted by the nomadic Turks and sedentary Tajiks.[30] The original intention of Ferdowsi’s Shahnameh was to celebrate Iran’s history and discuss the values that should be embodied by kings.[31] In the sixteenth and seventeenth centuries, the work took on a new role as a political treatise, since it addressed the topical issues of legitimacy and just leadership.[32] Rulers turned to the “ancient heroes of this work for inspiration and consciously attempted to emulate their achievements.”[33] The Khamsa served a similar purpose, although it was not as influential as the Shahnameh. Through careful examination of the paintings in the commissioned Safavid copies of the two works, it can be seen that the artists merged the historical and mythical figures of these texts with contemporary religion to legitimise themselves to a wide audience, thus strengthening their position. Tahmasp commissioned a two-volume copy of the Shahnameh which was completed in the 1530s, and many of the images indicate that this work was not created simply for aesthetic reasons. The shah’s likeness appears in many of the illustrations, such as “The Court of the Gayumars” and “Isfandiyar Slays Arjasp and Takes the Brazen Hold,” and he is thereby equated with the famous heroes present in these stories.[34]This implies that this Shahnameh was meant to propagate an image of Tahmasp as a strong ruler who embodied the ideal kingly values lauded in the text. Ferdowsi’s work also underwent Shi’ification; the originally Sufi Timurid figure-types were appropriated by the creators of Tahmasp’s Shahnameh to promote Shi’ism. The illustration of “Haftvad and the Worm” includes Shi’ite imagery; water carriers, who are associated with the martyrdom of Husayn, can be seen in the background of this image.[35] Painted after the king’s public repentance in 1533 and his subsequent prohibitions on irreligious behaviour such as drinking, this work was clearly meant to generate governmental support for his decrees. This particular Shahnameh was not only used as internal propaganda; Tahmasp gifted this sumptuous book to Selim II to emphasise the distinctly Shi’i, more pious identity of the Safavids to the Sunni Ottomans.[36] Abbas also patronised an illustrated manuscript of Ferdowsi’s work which, like Tahmasp’s copy, was intended for use as a tool to propagate the image of himself as both an ideal Iranian king and a pious Muslim. 
As was the case with Tahmasp’s copy, Shi’i elements were introduced into the illustrations of the originally Zoroastrian manuscript; images of people performing rituals were included to promote stricter adherence to the religion in the court and to encourage more conversions.[37] The paintings show Abbas as the embodiment of historical Iranian kingly values through “suggestive portraiture,” a highly idealised, non-naturalistic style that became popular during the early seventeenth century.[38] This type of art was useful to the shah, as he wished to portray himself as a ruler who was similar to the heroes described in Ferdowsi’s work. He patronised a copy of the Shahnameh for the same reason as other Safavid leaders; to articulate his conception of kingship, which combined historical ideas about justice and legitimacy with strict adherence to Shi’ism, and to spread this idea to the elite Turks and Tajiks in an effort to gain their support. Nizami’s Khamsa was the other important work of fiction that was employed as a propaganda device in the Safavid world. Tahmasp commissioned an illustrated copy, which was completed in 1543 and features Islamic imagery, again changing the Zoroastrian nature of the original manuscript. Shi’ite symbols are a common occurrence in illustrations patronised by Tahmasp; he was a very devout ruler who likely saw himself as the divinely ordained leader of a Shi’i state, and he wanted his government officials to share that view.[39] For example, the illustration of “The Battle between Khusrau Parviz and Bahram Chubina” features Safavid Shi’i iconography; some figures wear the taj, the signifying Qizilbash headgear, and Shi’i inscriptions appear on the banners.[40] Khusrau Parviz resembles Tahmasp in the work; this was part of a conscious effort to compare the ruler with this ancient king, who is also portrayed as Shi’i to cater to contemporary ideas of piety and enable the Safavid ruler to present himself as part of a line of great Iranian heroes and as a defender of Shi’ism. It can be inferred from the inclusion of Tahmasp and Abbas’s likenesses in the Shahnameh and Khamsa that it was common for Safavid leaders to equate themselves with the historical and mythical characters of these two works to gain political support from a population that considered these texts an integral part of its culture. There was also an increased need for leaders to show themselves as devout Muslims, hence the inclusion of Shi’i imagery in Zoroastrian texts. These manuscripts posited the idea that ideal kingship was a combination of the traditional, pre-Islamic values that were espoused by the authors of the Shahnameh and Khamsa, and pious Shi’ism, thus creating a concept that would have appealed to the entire population of the empire. Patronage of pre-Islamic manuscripts for political reasons was not practised solely by Safavid rulers; Ibrahim Mirza commissioned an illustrated edition of Abdul-Rahman Jami’s Haft awrang after his uncle Tahmasp’s second repentance in 1556, and it is obvious that this event influenced the imagery used in Mirza’s copy of the work. For example, the image of “A Depraved Man Commits Bestiality” contains religious iconography meant to illustrate support of the shah’s new bans.[41] Gypsies, minstrels, antinomian Sufis, and clowns are placed at the centre of the image, even though they are only figure-types and therefore are not mentioned in Jami’s writing. 
By focusing on them, the artist encourages comparison between these minor characters and the depraved man described in the Haft awrang.[42] Given that laws against the people represented by the figure-types were enacted in the late 1550s because they were considered impious, this work should be regarded as propaganda made to show concurrence with Tahmasp’s decrees. The use of art patronage as a political tactic was widespread and was not confined to the rulers. Miniatures commissioned by elite members of government were rarely simply illustrations, but were also concerned with glorifying their patrons and showing agreement with the shah’s ideology in an effort to gain his support. Despite the vast amount of art commissioned for political purposes during the fifteenth through seventeenth centuries, it must be noted that some artworks held more personal meaning for their patrons than socio-political significance. Before Shahrukh, little independent patronage was allowed, but after he ascended the throne, the princely courts became more culturally active and artists began to produce works that represented their individual tastes.[43] Around the mid-fifteenth century, a strong tradition of apolitical art began to evolve alongside art’s continued use as propaganda. For example, Muhammad Juki patronised a copy of the Shahnameh that does not feature any discernible political images. The patron had some choice in the subject matter of the illustrations, and topics were often selected that celebrated him and his ancestors.[44] Elites also commissioned manuscripts featuring images that corresponded to their current situations; Juki chose to illustrate several uncommon parts of the Shahnameh, such as the story of Qubad, which were meant to address the problems he was facing at court and, according to Barbara Brend, were probably used for contemplation.[45] These artworks were deeply personal items, and the illustrations were chosen only because they had significance to the patron. The ruler and politically inclined members of the government, however, remained the largest sponsors of art in the Timurid and Safavid periods, and therefore much of the art produced during these centuries was political. Art patronised during the Timurid and Safavid periods obviously served an aesthetic purpose, but when analysed within its historical context, much of it gains an additional layer of political meaning. The patronage of architecture, literature and painting for political reasons was a fundamental part of governmental and court culture, and art was usually not commissioned solely for its visual appeal. Sponsorship of the arts in both empires was monopolised to a large extent by the royal family and elite members of government; it was a soft power tactic that enabled leaders to gain more control over their expanding domains, and allowed other officials to curry favour with the ruler and enhance their reputations. Architecture was used to influence members of government and the general population, as it could be seen and understood by all, while political literature and painting were usually restricted to an elite audience due to the absence of literacy among the laity. Altered histories and genealogies served as legitimisation devices, while poetry and paintings of various types blended pre-Islamic notions of kingship with contemporary religious trends, allowing leaders to aggrandise themselves and build their narrative of legitimacy in empires with diverse populations and different cultural affinities. 
Dorothy Greene is in her 4th year of an MA in Middle Eastern Studies and Persian at the University of St. Andrews. Notes: [1] Thomas Lentz and Glenn Lowry, Timur and the Princely Vision: Persian Art and Culture in the Fifteenth Century (Los Angeles, 1989), p. 20. [2] Robert Hillenbrand, ‘Safavid Architecture’, in Peter Jackson (ed), Cambridge History of Iran, Vol 6 (Cambridge, 1986), p. 825. [3] Lentz and Lowry, Timur, p. 43. [4] Ruy Gonzalez de Clavijo, Narrative of the Embassy of Ruy. Gonzalez de Clavijo to the court of Timour, at Samarcand, trans. Clements R. Markham (Cambridge, 1859), p. xlviii. [5] Ibid, pp. 170-171. [6] Beatrice Forbes Manz, ‘Temur and the Problem of a Conqueror’s Legacy’, Journal of the Royal Asiatic Society, 8:1 (April 1998), p. 35. [7] Lentz and Lowry, Timur, pp. 74, 78. [8] Beatrice Forbes Manz, ‘Temur and the early Timurids to c. 1450’, in Nicola di Cosmo (ed), The Cambridge History of Inner Asia (Cambridge, 2009), p. 195; Lentz, Timur, p. 99. [9] Clavijo, Narrative, p. lii. [10] Ibid. [11] Lentz and Lowry, Timur, p. 270. [12] Ahmed Ibn Arabshah, Tamerlane, or Timur the Great Amir, trans. J.H. Sanders (London, 1936), p. 310. [13] Ibid, p. 310. [14] Lentz and Lowry, Timur, p. 50, 206. [15] Chad Kia, Art, Allegory and the Rise of Shi’ism in Iran, 1487-1565 (Edinburgh, 2019), p. 54. [16] Kia, Art, p. 96. [17] Iskandar Beg Munshi, History of Shah ‘Abbas the Great Volume 1, trans. Roger Savory (Colorado, 1978), p. 205. [18] Ibid, p. 537. [19] Stephen Blake, ‘Shah ‘Abbas and the Transfer of the Safavid Capital from Qazvin to Isfahan’, in Andrew J. Newman (ed), Society and Culture in the Early Modern Middle East: Studies on Iran in the Safavid Period (Leiden, 2003), p. 151. [20] Andrew J. Newman, Safavid Iran: Rebirth of a Persian Empire (London, 2006), p. 32. [21] Munshi, History, pp. 535-538. [22] Newman, Safavid Iran, p. 57. [23] Munshi, History, p. 275. [24] Z. Safa, ‘Persian Literature in the Safavid Period’, in Peter Jackson (ed), Cambridge History of Iran, Vol 6 (Cambridge, 1986), pp. 954-955. [25] Sholeh A. Quinn, Historical Writing During the Reign of Shah Abbas: Ideology, Imitation, and Legitimacy in Safavid Chronicles (Salt Lake City, 2000), pp. 34, 64. [26] Munshi, History; Quinn, p. 74. [27] Quinn, Historical, p. 74. [28] Ibid, pp. 87-88. [29] Munshi, History, p. 16. [30] Kishwar Rizvi, ‘The Suggestive Portrait of Shah ‘Abbas: Prayer and Likeness in a Safavid “Shahnama”’, The Art Bulletin, 94:2 (June 2012), p. 234. [31] Barbara Brend, Muhammed Juki’s Shahnamah of Firdausi (London, 2010), pp. 8-9. [32] Lentz and Lowry, Timur, p. 126. [33] Ulrike Al-Khamis, ‘Khusrau Parviz as Champion of Shi’ism? A Closer Look at an Early Safavid Miniature Painting in the Royal Museum of Edinburgh’, in Bernard O’Kane (ed), The Iconography of Islamic Art (Edinburgh, 2005), p. 206. [34] Sheila Blair, ‘Reading a Painting: Sultan-Muhammad’s The Court of the Gayumars’, in Hani Khafipour (ed), The Empires of the Near East and India: Source Studies of the Safavid, Ottoman, and Mughal Literate Communities (New York, 2019), p. 528. [35] Kia, Art, pp. 144-145. [36] Blair, ‘Reading a Painting’, p. 531. [37] Rizvi, ‘Suggestive Portrait’, p. 230. [38] Ibid, p. 244. [39] Al-khamis, ‘Khusrau Parviz’, pp. 202, 205. [40] Ibid, p. 202. [41] Kia, Art, p. 153. [42] Kia, Art, p. 156. [43] Manz, ‘Temur’, p. 195. [44] Brend, Muhammed Juki, p. 137. [45] Ibid, p. 139.
- Gendered Slavery: the influence of gender in shaping slaves' narratives and experiences
When reading the narratives of formerly enslaved people, there are distinct differences in both the form of their narrative styles and the content of their narratives depending on their gender. The term ‘gender’ here is used very cautiously, as it must be recognised that enslaved people were not considered to possess any gender: they were treated simply as units of production (except for the ability to reproduce, which was based entirely on one’s sex organs, not one’s socialisation). However, while this is the case in the narrative of the European enslavers and society of the time, we cannot deny the humanness of the enslaved and the difference in experiences between enslaved men and women. In their narratives alone, we see key differences in what they choose to focus on and what they choose to centre their narratives around. We will start by exploring these themes and whether they are consistent across different narratives within each gender. Next, we will probe why the narratives differ in their themes, placing them within their wider contemporary societies. Lastly, we will analyse the content of the narratives and the fundamental differences in the typical lives of enslaved males and females, and how these are imperative to our understanding of the differences in their lives and experiences. Through the exploration of the narratives of Mary Prince and Olaudah Equiano, alongside other important slave narratives, we will see how experiences of work, relationships and hardship differed across the genders and how they influenced their narrative voices. The themes in the narratives contrast starkly according to the gender of their author. While there are many differences in narratives between genders, Winifred Morgan argues that the principal difference is that each gender places greatest importance on different aspects; while male slave narratives place great importance on literacy and the independence that comes with it, female slave narratives tend to centre around their relationships and how these were the key influence on their decisions and actions. On the subject of male slave narratives, Morgan reflects on how ‘Olaudah Equiano, James Pennington and William Craft stress how illiteracy disabled them while they were slaves, to satisfy as soon as possible, their hunger for education.’[1] Morgan’s analysis holds weight, as this theme is consistent within Equiano’s narrative as well as that of Frederick Douglass. While Equiano notes repeatedly how he ‘had long wished to be able to read and write,’[2] and later, ‘I always had a great desire to be able to at least read and write,’[3] Douglass also states how he ‘set out with high hope, and a fixed purpose, at whatever cost of trouble, to learn how to read.’[4] Both male narratives place importance on literacy, which is further cemented by the fact that both authors were able to compose their own narratives in the written word, as opposed to most of the female narratives, which were transcribed (typically by white abolitionists) even when their authors, such as Mary Prince, did have the capacity to write. Female narratives, in contrast, centred themselves around the relationships in their authors’ lives and the decisions made as a result. In her narrative, Mary Prince speaks greatly of her mother, her siblings and, later, her husband. On the second page of the narrative, she recalls ‘when I kissed my mother and brothers and sisters, I thought my young heart would break,’[5] highlighting the highly emotional aspect of being sold away from her loved ones. 
Rachel Banner notes how ‘after the sale, the Mary of the text compulsively circles the memory of her family’s separation,’[6] and this can be seen where Prince states ‘my thoughts went back continually to those from whom I had been so suddenly parted.’[7] Later in the narrative, her primary goal is to return to her husband: ‘I still live in the hope that God will find a way to give me my liberty, and give me back to my husband.’ Where her early life saw the continual remembrance of her family, her later life saw her longing to be with her husband. This emotional tie to the relationships of enslaved women is also visible in Scenes in the Life of Harriet Tubman: ‘my home, after all was down in Maryland; because my father, my mother, my brother, and sisters, and friends were there. But I was free, and they should be free.’[8] Tubman’s narrative, although centring around her own story and her quest to free others, returns to the memory of her family and friends and her desire to free them also. We see this also through the quotation from Prince in the title, as she claims to ‘know what other slaves feel,’ tying herself to the suffering of her counterparts. Although Morgan argues that female slave narratives take on more of a familial centrality while male narratives tended to focus on individuality,[9] this focus on relationships is not necessarily exclusive to enslaved females. In his narrative, writing about the desire for freedom, Frederick Douglass states, ‘I was not willing to cherish this determination alone. My fellow-slaves were dear to me.’[10] Contrary to Morgan’s argument, although Douglass places great importance on literacy, as previously stated, he also holds his counterparts in high regard and, in this circumstance, did not wish to strive for freedom alone, suggesting that the importance of relationships is not exclusive to one gender but is rather an important feature of the narratives of the enslaved more broadly. Yet Morgan contests this and allows for some flexibility in her argument, stating that ‘part of the appeal of the Narrative is Douglass’s invocation of the twin but opposing American themes of individualism and community.’[11] Douglass uses the two contrasting notions to place himself in this classic ‘American’ struggle. Jennifer Morgan suggests that Americanism was intrinsically linked to the idea of individualism and that it was ‘enmeshed in the intersectionality of discourses about race and gender. As an individuated “American” self came into being, key notions of master over property were mobilized that defined both whiteness and masculinity.’[12] By cleverly placing himself in the struggle between individuality and community, Douglass challenges the norms of ‘whiteness’ and ‘masculinity’, attempting to claim a place in free society. As we investigate further, it becomes clear that one reason for using the themes of individualism and community was to appeal to the predominantly white audience at the time of the narratives’ writing. Narratives such as those of Mary Prince and Olaudah Equiano hold fast to their purpose of narrating the horrors their authors faced and of establishing respect within free society. Between the genders, there were differences in the social conventions that white audiences required them to meet in order to empathise with them. 
Morgan asserts that ‘both male and female fugitives and ex-slaves strove to counter the racial stereotypes that bound them even in ‘free societies.’[13] While formerly enslaved men attempted to combat the stereotype of being uncivilised and unlearned, formerly enslaved women resisted the stereotype of being victims or sexual deviants. For this reason, male narratives such as Equiano’s and Douglass’ focus on their desire for, and then acquisition of, literacy, as it places them among educated male society. On Equiano, Carretta notes that ‘any autobiography is designed to influence the reader’s impression of the author, and often, as in the case of the Interesting Narrative, to affect the reader’s beliefs or actions as well.’[14] Carretta’s view supports the notion that the narratives were written with the intent of making white society, possibly ignorant of the true nature of enslavement, aware of and sympathetic to the trials of the enslaved, which readers were more likely to be if they had any amount of respect for the person from whom the narrative had come; as Carretta further adds, ‘manumission necessitated redefinition.’ Similarly, female narratives were also dictated by social convention in that respectable women were to be pure, maternal and feminine. However, it was almost impossible for an enslaved woman to have any of these characteristics, as the system built around enslavement prevented her from doing so; sexual violence against enslaved women was commonplace, babies were taken from mothers very early on to prevent maternal ties, and female field workers were expected to do the same work as men. As a result, the narratives of enslaved women endeavour to demonstrate their womanhood through other means, such as their care for others. For example, Prince’s adoration of both her mother and even her early mistress, Miss Betsey, demonstrates her familial ties to those around her. This is also the case in Sojourner Truth’s narrative, which describes her as a mother of five children, but also notes that ‘she rejoiced in being permitted to be the instrument of increasing the property of her oppressors!’[15] The authors of Truth’s narrative here attempt to show both her maternal womanhood and, simultaneously, the bleakness of her state of bondage. This description is an apt illustration of the debate amongst critics as to the authenticity and voices of slave narratives, particularly female narratives. We can question the true source of the quote as, even though the narrative claims to have been dictated by Truth, it is written in the third person when speaking of Truth’s life and in the first person when giving opinions and criticisms, suggesting that there is more than just an account of Truth’s life in the narrative. Similarly, in The History of Mary Prince, there exists a debate amongst historians as to the extent of the involvement of Thomas Pringle, the editor. Pringle’s preface states how the narrative was ‘pruned into its present shape; retaining, as far as was practicable, Mary’s exact expression and peculiar phraseology.’[16] Despite this claim, Allen suggests that ‘Pringle’s preface reveals the ways in which racism and imperialism influenced the narrative and how the narrative, in turn, reflects these social realities.’[17] While Pringle assures the reader of his limited alteration of the narrative, one is forced to question his use of words such as ‘peculiar’ to describe the phraseology of its supposed true author. 
Considering that the target audience was the white English society of the early to mid-1830s, and that the narrative was printed in an anti-slavery pamphlet alongside the narrative of Asa-Asa, one can deduce that its purpose was to aid the abolitionist movement in enlightening the likely otherwise uninformed white non-abolitionists. This raises the question of what was changed from Prince’s initial narration to fit that purpose. While this could also be the case with male slave narratives, there is a difference, as many of the male narratives, such as those of Equiano, Douglass and Northup, were written in their authors’ own hands and not dictated, thereby eliminating the transcriber’s voice that the female narratives had to accommodate. Lastly, aside from the form and voices of the narratives, it is important to analyse the differences in their content, as the experiences of enslaved men and women were inherently different. While both groups experienced the horror of the draconian practices involved in enslavement, enslaved women faced the added brutality of sexual violence. As Davis and Gates suggest, ‘she suffers all that the male suffers, and in addition miseries peculiar to herself.’[18] In her narrative, Harriet Jacobs says, ‘my master met me at every turn, reminding me that I belonged to him.’[19] Likewise, Mary Prince describes how her master ‘had an ugly fashion of stripping himself quite naked, and ordering me then to wash him in a tub of water. This was worse to me than all the licks.’[20] Neither of these accounts is explicit in nature, yet both still demonstrate the trauma that these experiences inflicted on the enslaved women. On this, Altink notes how American feminist scholars from the 1980s determined that ‘sexual exploitation of bondswomen was as much a means of control as the whip and it made female bondage worse than male bondage.’[21] One can surmise that the reason for the lack of explicit detail relates back to the objective of being respected in a society that only regarded ‘pure’ women as respectable. Aside from sexual violence, enslaved women were also valued lower than their male counterparts despite possessing the ability to reproduce, which became vital in the early nineteenth century after the importation of enslaved people ceased. Berry comments on how ‘women were valued for their fecundity, and traders made projections based on their “future increase,”’[22] and how ‘women’s capacity to bear children, their labor skills, and, in some cases, their (perceived) physical attractiveness remained their primary factors in their inspections, valuations, and sales.’[23] The differences in the valuation of enslaved males and females show how, despite enslaved people theoretically not being assigned binary genders, the attributes an enslaved person possessed ultimately determined their value to an enslaver, including certain attributes that could only be possessed by a female. In turn, this impacted their experiences as enslaved people, which was reflected in their narratives. In conclusion, narratives from formerly enslaved men and women differ in their themes, content and authenticity. Themes in male slave narratives tend to focus on the power of literacy and the individuality that comes from it, while female narratives concentrate more heavily on familial relationships. 
There are some exceptions to this binary distinction, however, such as Douglass and his desire to obtain liberty alongside his ‘fellow slaves.’ A significant reason for these choices of predominant themes was to appeal to their white audiences in terms of respectability and empathy. Their stories were more likely to be appreciated and accepted if they had elements that the white audiences could relate to and respect. For males, this came in the form of demonstrating their literacy and displaying the classic struggle of individualism versus community, especially in the case of North America. For females, greater importance was placed on their roles as mothers, wives and daughters and, where this was not possible due to the institution of enslavement, on their ties to and care for those around them. The existing debate amongst critics concerns the narrative voices present: how much comes from the authors themselves and how much is added by transcribers and editors. While both male and female slave narratives went through a process of editing at the discretion of the editor, many of the female slave narratives also carry the added, silent voice of the transcriber. Finally, a principal difference in the narratives between enslaved men and women is the content itself. While both groups experienced horrors, the added trauma of sexual violence and of being used as units of reproduction is present throughout the female slave narratives, marking them as quite different from those of the males. Japneet Hayer is currently doing a BA in History and Hispanic Studies at the University of Nottingham (3rd year). Full question when assigned: How does the personal 'slave narrative' of an African woman such as Mary Prince differ from that of Olaudah Equiano, etc.? Notes: [1] W. Morgan, ‘Gender-Related Difference in the Slave Narratives of Harriet Jacobs and Frederick Douglass’, American Studies, 35/2, (1994), p. 76 [2] The Interesting Narrative of the Life of Olaudah Equiano, Olaudah Equiano, (London, 1789), p. 133 [3] Ibid. p. 171 [4] Narrative of the Life of Frederick Douglass, an American Slave, Frederick Douglass, (Boston, 1845), p. 34 [5] The History of Mary Prince a West Indian Slave, Strickland, S., dict. M. Prince, ed. T. Pringle (London, 1831), p. 2 [6] R. Banner, SURFACE AND STASIS: Re-reading Slave Narrative via “The History of Mary Prince”, in Callaloo, 36/2, 2013, p. 306 [7] Strickland, S., The History of Mary Prince: A West Indian Slave, p. 5 [8] Bradford, S.H., Scenes in the Life of Harriet Tubman, dict. H. Tubman, (New York, 1869), p. 20 [9] Morgan, Gender-Related Difference in the Slave Narratives of Harriet Jacobs and Frederick Douglass, p. 83 [10] Frederick Douglass, Narrative of the Life of Frederick Douglass, an American Slave, p. 83 [11] Morgan, Gender-Related Difference in the Slave Narratives of Harriet Jacobs and Frederick Douglass, p. 80 [12] J.L. Morgan, Labouring Women: Reproduction and Gender in New World Slavery, (Philadelphia, 2004), p. 73 [13] Morgan, Gender-Related Difference in the Slave Narratives of Harriet Jacobs and Frederick Douglass, p. 76 [14] V. Carretta, ‘Olaudah Equiano: African British abolitionist and founder of the African American slave narrative’, ed. Audrey Fisch, The Cambridge Companion to the African American Slave Narrative, (Cambridge, 2007), p. 46 [15] O. Gilbert, Narrative of Sojourner Truth, A Northern Slave, Emancipated from Bodily Servitude by the State of New York, in 1828, dict. S. Truth, (Boston, 1850), p. 
[16] Strickland, The History of Mary Prince: A West Indian Slave (1831), p. i [17] J.L. Allen, ‘Pringle’s Pruning of Prince: The History of Mary Prince and the Question of Reputation’, Callaloo, 35/2 (2012), p. 510 [18] C.T. Davis & H.L. Gates, The Slave’s Narrative (Oxford, 1985), p. 22 [19] H. Jacobs, Incidents in the Life of a Slave Girl, ed. Lydia Maria Francis Child (Boston, 1861), p. 46 [20] Strickland, The History of Mary Prince: A West Indian Slave (1831), p. 13 [21] H. Altink, ‘Deviant and dangerous: Pro-slavery representations of Jamaican slave women’s sexuality, c. 1780-1834’ (2005), p. 271 [22] D.R. Berry, The Price for Their Pound of Flesh: The Value of the Enslaved, from Womb to Grave, in the Building of a Nation (Boston, 2017), p. 23 [23] Ibid. p. 25
- Khrushchev's new image of leadership post-Stalin
Introduction: In the wake of Stalin’s death, the Soviet Union witnessed a power struggle at the top of the Central Committee. Khrushchev ultimately prevailed, and the old powers such as Molotov and Malenkov were removed. During his premiership, Khrushchev sought to redefine leadership and de-Stalinise the Soviet Union until he was eventually ousted in 1964. Rather ironically, he was accused by the Party of attempting to cultivate a personality cult like that of Stalin. In his time as leader, he took advantage of the unique cultural moment of the late 1950s and early 1960s. A replacement for Stalin was needed to stabilise the turmoil his death brought. Khrushchev made use of technological advances to present a new, accessible leader. He had unprecedented access to mass media, broadcasting and travel compared to those before him, and he employed this to redefine leadership in the post-Stalin period. In response to the trauma the Soviet Union had faced, a more liberalised atmosphere was required, and Khrushchev understood that promoting this to an extent would consolidate his popularity. This does not, however, mean he turned to democracy, and there were limitations. Khrushchev saw himself as the cultural authority, and this is seen through analysis of three facets of his image: the Uncle, the Art Critic and the Travelling Salesman. Considering works by the likes of Condee and Larson together with contemporary newspaper publications and Khrushchev’s speeches, it is clear he took advantage of a changed society to present a new, more human image of leadership.[1] The Uncle: In the post-Stalin period, Khrushchev knew it would be impossible to recreate the powerful paternal role Stalin had crafted. To create a new image of leadership and separate himself from the atrocities of Stalin’s rule, while still maintaining the same level of authority, Khrushchev sought to establish himself as ‘the Uncle’ figure. First, he used the Twentieth Party Congress in 1956 to create an explicit break in leadership. The Secret Speech was ‘a condemnation of Stalin the person and not of the Stalinist system’, and here Khrushchev began to redefine leadership in front of the Party.[2] By pointing to ‘a grave abuse of power by Stalin’ and ‘the use of the cruellest repression, violating all norms of revolutionary legality’, Khrushchev sent a clear message of de-Stalinisation.[3] McCauley suggested possible reasons for giving this speech: either to lift the sense of fear and allow the now effective party to transform the world, or to undermine the credibility of his rivals.[4] Either way, the speech acted as the dawn of a new era of leadership, of apparent transparency and openness. Fedor Burlatskii, who held a position on the Central Committee in the late 1950s, wrote ‘Khrushchev appeared precisely as the people’s hope, the precursor of a new age’, suggesting his de-Stalinisation efforts, including the Secret Speech, were his biggest legacy.[5] This considered, the importance of public address in the cultivation of Khrushchev’s new leadership cannot be overstated. As Condee argued, the speech marked a ‘lateral shift in the dynastic pattern, making the innocence of the uncle in the sins of the father’.[6] By disavowing Stalin without discrediting the Stalinist system, Khrushchev established his authority without having to take on the burden of historical cruelties.
Carlson argued ‘Khrushchev was a warm, folksy contrast to Stalin, who was aloof and vengeful as the Old Testament God’.[7] This was certainly the image he cultivated in the public sphere. Having consolidated his new leadership within the Party, he made use of the press to present a more hands-on, personable leader rather than the removed father figure of Stalin. Comparing the leaders’ presence in Pravda illuminates Khrushchev’s accessibility to the press. In September 1964, two thirds of issues contained at least one photo of Khrushchev, compared with one of Stalin in September 1954.[8] Though this increase could be put down to a generally increased accessibility of photography, a larger number of Khrushchev’s works were also published. Seven of his addresses to USSR audiences and an interview with a Japanese delegation were released in the paper in one month alone.[9] This was a marked change from Stalin, who Larson argued made ‘a conscious decision of propaganda strategy not to associate … closely with domestic and foreign policy moves of transient significance’.[10] This increased presence in the media, showing Khrushchev’s direct involvement in affairs, can therefore be taken as an active attempt to present a new leadership. He used the news to style himself as the spokesman of the Soviet government, rather than a removed, god-like being. The Art Critic: Another facet of Khrushchev’s public image was the Art Critic. His role as the critic showed him to be an active part of cultural policy, as well as emphasising the growing, although still limited, diversity of culture. By publicly asserting himself as the ‘premier art critic and first among the viewers’, Khrushchev juxtaposed himself against Stalin as ‘the supreme architect of Socialist realism’, Reid suggested.[11] The 1962 Manège Affair is the clearest example of Khrushchev ‘the Art Critic’. The 30 Years of MOSKh exhibition, Condee argues, shows he cast aside Stalin’s image of the ‘great scholar’ and cultural authority, instead embracing ‘the identity of the cultural essayist of his time’.[12] The leader’s visit was highly publicised, as shown in Figure 1, and he used this to display his active role in culture.[13] He critiqued much of the artwork displayed and, famously, lost his temper over the abstract content. Khrushchev told one artist ‘it is a pity, of course, that your mother is dead, but maybe it’s lucky for her that she can’t see how her son is spending his time’.[14] He told another ‘We should take down your pants and set you down in a clump of nettles until you understand your mistakes. You should be ashamed. Are you a pederast or a normal man?’.[15] This rawness of opinion, expressed so vocally and publicly, assisted Khrushchev in moving away from Stalin’s image of leadership. The content of his commentary also helped cultivate a new image of leadership. As the above examples indicate, Khrushchev was not opposed to direct and harsh criticism. Shakhnazarov argued he had a ‘popular wisdom and peasant cunning… simplicity and openness in interactions with people’ but a primitive taste in art and little interest in serious music or painting.[16] His use of language and temper did much to humanise him in comparison with the more removed Stalin.
In many cases he was open about his lack of artistic understanding, at the Manège Exhibit declaring that one artwork looked ‘as though some child had done his business on the canvas’ and that ‘I don’t understand Picasso’.[17] Through the use of common, sometimes inappropriate language, Khrushchev styled himself as what Shakhnazarov described as ‘the peasantry on the throne’.[18] He established himself as “the people’s” authority on art, suggesting that if he did not understand, neither would anyone else. Condee argued that more is known about Khrushchev’s views on art than those of any other leader, and that he had ‘a public and official love for the provisional, the extemporaneous, the profane and the essayistic’.[19] This reveals much about his methods of cultivating an image of an accessible, human leadership. He styled himself as folksy and “of the people”, aiming to get his opinions to the public in a manner they would identify with. In this way, he took advantage of the increasingly liberalised atmosphere in the wake of Stalinism to establish his own brand of authority through media. The Travelling Salesman: The third avenue through which to analyse Khrushchev’s new image is his presentation as ‘the Travelling Salesman’ of socialism. Larson argued his ‘image was that of an active, earthly human leader who travelled widely… who voiced ideas and politics on a wide range of questions, who served as a spokesman of the regime’.[20] Of particular importance were the American Exhibition in Moscow and his trip to the United States in 1959. These instances show how Khrushchev used media opportunities to project an image of a new, open and more human leadership, both at home and internationally. While Hixson argued that ‘although well-conceived and smoothly executed the Soviet Exhibition had little hope of fundamentally altering American mass perceptions of the USSR’, it is clear this is not entirely true of Khrushchev’s personal tour of America.[21] The press coverage generated was crucial to the establishment of a new, accessible leadership within the Soviet Union, and of a respectable, peaceful government internationally. Pictures like Figure 2 were circulated, and episodes like this did much to boost Khrushchev’s positive reception in America.[22] During the trip, his personality shone through and allowed many Americans to view him more favourably. Cracking jokes like “see no horns” and bringing his family along with him meant the foreign press portrayed him as a human face of socialism, with a Gallup poll showing that half of Americans approved of his invitation to the States.[23] While there was some hostility, Khrushchev managed to captivate people’s interest. Saul Pett wrote in the New York Times, ‘we have been everywhere and done everything with Nikita S. Khrushchev. We have chased him… we have seen him tickle pigs, kiss babies’.[24] These examples show how Khrushchev made use of the specific cultural moment to ensure he had a new image of leadership post-Stalin. He used mass media and technological advancements to insert himself into domestic as well as public life, with millions watching television shows like Mr. Khrushchev Abroad on ABC in America.[25] Instead of an all-seeing, removed father figure, he was an accessible, passionate leader. Not only did this visit sell Khrushchev to the Americans, but it also helped his reception at home.
Garthoff argued ‘he saw his reception in America (he received full honours as head of state although he did not hold that title) as according himself personally, and by extension the USSR, equality with President Eisenhower and the United States’.[26] In this way, his ‘proactive, erratic diplomacy’ worked to send a message to the Soviet people that he, and therefore they, were respected internationally.[27] The “Kitchen Debate” is another example of Khrushchev using press and broadcasting to cultivate new images of leadership. The impromptu exchange with Vice-President Nixon at the American Exhibition in 1959 gained much publicity, being described by Time magazine as ‘peacetime diplomacy’s most amazing 24 hours’.[28] Khrushchev took advantage of the opportunity to corner the Vice-President and debate the merits of their ideologies in front of the world’s press, with Reid arguing that ‘in the context of “peaceful economic competition” the kitchen and consumption had become a site for power plays’.[29] Amid the sometimes tense, sometimes comical back and forth, with Khrushchev declaring ‘I know that I am dealing with a good lawyer… You are a lawyer for capitalism, and I am a lawyer for communism’, there is a particular emphasis on media.[30] The pair agreed to broadcast the episode in their respective countries, ensuring their messages reached the public. In doing so, the importance of new technology to Khrushchev’s public image was made clear. He saw personal media appearances as central to presenting himself as an accessible, human figure, but also to showing his strength by holding his own against Western leaders. This, in turn, reflected the greatness of the Soviet regime. Conclusion: In conclusion, Khrushchev’s ascension to leadership came at a cultural moment that allowed him to cultivate an open and human image in the post-Stalin period. Knowing he would be unable to replicate a Stalinist authority, he made use of new media technologies, like the mass press and television, to present a different type of leadership both nationally and internationally. Technological changes meant it was more possible than ever to assert himself in all spheres of life. In this sense, Khrushchev’s self-presentation as passionate and powerful was extremely tactical, despite what episodes like the shoe-banging incident at the United Nations in 1960 suggest. Following the great hardships within the Soviet Union, it was clear there needed to be a change in leadership direction, while maintaining the authority of socialist ideology. By becoming the Uncle, the Art Critic and the Travelling Salesman, Khrushchev established himself as different from Stalin, as present and human. He allowed increased press coverage and genuine, uncontrolled public appearances to spread this message. Internationally, Khrushchev used his visits to the US and his outbursts to create a personable image of Soviet leadership, fostering a more favourable view of the USSR and socialism abroad. At the height of the Cold War and nuclear brinkmanship, attempting to move away from total-war rhetoric and promote peaceful coexistence through leadership was a clever and tactical move. Daisy Gant has just completed her 3rd year of a BA in History at University College London (with a year abroad at the University of Pennsylvania). Full title when assigned: How did Khrushchev cultivate a new image of leadership in the post-Stalin period? Notes: [1] N. Condee, 'Cultural Codes of the Thaw' in W. Taubman, S. Khrushchev & A.
Gleason (eds.), Nikita Khrushchev (New Haven: Yale University Press, 2000), pp. 160-176; T. B. Larson, 'Dismantling the Cults of Stalin and Khrushchev', The Western Political Quarterly, Vol. 21, No. 3 (1968), pp. 383-390. [2] M. McCauley, The Khrushchev Era 1953-1964 (London: Longman, 1995), p. 42. [3] N. Khrushchev, The Cult of the Individual - Part 1 (1956) [4] McCauley, The Khrushchev Era, pp. 43-44. [5] As quoted in D. Nordlander, 'Khrushchev's Image in the Light of Glasnost and Perestroika', The Russian Review, Vol. 52, No. 2 (1993), p. 252. [6] Condee, Cultural Codes of the Thaw, p. 163. [7] P. Carlson, K Blows Top: A Cold War Comic Interlude, Starring Nikita Khrushchev, America's Most Unlikely Tourist (New York: PublicAffairs, 2010), p. 26. [8] Larson, Dismantling the Cults of Stalin and Khrushchev, p. 384. [9] Ibid. p. 385. [10] Ibid. p. 358. [11] S. E. Reid, 'In the Name of the People: The Manege Affair Revisited', Kritika, Vol. 6, No. 4 (2005), p. 673. [12] Condee, Cultural Codes of the Thaw, p. 171. [13] G. Yelshevskaya, The Thaw and the 1960s: The Birth of the Underground (2016) [14] N. Khrushchev, Khrushchev on Modern Art (1962). [15] Ibid. [16] G. Shakhnazarov, 'Khrushchev and Gorbachev: A Russian View' in W. Taubman, S. Khrushchev & A. Gleason (eds.), Nikita Khrushchev (New Haven: Yale University Press, 2000), pp. 301-320. [17] Khrushchev, Khrushchev on Modern Art. [18] Shakhnazarov, Khrushchev and Gorbachev, p. 311. [19] Condee, Cultural Codes of the Thaw, p. 170. [20] Larson, Dismantling the Cults of Stalin and Khrushchev, p. 384. [21] W. L. Hixson, Parting the Curtain: Propaganda, Culture, and the Cold War, 1945-1961 (London: Palgrave Macmillan, 1998). [22] 'This week in history: Soviet leader Khrushchev visits the United States', Deseret News (2015) [23] Quoted in L. J. Nelson & M. G. Schoenbachler, Nikita Khrushchev's Journey into America (Lawrence: Kansas, 2019), pp. 4-5. [24] Quoted in ibid. p. 2. [25] Ibid. p. 3. [26] R. L. Garthoff, Soviet Leaders and Intelligence: Assessing the American Adversary during the Cold War (Washington DC: Georgetown University Press, 2015), p. 24. [27] Ibid. p. 27. [28] Hixson, Parting the Curtain, p. 349. [29] S. E. Reid, 'Cold War in the Kitchen: Gender and the De-Stalinization of Consumer Taste in the Soviet Union under Khrushchev', Slavic Review, Vol. 61, No. 2 (2002), p. 233. [30] R. Nixon & N. Khrushchev, 'The "Kitchen Debate" (July 24, 1959)' in R. Perlstein (ed.), Richard Nixon: Speeches, Writings, Documents (Princeton: Princeton University Press, 2008), pp. 88-96.
- Between Control and Compromise: The Establishment of Spain’s American Empire
The Spanish Empire at its height spanned most of the American continent, from California to the southernmost reaches of Chile, and contained millions of inhabitants of incredible ethnic diversity: white Spaniards, the criollos, mestizos, mulatos, zambos, and, of course, the native Amerindians; a vast colonial empire established in the course of three centuries. Newer scholarship[1] has attempted to explain this success of Spanish empire formation as the result of a continuous process of ‘consensus building’, namely the cooperation and constant negotiation of power, loyalty, and mutual benefits between the imperial metropolis in Madrid and the elite which emerged in the American localities, hereafter referred to as the creoles. In this way, conflict was largely avoided and given a more productive expression in the various councils the locals used for the communication of their grievances to the Spanish king, such as the audiencias and the cabildos, with outbursts of violence being few and far between, and always seen as the last resort.[2] That is, the same historiography claims, until the supposed watershed moment of the 18th century, when the Bourbon dynasty ascended to the Spanish throne and instituted a series of absolutist reforms which sought to undo the previous centuries’ consensus building, and bring the Americas under much stricter royal control, largely removing the elites from the imperial equation.[3] Yet, the story of the conflict between the Spanish centre and the periphery, be it overseas or in the Iberian Peninsula, is a story as old as the Spanish Crown itself, with the Catholic monarchs, Isabella I and Fernando II, having found themselves struggling to keep the disparate peninsular domains together, and the subsequent Spanish rulers trying to consolidate their authority and bring their realm closer together.[4] The historical reality is, as demonstrated above, not as straightforward as a story of two centuries of consensus (16th and 17th) versus one of absolutism (18th), and this article’s objective will be to examine the extent to which the above narrative is accurate, and the points where it falls short of presenting a complete image of the political status quo in Spanish America. In doing so, the analysis will assume a mainly temporal dimension, moving through the Conquest and establishment in the 16th century to the consolidation of the 17th and the Bourbon Reforms of the 18th, without suggesting that the centuries were definite markers of change, only points of reference. At the same time, in recognising the vastness of the empire and the varying experiences of its parts, as well as the fact that reform in the Spanish Empire was never implemented universally,[5] I will frequently be focusing on specific areas as case studies, such as New Spain and New Granada. In the end, as I will demonstrate, the conclusion emerges that the Spanish Empire in the Americas was constructed through a consensus which was not applied universally or to the same degree everywhere, and that the Bourbon Reforms of the 18th century, though certainly successful in some aspects, such as the restructuring of the colonial military and the reassertion of peninsular dominance, were the latest in a series of attempts by the Crown to test the limits of its authority in the Americas over the centuries.
The Bourbon Reforms were simply the most blatant and comprehensive of the latter, and caused intense reactions in most American possessions, eventually forcing the Crown to negotiate consensus once more, albeit on a better footing than the preceding Habsburg era. To begin my examination, I will have to focus on the establishment of the Empire in the Americas, largely in the 16th century, kickstarted by Columbus’ occupation of the Caribbean islands, but exponentially heightened with the landing of Cortés on the mainland in 1519 and the Conquest of Mexico (1519-1521). This was, undeniably, a conquest, and the violent eruption of the Spanish conquistadors in the world of the Amerindians does not, at first sight, summon notions of ‘consensus building’. Yet, in what would become a main theme of Spanish interactions, first with the natives and then with the creoles of the New World, the conquistadors of Cortés, dramatically outnumbered, and with only marginal technological superiority, desperately required the cooperation of the locals to succeed.[6] Therefore, contrary to the traditional image of the domineering conquerors, the conquistadors immediately sought the assistance of the Nahua tribes who were suffering under the yoke of the Aztec Empire which Cortés wanted to subdue in the name of the Crown. They did so by offering them honours, titles, gold, and privileges, such as the right to ride a horse or wear Spanish clothes and bear Spanish arms, or, later on, conquered land, exemptions from having to pay tribute or from the harsh encomienda service.[7] Cortés constantly emphasised to his men the need for good behaviour towards friendly natives, and the importance of formal agreements over looting.[8] The Spaniards depended on the natives for logistical, material, and military support, and the natives saw in the newly arrived foreigners ways to benefit from the upset local balances, thus creating a proto-consensus based on mutual profit. This understanding was nothing new for either party involved. For the conquistadors, fresh from the fires of the Reconquista, this was standard conquest practice: the co-opting of local elites with the promise of privileges, lands, and titles in order to forge alliances which would then facilitate the conquest, in what Oudijk and Restall coin the ‘stepping-stone pattern’.[9] Accordingly, for the natives of Mexico and Mesoamerica, a characteristic aspect of the precolonial period was the division of land by the local potentates among their captains, who formed an allied elite based on ethnic or marriage ties, or bonds of alliance; this was the way the native elite enlarged their control of land and privileges, and a method widely applied during the expansion of the Aztec Empire in the previous centuries.[10] Therefore, though divided by language and culture, the Spaniards and natives formed a consensus based on common experiences of expansion, in the name of opportunistic gain.
Where consensus was not employed, disaster soon followed, as with the expedition of Nuño de Guzmán into New Galicia in 1529, whose infamously harsh treatment of the natives led to non-cooperation and the desertion of native allies, leaving the expedition decimated and doomed to fail as attrition and local resistance soared.[11] In contrast, Antonio de Mendoza, New Spain’s first viceroy, campaigned in the same region just a decade afterwards, but, having treated his native allies with respect and recognition of their status, was met with much greater success, as native assistance proved crucial.[12] The Tlaxcalans provide a model for what happened with many other tribes: first facing the Spaniards on the battlefield (non-consensus) and then, due to their common enmity towards the Aztecs, concluding an alliance with Cortés (consensus), proving to be the most important native ally of the Spanish in the conquest of Mexico.[13] Cortés had made substantial promises to them, which the Tlaxcalans, as did other natives, recorded and remembered long after the conquest. Beyond the short-term opportunistic coalition of the early conquest, Tlaxcala in the ensuing decades, referring to its alliance with the Spaniards, sought to enhance its position within the now firmly established realm of New Spain by negotiating privileges as late as the 1560s, nearly half a century after the Conquest, such as the exemption of all Tlaxcalans from the tribute all other natives had to pay, the protection of its members from the encomiendas, and the prevention of Spanish settlement within its lands.[14] In response to Spanish requests for the use of the natives for hard labour, in 1532 Queen Isabella issued an edict exempting the Tlaxcalans and other native allies from the payment of tribute and forced labour, recognising them as valued vassals of Spain, while simultaneously requiring them to aid in reconstruction projects of their own ‘goodwill’, effectively forcing them into negotiations with the same conquistadors who wanted to use them as slaves, who would now have to reach a consensus if they were to procure native assistance.[15] A new native elite had been created, one which confidently asserted its rights until well into the 17th century, by which time it had gradually become clear that the Spaniards no longer required native assistance, the era of conquests being largely over. The Hispano-Amerindian consensus was broken by the conquerors themselves, who ignored Amerindian petitions and subjected most of them to the tribute-paying system in place.[16] Regardless, Spanish presence in Mexico had only been established thanks to the process of consensus-building between the conquistadors and the locals, no matter how much the Spaniards prided themselves on having been the sole conquerors and inheritors of the new lands. Beyond the Amerindians, there was, however, another elite, created in the fires of the conquest and reinforced in the years afterwards, that would cling onto the idea of consensus much more actively than the disillusioned natives of Mexico: the conquistadors, and the Spanish settlers after them, who regarded themselves as much a conquering race as the warriors who had preceded them. Having come from Spain, the conquistadors firmly believed in the idea of reward for valour in battle.
In the glory days of the Reconquista, Christian warriors who had distinguished themselves on the battlefield received titles and honours from their sovereign, and became part of the noble elite of the lands they had helped conquer.[17] The conquistadors of the Americas believed that their feat had been no less great than the Reconquista, and that they deserved to be repaid in full for their service, particularly as many of them, Cortés included, had incurred substantial debts to pay for their expeditions in the first place.[18] As a result of their private enterprise, they emerged as elites from the conquest, most of them rich with Amerindian gold, land, and slaves in the form of the encomiendas, all of which they used to consolidate their position in the New World. Many of them, little better than adventurers with no hope of a breakthrough into the Old World Spanish nobility, desired to become great men in their own right in the Americas.[19] At the beginning of the Conquest, their wishes were fulfilled as they became gobernadores and adelantados in the absence of any other authority, but soon enough they were able to channel their influence through the newly established audiencias,[20] as powerful encomenderos. Before long, a Viceroy had been appointed in New Spain, Antonio de Mendoza, who was meant to consult the audiencia before making any decision which would affect the newly formed elite, in what could be interpreted as the beginning of consensus building, in the form of the constant negotiations between the various institutions of the elites in the Americas and the peninsular appointees of the Crown. As for the conquistadors themselves, they remained firm believers in the doctrines that had characterised Spanish, and specifically Castilian, rulership since its conception: the idea of a contractual relationship between the monarch and his vassals, whereby both were part of one body, the corpus mysticum, which, if it were to operate in a healthy manner, needed to do so on the basis of consensus.[21] Inspired by the Siete Partidas of Alfonso X, the conquistadors held that for as long as the King did not fall into tyranny and ruled on the basis of natural and divine law, they would be his obedient and loyal subjects.[22] These were notions that they brought into the first viceroyalties established in the Americas, those of New Spain and Peru, and they would become intrinsic to how the creole elite which eventually emerged would understand its relationship with the Crown across the Atlantic: as one built on consensus and negotiation. When this consensus was not respected, protest and violent reaction were soon to follow. In the case of the conquistadors, when King Carlos V, influenced by Bartolomé de las Casas and his work on the maltreatment of the Amerindians, passed the New Laws, parts of which essentially stripped the conquistadors of their main source of income, the encomienda, the reactions were instantly negative.
In Peru, the encomenderos, led by Gonzalo Pizarro, rose up in arms against the Crown’s representatives, slaughtering the Viceroy and denouncing the Crown’s decision as a tyrannical one, in direct breach of the contractual relationship between monarch and subjects.[23] The regime in Madrid, in another characteristic response in the face of fierce colonial reaction to its more interventionist policies, replied with offers of amnesty and concessions with regard to the application of the New Laws, granting the rebels all that they asked, and an outlet for their grievances through the institutional framework still in development.[24] Pizarro refused and moved towards the establishment of an independent kingdom in Peru; thus, his hubris lay not in rebelling against the Crown’s policies, something which could be smoothed over by institutional consensus, but in seeking to remove himself from the institutional framework entirely. His men having deserted him, he was eliminated by loyalist forces. In New Spain, on the other hand, the astute Mendoza applied what would come to define consensus-rule in the Americas, the doctrine of ‘se obedece pero no se cumple’, recognising the validity of the royal command but refusing to execute it until further negotiations with the local elite were conducted and the decision appealed, thus avoiding violence entirely.[25] Thereafter, this was the way the empire would be consolidated in the Americas: by allowing local elites to appeal the decisions of the Crown through the cabildos and the audiencias, negotiating which concessions each side would grant the other, and thus creating an arena for effective conflict resolution. The creole elite’s reaction to unpopular policies and royal servants would be mitigated by the staunch belief that across the ocean awaited a king who would listen to their grievances and respond accordingly, and the king in Madrid was content to see his authority recognised through the periphery’s appeals to his person. Any resistance to royal policy would take place within a universally understood institutional framework, and Pizarro’s rebellion would be seen as the exception, not the rule. Until the third decade of the 17th century, this balance was maintained fairly well as the empire was being expanded and consolidated throughout the rest of Latin America, and the upper echelons of the increasingly mixed population coalesced into the creole elite one comes to recognise in the mid- to late periods of the Spanish Empire. As elites were and are wont to do, they sought to expand their powerbase and wealth, as well as the limits of their influence, from New Spain in the north to New Granada and Peru in the south, but their efforts had hitherto been confined to the municipal level, as public offices were as yet out of their reach.[26] Unable to access regional power in any other way, the urban families of New Spain, New Granada, and Peru engaged in extensive intermarriage as well as marriage with the peninsulares who came from Spain to fill important posts in the colonial administration, creating a vast network of interconnected families of elites across Spanish America which furthered the consolidation of the empire through consensus, this time one based on marriage.[27] At the same time, the various religious orders, such as the Jesuits, were also creating a powerbase of their own in the Americas through a vast administrative and clerical apparatus built around the cause of the conversion of the natives.
The communities of new converts they created were based on the already existing Amerindian ways of communal organisation, which also possessed important elements of consensus building.[28] From the mestizos to the Amerindians to the white peninsulares, the consensus-based framework of the Spanish Empire in America worked to incorporate all as it consolidated itself. This balance, however, was not to last. Increasingly, from the late 16th century onwards, the Habsburg monarchy in Madrid faced mounting financial difficulties and began putting public offices up for sale, beginning with notarial posts and eventually extending to all local offices. As the 1630s wore on, Felipe IV began selling treasury offices as well to the wealthy creoles who could afford them, and, with the situation intensely exacerbated by the disastrous Thirty Years War, the sales would only grow more frequent and more comprehensive. Finally, in 1687 posts within the audiencias themselves were put up for sale, the last line of peninsular defence against creole ambition.[29] The flood gates had been opened, and the elites gradually infiltrated almost the entirety of the Spanish imperial apparatus in the Americas, outside the viceregal posts. Fraud and corruption soared as the criterion for office was now wealth, not competence. Yet, the most important consequence of the 17th century’s gradual loss of control was the fact that whereas before, consensus had been used to establish and consolidate the empire in the Americas, it was now arguably used to dismantle it. This did not mean that the creole elite sought to distance itself from Spain through its actions but rather, concerningly, that it did not seem to care about whether its actions brought about that result or not. The elites, using the networks of consensus they had already established in the preceding decades, strove to increase their wealth and influence by keeping as much of the colonial capital as possible in America, and by defending local creole interests before the demands of the central authorities in Madrid. There was never an outright challenge to Spanish government, but control over the colonies was slipping away all the same. Thus, towards the end of the 17th century, a powerful creole elite had emerged which the Crown increasingly needed to placate in order to achieve good and effective government, to the extent that this was possible, in the Americas. It was a different kind of consensus, but consensus nonetheless. Yet, as a counter-argument to the idea of decline of central power, one must remember, as Christopher Storrs correctly points out, that while, due to elite empowerment, more wealth remained in America, such as in the case of New Granada which only exported gold,[30] it was the Crown which decided how it was to be spent, the Crown which set the priorities of colonial administration through the peninsular viceroys it appointed, and the Crown to whom the elites still looked for validation and accumulation of honours.[31] As he suggests, perhaps optimistically, though this was not a powerful interventionist monarchy, its softer-touch approach meant that the colonies and the elites which represented them were happy to remain within the system of consensus which the increasingly decentralised empire provided, rather than outside it, even providing private militias for its defence.
Even in those territories which were violently wrested away from it, such as those lost to Louis XIV of France, creole loyalty remained staunchly with the Spanish Crown which had allowed them to grow so powerful and prosperous.[32] In further contradiction of Lynch’s and McFarlane’s arguments about the 17th century decline of imperial authority, Alejandro Cañeque suggests that the process was not the same in all parts of the Spanish Empire, and that in New Spain, for example, the viceroys retained a very active role in the inner workings of the cabildo.[33] Moreover, he holds, the sale of offices such as that of the oidor was quite rare, at least in New Spain, and was met with resistance from both the viceroy and the other oidores, while the New Spanish cabildo, even at the height of its power, lacked sufficient members because the local elites were not interested in purchasing the positions of regidores, as the viceroy assured the king in 1693.[34] The accountability that came with the posts, which more often than not led to imprisonment at the slightest sign of fraud, had made them undesirable to the local Mexican elite. In the end, the 17th century process of ‘decline’ was not one of increasing decentralisation, but an ongoing struggle across the empire between various groups of creole and peninsular elites over the nature of the monarchy: whether the king’s authority stemmed from both his person and the consensus he held with his subjects, as the ‘constitutionalists’ maintained, hearkening back to the conquistadors’ belief in the corpus mysticum, or whether the subject elites were only there to advise the king, who would make the final decision on all matters, as the ‘absolutists’ thought.[35] The former acted as if their proposition were true, seeking to acquire the gravitas they believed justly belonged to them, whereas the latter sought to limit them from doing that very thing. It was the latter who seemed to triumph as the dust settled after the War of the Spanish Succession and the new Bourbon dynasty ascended the throne in the person of Felipe V. Having witnessed first-hand the destructive factionalism of the various provincial elites hostile to the new French dynasty, which had turned peninsular Spain into a battleground, the Bourbons governed with a deep distrust of the creole elites in the Americas. Imbued with a regalist and centralist agenda, emulating the absolutist tendencies of the other monarchies of Europe, and with a deep sense of imperial decline and corruption, the Bourbon kings immediately sought to bring about drastic change in the administration of the Americas, more actively than any dynasty before them.[36] Their goals centred on the minimisation of local power and the closer integration of the American elites; in short, on bringing the overseas empire back into the fold after a century of seemingly loosening authority. Change did not come immediately. In 1750, under Fernando VI, it was decreed that all sales of offices in the Americas were to end, and that new appointees would be drawn from educated and, most importantly, loyal peninsulares[37] who would be far removed from local interests, thus placing the interest of the state above that of the individual.
General inspectors were sent to the Americas, starting with Cuba in 1764, in the form of the intendentes and the intendencias, in order to remedy the fragmentation of authority and replace the corregidores, widely regarded as corrupt and in their majority creoles.[38] Gradually, the reforms proved successful in wresting power within the assemblies back from the creoles in favour of the peninsulares by the late 18th century. Though increasingly anxious, the creole elite put their faith in the old system of consensus and sought to satisfy the Crown’s new demands for a peninsular-educated professional class of administrators by sending their sons to study in Spain,[39] and also, employing the method they knew all too well, by arranging marriages with newly arrived peninsular officers as another channel into power.[40] The Crown attempted to curtail this by restricting marriages from 1776 onwards, but after witnessing the North American Revolution, it assumed yet again a more consensus-based approach, granting the creoles what they wanted in the form of honours such as the distinction of the Orden de Carlos III, or other titles of nobility. Unsurprisingly, the provinces where most titles had been conceded were also the most loyal ones.[41] Most crucially, it seemed as if the rhetoric in Madrid had changed, moving entirely away from consensus and towards absolutism, and once the creole elites realised this, much like the conquistadors of earlier centuries, they reacted very negatively, first with petitions that went unanswered and then with armed insurrection. Characteristic were the 1765 Quito riots, which assumed a distinctly anti-peninsular character, though never an independentist one, the 1780 Túpac Amaru rebellion in Peru, and the 1781 Comuneros Revolt in New Granada, all in response to the increasing demands of the central administration in the form of trade monopolies and rising taxes, as well as the exclusion of the creole elite from positions of authority in their own patrias.[42] The Túpac Amaru rebellion served as a unifying cause for the forces of the Crown and those of the creoles, due to its mainly Amerindian character, and highlighted the advantages of effective cooperation between centre and periphery, as the rebels were mainly put down by colonial regiments.[43] More importantly, with the New Granadan Comuneros, the local expression of the Bourbon regime, in the form of the peninsular-manned audiencia, was forced into a compromise, albeit a counterfeit one. The rebel creoles had so much faith in the consensus system that they simply accepted the guarantees that their demands would be respected by the local audiencia and dispersed peacefully en masse.[44] Beyond localised dissent, however, the main cause was that the unwritten ‘constitution’ between Crown and creoles had been broken, and thus armed revolt was their only option in the absence of institutions through which such grievances could reach the king. These rebellions, though largely isolated in character, caused great anxiety in the Spanish administration and led to a reconsideration of policy, whereby most reforms were retracted, in the case of Quito as early as 1765, after the revolt there. Consensus, it seemed, could indeed make or break the Spanish Empire in the Americas. It was particularly in the military context that this became most obvious, as the Bourbons sought to reform the defence of their colonies against British incursions.
With capital unavailable to fund professional local militias, the Crown was once again forced to turn to the creole elites, particularly in Cuba, New Granada and Peru, enticing them with titles, honours and commercial privileges which would cover the cost of the maintenance of the new regiments.[45] Once the familiar consensus-structure had been re-established, at least in the military arena, the creoles were all too happy to oblige, arming and staffing effective colonial units and making up more than 70% of colonial military officers by the end of the 18th century.[46] In conclusion, having examined the early period of the Spanish Empire in the Americas, it becomes clear that it would have been established at a much slower pace and on a much more reduced scale without the consensus-building which took place between the conquistadors and the Amerindian elite they encountered. Moreover, in its subsequent consolidation in the 17th century, the dynamics of the earlier conquest were thoroughly solidified, as between the Spanish authorities and the American locals, thereafter principally the creoles, there existed a substantial margin for mutual benefit which served as a basis upon which to build long-lasting and more intimate bonds, even as each side continued to try to increase its authority at the expense of the other. Finally, with regard to the impact of the Bourbon Reforms, one can define it neither as minimal, as peninsular dominance was undoubtedly re-established in the audiencias, nor as substantial, since, despite the absolutist rhetoric laden with Enlightenment political philosophy, the reforms displayed an important degree of continuity with earlier attempts at similar changes, such as the New Laws of Carlos V aimed at cutting conquistador ambitions short, and, in the end, led to new negotiations of consensus after having tested yet again the limits of the Crown’s authority over its American territories. Xenofon Kalogeropoulos is about to commence a DPhil in Ancient History at the University of Oxford (St. Anne's College) having graduated from the London School of Economics and Political Science with an MSc in Empires, Colonialism and Globalisation. Notes: [1] John Lynch, 'The Institutional Framework of Spanish America', Journal of Latin American Studies, Vol. 24, S1 (Cambridge: Cambridge University Press, 1992), pp. 69-81, at p. 69. [2] John H. Elliott, Empires of the Atlantic World: Britain and Spain in America, 1492-1830 (New Haven: Yale University Press, 2006), p. 131. [3] Mónica Ricketts, Who Should Rule? Men of Arms, the Republic of Letters, and the Fall of the Spanish Empire (New York: Oxford University Press, 2017), p. 10. [4] Ibid., p. 11. [5] Allan J. Kuethe, 'The Early Reforms of Charles III in the Viceroyalty of New Granada (1759-1776)', in John R. Fisher, Allan J. Kuethe, Anthony McFarlane (eds.), Reform and Insurrection in Bourbon New Granada and Peru (Baton Rouge: Louisiana State University Press, 1990), p. 28. [6] Michel Oudijk, Matthew Restall, 'Mesoamerican Conquistadors in the 16th Century', in Laura Matthew, Michel Oudijk (eds.), Indian Conquistadors: Indigenous Allies in the Conquest of Mesoamerica (Norman: University of Oklahoma Press, 2007), p. 38. [7] Susan Schroeder, 'The Genre of Conquest Studies', in Matthew, Oudijk (eds.), Indian Conquistadors, p. 17. [8] Ibid. [9] Ibid., p. 43. [10] Ibid., p. 56. [11] Ida Altman, 'Conquest, Coercion and Collaboration: Indian Allies and the Campaigns in Nueva Galicia', in Matthew, Oudijk (eds.), Indian Conquistadors, pp. 152-155. [12] Ibid., p. 147.
[13] Oudijk, Restall, 'Mesoamerican Conquistadors in the 16th Century', in Matthew, Oudijk (eds.), Indian Conquistadors, pp. 46-47. [14] Ibid., p. 21. [15] Matthew, 'Whose Conquest? Nahua, Zapoteca, and Mixteca Allies in the Conquest of Central America', in Matthew, Oudijk (eds.), Indian Conquistadors, p. 112. [16] Ibid., p. 114. [17] Silvio Zavala, New Viewpoints on the Spanish Colonization of America (New York: Russell & Russell, 1968), p. 70. [18] Ibid., p. 69. [19] John Lynch, 'The Institutional Framework of Spanish America', p. 71. [20] Bernard Moses, The Establishment of Spanish Rule in America: An Introduction to the History and Politics of Spanish America (New York: Cooper Square Publishers, 1965), p. 69. [21] John H. Elliott, Empires of the Atlantic World, p. 131. [22] Ibid. [23] Ibid., p. 133. [24] Ibid. [25] Moses, The Establishment of Spanish Rule in America, pp. 100-103. [26] John H. Elliott, Empires of the Atlantic World, p. 145. [27] Ibid., p. 175. [28] Silvio Zavala, New Viewpoints, p. 108. [29] John H. Elliott, Empires of the Atlantic World, p. 175. [30] Anthony McFarlane, Alan Knight, Colombia before Independence: Economy, Society, and Politics under Bourbon Rule (Cambridge: Cambridge University Press, 2009), p. 3. [31] Christopher Storrs, The Resilience of the Spanish Monarchy: 1665-1700 (Oxford: Oxford University Press, 2006), pp. 228-229. [32] Ibid., p. 229. [33] Alejandro Cañeque, The King’s Living Image: The Culture and Politics of Viceregal Power in Colonial Mexico (London: Taylor & Francis, 2013), p. 74. [34] Ibid., p. 75. [35] Ibid. [36] Ricketts, Who Should Rule?, p. 10. [37] Ibid., p. 19. [38] Ibid. [39] Ibid., p. 46. [40] Ibid., p. 20. [41] Ibid. [42] John Lynch, 'The Institutional Framework of Spanish America', p. 81. [43] John H. Elliott, Empires of the Atlantic World, pp. 355-57. [44] McFarlane, Knight, Colombia before Independence, p. 4. [45] Kuethe, 'The Early Reforms of Charles III', in Fisher, Kuethe, McFarlane (eds.), Reform and Insurrection, p. 28. [46] Juan M. Fernandez, 'The Social World of the Military in Peru and New Granada', in Fisher, Kuethe, McFarlane (eds.), Reform and Insurrection, p. 57.
- Review: Elizabeth Kiddy's Blacks of the Rosary: Memory and History in Minas Gerais, Brazil
Since the initial establishment of the Church of Our Lady of the Rosary in Brazil as a means of unifying and recreating ethnic identities and communities in the African diaspora, and throughout three centuries of social, political and ecclesiastical transitions, the brotherhoods of Minas Gerais have undergone a turbulent metamorphosis, and while ritualism and tradition are maintained at the core of these organisations, it is their plasticity in the face of ubiquitous change that best characterises their odyssey into the modern age. In her book ‘Blacks of the Rosary: Memory and History in Minas Gerais, Brazil’, Elizabeth Kiddy masterfully chronicles this journey, intermingling archival documentation that provides an insight into the ethnic and hierarchical composition of the brotherhoods with extracts and interviews of prominent members that shed light on their cultural significance in today’s Brazil. Kiddy examines and challenges some of the assumptions and preconceptions assigned to the brotherhoods and delivers a finely nuanced overview of both their temporal and spatial existence. Moreover, Kiddy asserts that the survival of these lay organisations is a testament to their resilience in navigating a continuous interplay of legislative and authoritative obstacles, transforming and rebranding their external relationship with Church and state officials while cultivating their link to a collective memory and ‘cosmology’.[1] Finally, the author identifies the transience of the brotherhoods and the congadeiros through an evaluation of devotion and its manifestation in these communities, skilfully connecting Afro-Brazilians in colonial Minas Gerais to their urban descendants. To identify a singular argument or conclusion from Kiddy’s work is no easily accomplished feat. Within this book is found an extremely broad spectrum of anthropological and sociological components, divided chronologically and geographically into three parts: the antecedents of the mineiro brotherhoods both in Europe and in Africa, the timeline of the brotherhoods in Minas Gerais from the arrival of the first Europeans and Africans until the end of slavery in 1888, and the ongoing methods of the congadeiros and the brotherhoods in the twentieth century. These contribute to an incredibly rounded and thorough analysis of the history not only of the brotherhoods of Minas Gerais, but equally of the origins of the Rosary, the formation of European lay organisations and even the trajectory of the early colonial period. However, the ideas introduced at the opening of this review, namely the nuanced position of the brotherhoods within previously simplified ideas of resistance and accommodation, the survival of these organisations, and the central theme of devotion, will be considered the main theses embodied in the text. Kiddy examines all these factors and bases her theses on three pillars of research: archival documents from Minas Gerais that highlight the changes in demography and infrastructure, anecdotal and qualitative examples from current devotees of the brotherhoods, and two case studies of modern brotherhoods in Oliveira and Jatobá.
To elaborate on the first of these central focal points, the author identifies some of the misconceptions or contradictions often found in more ‘traditional’ scholarship, namely the long-held belief that the brotherhoods were ‘slave organisations’ despite most of their membership being free men and women, brought together by affective ties rather than shared legal status.[2] Kiddy also highlights the limitations of the resistance/accommodation model that is often attributed to these communities, exposing some of the ways in which these organisations were, and continue to be, synonymous with both assimilation and defiance. Examples range from the flat-out refusal to pay certain taxes to the state and the expression of defiant autonomy in the writing of their compromissos during the late colonial period, to the resistance to Pombaline and later Ultramontane reforms that attempted to centralise ecclesiastical control, and finally to the creation of the Associação dos Congadeiros de Minas Gerais in the twentieth century, which gave the congadeiros a somewhat representative political platform. [3] [4] In doing so, Kiddy exposes the many avenues that exemplify the obstreperousness of these communities and the shortcomings of the resistance/accommodation model as an insufficient, absolutist perspective. Another common generalisation, found not only in some scholarship but also in wider collective memory and occidental thought, is that the transatlantic slave trade utterly and unmitigatedly erased any trace of African culture remaining in its victims; while the barbarity of the middle passage did diminish familial and geographic ties to African culture, it would be irresponsible to assume that its victims retained no collective link to their ancestral homeland or culture. Kiddy defends this retention of collective memory well, challenging the eurocentric lens through which social categorisation is often viewed, and champions the brotherhoods as a vehicle for the reorganisation of disparate groups and the laying of foundations for new cultural and ethnic identities built on the commonalities of ancestry, regardless of how contradictory it may appear to western historical discourse.[5] Kiddy’s identification and contradiction of these preconceived notions is a consistent and intrinsic element of her research, developed throughout the text. Additionally, an element that underpins the whole book is the pervasive question of how these organisations have managed to survive through so much adversity and change, a question that Kiddy answers definitively throughout. This survival is presented through an examination of the challenges faced by the brotherhoods and their responses to said challenges. These obstacles ranged from the economic, such as the general decline in the economy of the captaincy that affected membership numbers in the latter half of the eighteenth century, to the legislative, for example reforms that favoured centralisation in either the state or the Vatican, to the social and political, as seen in the nineteenth and twentieth centuries, when a departure from these types of organisations in favour of secularisation and republicanism became evident.
Kiddy posits that the successes of the brotherhoods lie in various factors: the aforementioned manifestations of resistance carried out by the brotherhoods over time, individual parishes’ efforts to secure a future, and a structural and demographic diversity that has syncretically intertwined the brotherhoods with the cultural values of Brazil. Overall, it is the heterogeneity and flexibility found in the organisations (that is not to say there were no limitations on this, and hierarchies were often established along lines of colour in their membership) that have allowed them to endure governmental upheaval, religious reformation and social transition. Throughout the text, Kiddy draws on the integral theme of devotion to focalise the arguments surrounding the brotherhoods’ longevity and socio-cultural importance. The significance of this devotion is a concept that is specifically and consistently evidenced throughout the book. Kiddy charts the development of manifestations of said devotion on several occasions, contextualising each in wider frameworks of religious expression, both African and European. Kiddy demonstrates how devotion to Our Lady and the annual festival were the ‘two pillars on which the concept of being black in the brotherhoods was erected’ and highlights the significance of each of these declarations of faith.[6] The first of these pillars, then, is exemplified in the adoption of Our Lady by Afro-Brazilian communities as a patron saint of the black population. From the legend of Our Lady appearing on the water and only responding to the entire spectrum of black peoples together (a story that is told various times throughout), to the belief that true rosary beads can only be made from the plant As lágrimas da nossa senhora and the story of its origins, Kiddy demonstrates the ways in which this European figure has been instrumental in reconstructing ethnic communities.[7] Furthermore, the exposition of the congado as an intrinsic component of these brotherhoods’ structure underlies much of Kiddy’s work, particularly in the third section, in chapter seven, ‘Voices of the Congadeiros’, wherein first-hand accounts of prominent members and other congadeiros are recounted.[8] Many of these accounts hinge upon the significance of the festival and the great supplications of health, prosperity and peace granted to the most devoted members.[9] Finally, these stories exhibit the sustained ritualism of the communities and a belief in ‘magic’, evident in the reverence shown to the staff of the congado royalty, which alludes to the overlap of the brotherhoods’ connections to both African and European religious tradition.[10] Overall, Kiddy ties together these two concepts and posits that they are the instruments and cultural materials with which these communities have sought to ensure their survival, maintain their devotions and their link to their ancestors, and foster a pride in their African roots. In conclusion, in producing this text, Kiddy has cogently delivered not only an incredible chronology of a multifaceted and diverse community, but has equally represented the brotherhoods’ ambitions, motivations and self-perception over the course of three centuries, in what is a thought-provoking challenge to some of the misconceptions of this branch of society and the wider African diaspora.
Through her study of these transient and fluid organisations, Kiddy has demonstrated the power of the members of these brotherhoods, while exposing the limitations of eurocentric interpretations of religious and cultural separation, and proposing that religion, or rather cosmology, for these people was not only an aspect of their life and shared culture, but a continuous connection to shared social memory, ancestry and ethnic identity: a connection that has sustained their self-preservation despite the countless challenges they have faced. Ross Hardy graduated with a BA in Hispanic Studies from the University of Nottingham in 2021 (this review was written during his studies). Notes: [1] Elizabeth Kiddy, Blacks of the Rosary: Memory and History in Minas Gerais, Brazil (Pennsylvania: Penn State University Press, 2009), p. 40. [2] p. 4. [3] p. 180. [4] p. 112. [5] p. 41. [6] p. 10. [7] p. 212. [8] p. 208. [9] p. 226. [10] p. 231.
- Charles of Anjou: a success or failure?
Charles of Anjou is a controversial figure in Medieval history. Remembered for the Sicilian Vespers in 1282 and the subsequent loss of Sicily, it has been easy for his life to be portrayed as a complete failure. The truth, however, is more nuanced than this. By considering his rule from both a papal and a dynastic perspective, it is clear he had many successes, and the Sicilian Vespers, arguably his biggest failure, was largely the result of his inheriting centuries of tough government rather than his fault alone. Although the loss of Sicily meant he appeared to have failed in his endeavours, looking more widely it is clear his reign allowed for a successful papacy and the solidification of his Capetian dynasty. A large number of Charles’ failures were inherited rather than created. His hamartia was his prioritising of ambition over approval from his subjects. It is this that has led to his reputation as a failed king. To judge the extent of his success or failure, it must be established how these concepts are quantified and the extent to which he achieved each perspective’s goals. From a papal perspective, his goal was to remove the Staufen dynasty from the Regno and reassert the Papacy in the region. From a dynastic perspective, the aim was to expand power and secure strength for the future. Considering the works of the likes of Dunbabin and Abulafia, together with contemporary chronicles and letters, it is clear Charles managed to achieve many of his goals and it is, therefore, inaccurate to describe him as a complete failure.[1] Within the established parameters, Charles of Anjou was more of a success from a dynastic perspective than from a papal one. It must be remembered, however, that it is largely in retrospect that this is obvious, as many of his successes only became clear later. To measure the extent of Charles’ success from a papal perspective, the outcomes must be considered against the objectives. It is widely agreed across the historiography that, as Welsh argued, Charles was selected during the 1250s by the Papacy as ‘champion to expel the Staufens from the Regno di Sicilia’.[2] Dunbabin expands on this, arguing the Papacy wanted Charles to overthrow Manfred in order to re-establish papal overlordship, which had been their goal since the Second Council of Lyons.[3] Contemporary writings substantiate these claims. The chronicler Matthew Paris wrote that Manfred had been excommunicated ‘as an invader of the kingdom and favourer of the Saracens’ and that Pope Urban IV later ‘gave the kingdom of Sicily to the French king’s brother Charles [of Anjou], but on the condition that he should drive Manfred from the kingdom’.[4] Pope Urban IV, writing to Louis IX, said Charles’ campaign was ‘the means of which we hope, with the favour of the Lord, to liberate the church from her enemies who surround her’.[5] While both Paris, with his known intimacy with both the French and English courts, and the Pope himself clearly had a vested interest in portraying Charles’ actions as pious and noble, that the papacy’s intention with Charles was to remove Manfred and re-establish proper religious practice is clear. This considered, the extent to which Charles can be described as a failure is limited. Following the Battle of Benevento in 1266 and the death of Manfred, Charles became the King of Sicily and established what Dunbabin referred to as ‘the Angevin kingdom, which lasted (though in a truncated form) till 1452’.[6] In doing so, he achieved his primary objective: to remove the Staufen dynasty from the region. 
The Staufens never returned. When there was an uprising led by the Staufen claimant Conradin, Charles managed to put it down successfully, winning the Battle of Tagliacozzo and having the heir beheaded in October 1268. Runciman argued harsh actions like this were ‘intolerable to the easy-going Italians’ as the boy was fifteen years old. However, the execution actually speaks to Charles’ commitment to his mission of Staufen removal and the maintenance of power.[7] Lower pointed to the event’s importance, saying ‘Conradin’s death ended the Staufen push from the north’.[8] In this sense, despite some discontentment in the region, Charles was technically a success. When Sicily was eventually lost during the Sicilian Vespers in 1282, it was the Aragonese who took power, not the Staufen dynasty. From the Papal perspective, he had achieved his objective: to remove Manfred. It would be wrong, however, to assert he was a complete success. Though Abulafia acknowledges Charles saw himself as ‘God’s agent, sent to scourge the unfaithful’, he argues he also was ‘driven by an acute ambition for power’.[9] This meant he was a ruthless leader, contributing to the discontentment resulting in the Vespers. A Genoese poet wrote Charles was ‘greedy even when he was not a count and became doubly so as a king’.[10] Similarly, Pedro III’s chronicler noted the people ‘were greatly angered by the rules of Charles and borne upon heavily by him’.[11] While it must be acknowledged that Pedro had married Manfred’s daughter, and so had a claim to the Sicilian throne and perhaps exaggerated the intensity of Charles’ cruelty, these writings together speak to his failure to maintain peace in Sicily, as was the papacy’s intention for him. His ambition led to him striving for higher office, despite the fact ‘the papacy had brought him into Italy as an anti-imperialist’.[12] Charles went against the original agreement with the Papacy to introduce a fairer system in the region. Dunbabin emphasises the significance of the costs of his ambition, arguing, ‘that Charles’ fiscal policies caused much misery cannot be in doubt’ and ‘if Charles had not been tempted into costly campaigns in 1280 and 1282, the history books might have been very different’.[13] The Subventio generalis, for example, had been transformed into a resented direct tax under Frederick II and was maintained under Charles to fund his expeditions. Runciman suggested ‘Charles’ rule was able and efficient. It provided justice and some prosperity. But it was never popular’.[14] His organised administration made his taxation particularly harsh on the population and, as a result, resentment grew, despite the elements of continuity. If he had not been so ambitious, and thus in need of funds, he would have been able to keep the original agreement with the papacy and perhaps have better achieved his objective in the region. All this considered, from a papal perspective, Charles of Anjou achieved his primary objective: to remove the Staufen dynasty and Manfred from the Regno. However, the eventual loss of Sicily constitutes a failure which cannot be denied. This does not reduce the entire exploit to a failure, though. He still managed to re-establish papal strength even with the personal loss, as was the priority. The analysis of Charles of Anjou’s life from a dynastic perspective illuminates his success. Born into the Capetian Dynasty and brother of King Louis IX, a main objective of Charles would have been to expand their influence and secure both their religious and political strength. 
In many ways dynastic and personal aims and perspectives were somewhat interchangeable for Charles and his brothers. This can be seen through the brothers’ adoption of titles like ‘frère du roi de France’ (brother of the King of France) and ‘fils de roi de France’ (sons of the King of France), which Le Goff argued was ‘one of the most important signs of the simultaneous reinforcement of the ideas of the dynasty and the “nation”.’[15] That Charles, and his siblings, cared immensely about supporting one another and maintaining their dynasty’s historic power is clear, contrary to Runciman’s assertion that he was only a man of honour according to ‘narrow and selfish lights’.[16] Charles’ actions evidence a clever tactician who acted to expand the dynasty. In 1250 Louis wrote in a letter intended for circulation in France: ‘we have decided to send back to France our very dear brothers, the counts of Poitiers and Anjou, that they may comfort our very dear mother and the whole kingdom’.[17] Despite the clear propaganda purposes of the letter as a whole, Charles’ mention in it shows trust was put in him, by Louis, to handle the unrest in France and quell questions about what the failure of the Crusade implied about divine favour. This speaks to his capabilities to handle dynastic issues. Here, Charles was able to successfully protect his familial power. Other incidents display similar success. For example, he displayed an ability to navigate political tensions and maintain power through his occasional leniency towards Muslims. In August 1258, Charles had the Christian rebels killed but spared Muslim soldiers, knowing this would serve him in later crusades.[18] He did this, despite knowing he could be criticised, as he knew this would better support his dynasty’s influence. Baldwin proposed a convincing argument relating to Charles’ exploits around the Mediterranean and the Holy Land, disputing the argument that Pope Gregory X was a major influence and instead suggesting Charles had greater agency.[19] Charles was able to establish extensive influence in the region himself, without assistance from the Papacy. Expanding into this region widened the power of the Capetians. His ability to control the Holy Land through trading, such as when he withheld supplies in 1274-5, causing starvation, undoubtedly marks a success considering his dynastic objectives to expand power.[20] Charles successfully created the ideology of the dynasty that shaped future generations. 
Abulafia argued he stands out as ‘one of a small group of figures who helped remodel monarchy in the late thirteenth century’.[21] Dunbabin, similarly, suggests ‘there is at least one aspect of thinking about later medieval French kingship which was consciously formed by Charles of Anjou, and which came to have a profound effect on later generations: that of the beata stirps (saintly lineage)’.[22] When organising the marriages of his son to Maria of Hungary and his daughter to Ladislaus IV of Hungary, as part of what Abulafia called his mission to spread the royal seed, Charles adapted the Hungarian idea of a dynasty of saintly kings.[23] He referred to King Stephen as ‘a valiant, strong prince, descended from a line of saints and great kings’.[24] Charles was, obviously, trying to flatter the King, given the context of arranging a marriage, but this language marked the beginning of his becoming ‘the spokesman of a distinctive theory about their [the Capetians] status and obligations which went on to have an impact on future generations’, as Dunbabin argued.[25] A clear example of this was Charles’ instrumental role in the canonization of his brother Louis. Despite his dying before it finally came to fruition in 1297, Charles’ contribution to the cause was central. He staged a procession following Louis’ death to publicly assert his holiness and in 1282, at the inquiry into his brother’s canonization, Charles said ‘the holy root produced saintly branches, not just the saint king but also the count of Artois who was a glorious martyr and the count of Poitiers, a martyr by intention’, expanding the tree of brothers.[26] Charles used this and other writings to the Popes of his lifetime to emphasise the greatness and holiness of the Capetian dynasty. Louis’ canonization increased the family’s prestige and gave future generations privilege by virtue of their ability to emphasise their link to Saint Louis in their titles. Here, Charles was a success, partially in retrospect, as he secured the status of his dynasty for future generations. To conclude, some key questions must be addressed, including from which perspective Charles of Anjou can be seen as most successful. Despite Runciman’s harsh judgement ‘he failed as a man’, it is clear he was not a complete failure.[27] While it is true he made mistakes, some serious, like his handling of the Sicilian Vespers in 1282, there was an overarching element of success to his career. From a papal perspective, Charles’ objective, in simple terms, was to remove Manfred and the Staufen dynasty from the Regno. He achieved this, and, while the eventual loss of Sicily presents some level of failure, the Staufens never regained total control of the region. Though many of the causes of discontentment under his rule were inherited from previous leaders, it was a failure of his not to adapt to the conditions, prioritising his ambition over maintaining peace in the Regno, as the Church had hoped he would. From a dynastic perspective, there was greater success. The aims here were to expand Capetian power, support his family and secure their futures. There is little doubt he achieved this. It must be acknowledged that, as a family, there were some failed exploits, such as the Tunis Crusade in 1270. Despite this, Charles managed to expand his dynasty’s power by acting as a diplomat, securing tactical marriages and developing extensive influence in the Mediterranean. 
On top of this, he created an ideology that shaped future generations, and his efforts were crucial to securing the holy prestige of his family. These positive achievements, however, can largely be seen in retrospect, and perhaps Charles himself would not have realised the extent of his success in this domain during his lifetime. This considered, it is clear Charles was more successful from a dynastic perspective than a papal one. This is not to say he was a complete success, but it is reductive to refer to him as a failure. Daisy Gant has just completed her 3rd year of a BA in History at University College London (with a year abroad at the University of Pennsylvania). Notes: [1] J. Dunbabin, Charles I of Anjou: Power, Kingship, and State-Making in Thirteenth Century Europe (London: Routledge, 1998); D. Abulafia, The Western Mediterranean Kingdoms 1200-1500: The Struggle for Dominion (Essex: Pearson Education Limited, 1997) [2] W. E. Welsh, 'Papal Strongman: Charles of Anjou', Medieval Warfare, Vol. 6, No. 2 (2016), p. 20. [3] J. Dunbabin, The French in the Kingdom of Sicily, 1266-1305 (Cambridge: Cambridge University Press, 2011), p. 7. [4] 'Matthew Paris on the Popes and Staufer Italy 1245-1269' in J. Bird, E. Peters and J. M. Powell (eds.), Crusade and Christendom: Annotated Documents in Translation from Innocent III to the Fall of Acre, 1187-1291 (Philadelphia: University of Pennsylvania Press, 2013), p. 409. [5] 'Urban IV to Louis IX Against Manfred, Ecce fili carissime, 1264' in Bird, et al., Crusade and Christendom, p. 412. [6] Dunbabin, The French in the Kingdom of Sicily, p. 8. [7] S. Runciman, The Sicilian Vespers: A History of the Mediterranean World in the Later Thirteenth Century (Cambridge: Cambridge University Press, 1958), p. 125. [8] M. Lower, The Tunis Crusade of 1270: A Mediterranean History (Oxford: Oxford University Press, 2018), p. 68. [9] Abulafia, The Western Mediterranean Kingdoms, p. 57. [10] As quoted in Ibid., p. 57. [11] The Chronicle of Pedro III of Aragon in Bird, et al., Crusade and Christendom, p. 425. [12] Lower, The Tunis Crusade of 1270, p. 59. [13] Dunbabin, Charles I of Anjou, pp. 64-70. [14] Runciman, The Sicilian Vespers, p. 130. [15] Quoted in J. Le Goff, Saint Louis (Indiana: University of Notre Dame, 2009), p. 587. It is also discussed here how Louis himself made use of the titles, showing it was not just Charles who felt this bond. [16] Runciman, The Sicilian Vespers, p. 256. [17] 'Louis IX Writes to France Explaining the Failure of His Crusade, 1250' in Bird, et al., Crusade and Christendom, p. 373. [18] Lower, The Tunis Crusade of 1270, pp. 68-70. [19] 'A Problem of Governance?: Pope Gregory X, Charles of Anjou, and the Latin Kingdom of Jerusalem' in P. B. Baldwin, Pope Gregory X and the Crusades (Woodbridge: Boydell Press, 2014), pp. 104-136. [20] Ibid., p. 107. [21] D. Abulafia, 'Charles of Anjou Reassessed', Journal of Medieval History, Vol. 26, No. 1 (2000), pp. 93-114. [22] Dunbabin, The French in the Kingdom of Sicily, p. 189. [23] Abulafia, 'Charles of Anjou Reassessed', p. 110. [24] As quoted in Dunbabin, The French in the Kingdom of Sicily, p. 190. [25] Ibid., p. 191. [26] As quoted in Le Goff, Saint Louis, p. 587. [27] Runciman, The Sicilian Vespers, p. 255.
- Historical scholarship: a linguistic artefact and product of the creative imagination of its author?
In postmodernist readings, ‘linguistic artefact’ refers to the platitudes and metaphors employed by an author, which elevate their narrative voice over scientific empirical evidence.[1] Postmodernism is an ambiguous philosophy which began in the 1940s within art theory, but reached its ascendancy in the 1960s across other disciplines; Christopher Butler alludes to postmodernism as ‘a loosely constituted and quarrelsome political party’.[2] Postmodernists are sceptical of ‘Grand’ narratives (or metanarratives) in Western culture, epitomised by modernist thought, and specifically challenge the Enlightenment (1715-1789).[3] This essay separates linguistics from ‘linguistic artefacts’ and ‘creative imagination’, concluding that historical scholarship is entirely the product of the creative imagination of its author. ‘Linguistic artefact’ is an unfocused term, but this essay considers it to mean scholarship which is distinguished within its field. Theories and debates raised by Michel Foucault, Hayden White and Fredric Jameson will be evaluated to assert this claim. Ultimately, historical scholarship is based upon the subjectivity of language and Foucault’s power-knowledge theory. Therefore, historians cannot be objective in their historical accounts, as ‘story-telling’ narratives inevitably permeate their scholarship. Postmodernists elevate language as the ‘fundamental phenomena of existence’; this began with Nietzsche’s arguments on truth, language and society in the nineteenth century.[4] Nietzsche’s theory is corroborated by Max Weber, Sigmund Freud, Jacques Derrida and Michel Foucault. His premise devalues historicity as a scientific method, but implies that historical scholarship, and other written narratives, are linguistic artefacts, products of the creative imagination of their authors. This is only convincing to a limited extent because the postmodernist assumption - that truth is constructed by metaphors, metonyms and anthropomorphisms - is evidence of linguistics but not of linguistic artefacts.[5] Postmodernist critiques of science are based on subjectivity, explored through epistemological and ideological arguments. Their theory determines that historical scholarship must be a linguistic artefact as it cannot be a science. Conversely, although historical scholarship requires the creative imagination of its author (in terms of language, subject and methodological choice), not all works can be elevated to the status of ‘linguistic artefact’. This reasoning requires a separation of linguistics and linguistic artefact, as scholarship is entirely based upon linguistics; this essay argues that only seminal or pivotal works can be considered linguistic artefacts. This argument devalues postmodernist theory as it contends certain theories or works are more valuable than others; it subverts postmodernist assertions on ‘truth’. They contend that an illusionary understanding of ‘truth’ becomes canonical, as authority assigned to certain narratives creates false perceptions of ‘truth’. Foucault’s power-knowledge theory asserts that authority grants power to people, making their words appear true, and in turn, they receive more power.[6] These contentions are convincing but still cannot provide a basis for the application of ‘linguistic artefacts’ to all historical scholarship. Conjoining linguistics and history is inescapable; historians write to convince readers of an argument, which requires linguistic prowess and rhetorical techniques. 
This is not an example of philology, but of a practice of history which uses persuasive devices to establish debates. Epistemic virtues are unavoidable, which means that historical analysis cannot escape the historian’s ‘context’.[7] The anthropologist Clifford Geertz (1926-2006) disseminated postmodernist notions of linguistic authority in the 1980s: authorship, style, narratives, metaphors and fiction.[8] By nature, historical scholarship is written to persuade. To Geertz, this makes it entirely a linguistic artefact as it relies on creative imagination as opposed to objective truth. This includes bold proclamations, metaphors and oversexed language; it is designed to draw in readers. Therefore, for postmodernists to read historical scholarship as a linguistic artefact does make sense, but it is an oversimplification of the historian’s work, which negates consideration of history as a practice of ‘reading, thinking, discussing and writing’.[9] Postmodernist thought is centred upon morals regarding objective truth; however, historicity does not aim to fictionalise empirical evidence with narratives. Therefore, redefining ‘linguistic artefacts’ according to the same definition as cultural artefacts is more appropriate in determining what historical scholarship is and subverts postmodernist narratives on ‘morals’. Postmodernist criticism of ethnography and its subjectivity is applied to historical scholarship, which poses moral problems in written knowledge.[10] Postmodernists consider the need for historical interpretation and self-distanciation, to ensure the author remains virtuous in an attempt to achieve objective truth.[11] Lyotard (1979) postulates that the ‘advent of postmodernism’ is determined by its shift from truth to fictitious narratives.[12] These assertions are convincing, as written history does inform more on its author than on the period in question, since scholarship is created through linguistics and creative imagination; this gives rise to debates on historical authenticity. Roy D’Andrade’s ‘Moral Models in Anthropology’ critiques postmodernist interpretations of objectivity and subjectivity, as he believes the moral codes they base their arguments upon are themselves subjective. He insists that the goal of a scholar is to remain as objective as possible, which creates a distinction between moral and objective models.[13] D’Andrade’s critique of postmodernism is conclusive, as historicity aims to be impartial and authentic in its production. The creativity of authors is in their application of methodology, theories and narratives, not an aim to spread false narratives. Christopher Norris furthers this contention, claiming that Lyotard, Foucault and Baudrillard are ‘too preoccupied [by] moral judgements’.[14] Postmodern theory holds that historical writing ‘is always autobiographical’.[15] Investigation and exploration of the author means their historical scholarship is entirely the product of semiotics and linguistics through a postmodernist reading. Their creative imagination is subjective as, when dealing with historical facts, they construct narratives to ‘fill the gap’ in knowledge; their investigation is shaped by their language, narratives and arguments. Reed defines the ‘context of investigation’ as the author’s social and intellectual context; their investigation is shaped by their experiences, social identity, opinions and memories.[16] Therefore, despite the historian’s aim to remain objective, it is impossible for them to discount their social and intellectual context. 
However, it does not automatically grant all scholarship the nebulous title of a ‘linguistic artefact’. When defining linguistic artefacts according to cultural artefacts, it is understood that only works of importance or common knowledge can be linguistic artefacts. Considering ‘linguistic artefacts’ as significant, seminal and pivotal works disrupts postmodernist thought. Sociologist Jean Baudrillard asserts that ‘we have now moved into an epoch […] where truth is entirely a product of consensus values’.[17] This relates to historical scholarship as pivotal works are examples of sign value. The status of linguistic artefact can only be awarded to authors who have created a disciplinary identity. Within written history, key examples are: Livy’s entire account of Roman history; Marx’s Communist Manifesto; E. P. Thompson’s coinage of ‘history from below’ and Leopold von Ranke’s founding of source-based history. It is irrefutable that eloquent prose presents clearer, more persuasive arguments, and Machiavelli notes the importance of rhetoric, yet few historians contribute to linguistic artefacts, as ‘artefacts’ are by nature objects of cultural significance. This is not to diminish or devalue the work of historians, as seminal and progressive works, such as Veblen’s The Theory of the Leisure Class, have not reached this status. However, all contributions to history have offered new linguistic ways to present quantifiable information. It is pioneering authors who have created linguistic artefacts, as they stand apart from a saturated field of countless scholarship. The creative imagination of authors can be understood in a similar way. Innovative expressions in historical scholarship demonstrate creative imagination, but the term can also be applied to other contributors, who work within the varieties of history. This consists of oral history and public history. Although written history can be evidence of both linguistic artefact and the creative imagination of an author, authors can have creative imagination isolated from linguistic artefacts. Fredric Jameson discusses historicity as a concept which considers history ‘a perception of the present’.[18] This valuable contribution to postmodernity reinforces notions of a pluralistic society with multiple truths which are difficult to distinguish from reality. Jameson explored historicity in relation to the consumption and production of literary texts. This synergism parallels the use of narrative and linguistics in historical scholarship. When postmodernity is applied to historical scholarship, it is confronted with the issue of language and creative imagination. Postmodernism discerns that there is no real truth. History, in turn, depends upon the creative imagination of authors to impose structure and coherence upon the past. As a result, postmodernists accept that history and culture can only be understood through the access points language allows. This implies that historical scholarship is entirely the product of the creative imagination of its author. The significance assigned to linguistics is a form of sign value and, as postmodernists insist that grand narratives have no meaning, language itself is maintained as a form of historical sign value. However, contrary to postmodernist thought, this does not devalue historical scholarship, as it is designed to be persuasive, which makes rhetorical methods a valuable tool to historians. 
To the postmodernist, historical scholarship cannot be empirical, as attempts to create new knowledge are informed by ‘story-telling’ narratives and the investigator’s context. Rosenau’s Deconstruction Analysis insists that authors should write to minimise interpretations.[19] They should also employ new and unusual terminology to avoid familiar and interpretative language.[20] Using this postmodernist argument, it is decisively apparent that the coinage of terms creates linguistic artefacts, according to this essay’s definition. Rosenau’s argument evidences contradictions within postmodernism: she assesses postmodernism as a theory when it claims to reject all theory, and postmodernists reject the idea of truth whilst claiming their own ‘non-theory’ is true. The ‘postmodern turn’ (coined by Reed) questioned whether authors could integrate the context of investigation into the context of explanation to create ‘true social knowledge’.[21] This thought was prevalent in cultural and linguistic anthropology and is the primary contention of this essay. Applying this thought emboldens the assertion that not all historical scholarship can be considered a ‘linguistic artefact’. Corroborated by the power-knowledge theory, linguistic artefacts can only be works which are authoritative in their field or form part of popular knowledge. Ultimately, postmodernists do not convincingly dismiss the scientific method, as there are many contradictions in their beliefs. They critique language as a form of communication; however, they determine this to be a moral problem. Historical scholarship is removed from this moral dilemma as the aim of the author is to remain as objective as their language choices and context allow. Knowledge being objective depends on empirical support, and historical scholarship achieves this.[22] Postmodernists advocate polyvocality, which maintains that there are multiple accepted truths, from different perspectives. This contention would consider all historical scholarship to be a linguistic artefact, but it is unconvincing as not all scholarship is pivotal. Postmodernists reject impositions of hegemonic values constructed during the Enlightenment. However, these theories are needed for advancements in representation, such as gender and women’s history, and history from below. Postmodernists believe that they are defending marginalised voices and cultures; ironically, it is pivotal works and the construction of grand narratives which achieve their aim.[23] Overall, all historical scholarship is entirely the product of the creative imagination of its author through semiotics, but only crucial works can be linguistic artefacts. Abigail Blackwell is currently undertaking an MA in Public History at Newcastle University. Full title when assigned: To what extent is a work of historical scholarship a linguistic artefact and product of the creative imagination of its author? Notes: [1] Barbara Czarniawska, ‘Linguistic Artefacts at Service of Organizational Control: Views of the Corporate Landscape’, Symbols and Artefacts, p. 347. [2] Christopher Butler, Postmodernism: A Very Short Introduction (Oxford, 2003), p. 2. [3] Roy Boyne, ‘The Theory and Politics in Postmodernism: Introduction’, Postmodernism and Society, p. 39. [4] Lawrence Kuznar, Reclaiming a Scientific Anthropology (Lanham, 2008), p. 78. [5] Friedrich Nietzsche, On Truth and Lie in an Extra-Moral Sense (New York, 1954), pp. 46-47. 
[6] Barbara Townley, ‘Foucault, Power/Knowledge, and Its Relevance for Human Resource Management’, The Academy of Management Review, Vol. 18, No. 3 (1993), p. 529. [7] Ibid. [8] Clifford Geertz, ‘The Anthropological Life in Interesting Times’, Annual Review of Anthropology, p. 11. [9] Herman Paul, ‘Performing History: How Historical Scholarship is Shaped by Epistemic Virtues’, History and Theory, Vol. 50, No. 1 (2011), p. 1. [10] Ibid. [11] Herman Paul, ‘Distance and Self-distanciation: Intellectual Virtue and Historical Method around 1900’, History and Theory, Vol. 50, No. 4 (2011), p. 104. [12] Jean-François Lyotard, The Postmodern Condition: A Report on Knowledge (Manchester, 1984), p. 89. [13] Roy D’Andrade, ‘Moral Models in Anthropology', Current Anthropology, Vol. 36, No. 3 (1995), p. 402. [14] Chris Norris, What’s Wrong with Postmodernism? (London, 1990), p. 50. [15] Dick Geary, ‘Labour History, the “Linguistic Turn” and Postmodernism’, Contemporary European History 9, No. 3 (2000), p. 445. [16] Isaac Reed, ‘Epistemology Contextualized: Social-Scientific Knowledge in a Postpositivist Era’, Sociological Theory 28, No. 1 (2010), p. 28. [17] Norris, What’s Wrong with Postmodernism?, p. 169. [18] Fredric Jameson, Postmodernism, or, The Cultural Logic of Late Capitalism, Chapter 9: ‘Nostalgia and Film’ (Duke, 1991), p. 284. [19] Gérard Lenclud, ‘P. M. Rosenau, Post-Modernism and the Social Sciences. Insights, Inroads, and Intrusions’, Homme, Vol. 33, No. 125 (1993), p. 161. [20] Ibid., p. 142. [21] Reed, ‘Epistemology’, p. 25. [22] Ibid. [23] Pamela Jane Smith, ‘The Archaeology of Bruce Trigger: Theoretical Empiricism, McGill-Queen’s UP’, Cambridge Archaeological Journal, Vol. 18, No. 2 (2008), p. 446.
- Policy or Place: which determined the genocide of European Jews?
The genocide of the European Jews is an issue debated in much historiography – through opposing schools of thought arguing for an inevitable or gradual view of the systemic mass murder. In the process of Nazi rule, hundreds of thousands of people were shoved around like so many pieces on a chessboard in pursuit of the Nazi vision of a racially reorganized eastern Europe[1], and it was the culmination of years of discrimination, hatred, deportation, and murder which led to the death of six million European Jews. The problem remains, however, of whether the policy followed by the Nazi government, or place – where the European Jews lived and how the subsequent murder thus took place – played a more significant role in determining the genocide. Whilst, as Mommsen argues, the eventual step towards mass destruction occurred at the end of a complex political process[2] and thus the two factors are undeniably linked, I believe that the continued campaign of hate pursued by Hitler and his Nazi party was translated into a policy of murder and genocide across Europe which could not have occurred without the consistency of the policy created by Nazi infighting, anti-Semitism and determination to clear Germany and its lebensraum of ‘the Jewish problem’. The genocide of the European Jews was preceded by years of systematic persecution through political policy – the effects of which grew in intensity, violence, and social acceptability and reached their conclusion in ‘the Final Solution’ and the orders to exterminate Europe’s entire Jewish population. For Goldhagen, the acknowledgement that what occurred within Nazi Germany was so unimaginable and adverse to Western post-enlightenment values means that historians must accept that there was something fundamentally different about the citizens of Germany – he argues that this is their rampant antisemitism. Virtually no evidence exists to contradict the notion that the intense and ubiquitous public declaration of antisemitism was mirrored in people’s private beliefs[3], and the fact that virtually no antisemitic creeds were challenged in Germany during this period gives further legitimacy to this idea. Therefore, the Nazis based their intentions and policies on an articulated, shared understanding of Jews and their eliminationist, racial anti-Semitism[4], and this systemic anti-Jewish sentiment, existent not just within the party but within the country itself, led to genocide. Even as the violence intensified and turned to murder by ‘ordinary men’ in killing squads such as Police Battalion 101, exemplified in individual historical accounts, ‘we know how surprisingly easy it was for members of the extermination squads to quit their jobs without serious consequences for themselves’,[5] as accounts unveiled in the Nuremberg trials show. A lack of resistance from German citizens, and the willingness to kill of ordinary men in police battalions, local militias, and SS cells despite the opportunity to resume other posts without fearing for their lives, suggests an anti-Semitism existent within Germany that was fuelled by policies of persecution throughout the 1930s and early 1940s. 
The persecution of the Jews within Nazi Germany itself was multifaceted, and Germans witnessed the promulgation of almost two thousand laws and administrative regulations that degraded and immiserated the country’s Jews in a manner and degree which no minority had suffered for hundreds of years.[6] Verbal violence and anti-Jewish propaganda asserted anti-Semitic views and gave the assault on Jewish bodies a perceived legitimacy, whilst also, as Goldhagen further argues, suggesting the dire fate which might await them. The Nuremberg Race Laws of 1935 enforced a social separation which only grew worse, and Jews lost the right to vote. Removing democratic rights and alienating the German Jews in their home country served to ‘sink them into a state of hopelessness and to isolate them from the larger society in which they had moved freely but a few years earlier. They made Jews socially dead’.[7] In November 1938, the Kristallnacht pogrom made clear that the Jews had no place in Germany, and that the Nazis were willing to use extreme violence in realising their aims - as a general ‘cleansing’ of Germany of Jewish synagogues, Kristallnacht was a proto-genocidal assault.[8] After the introduction of the Star of David badge in 1941, Germans had a greater ability to recognize, monitor and shun those bearing the mark. The Nazi campaign of isolationism, hatred and alienation resulted in emigration in droves - of the 525,000 Jews living in Germany in January 1933, almost 130,000 emigrated during the next five years,[9] often forfeiting all land and property for a perceived sense of safety outside of Nazi Germany. Thus, Mommsen argues, ‘if the German citizens share a responsibility - it is to be found in the passive acceptance of the exclusion of the Jewish population, which prepared the way for the Final Solution’.[10] Whilst Nazi policy succeeded in forcing roughly half of the Jewish population from Germany, Nazi high command and the Führer remained unsatisfied with the ‘Jewish problem’. Kristallnacht had been an ‘ominous portent of the future’[11] and violence against Jews was part of everyday German, and soon to be European, life. Despite source material suggesting that genocide was not seen as a viable solution until around 1941, from the perspective of violent policy the genocide of the European Jews appeared almost an inevitability. Beyond a policy of persecution which set the scene for and undeniably played a large role in the genocide of the European Jews, a policy of extreme violence against Jews and other ‘untermenschen’ in Germany also contributed heavily to the genocide of the European Jews by creating a culture of violence in Nazi Germany and occupied Europe where murder and terror became parts of everyday life. For Mommsen, the realization of the Final Solution became psychologically possible because Hitler's phrase concerning the 'destruction of the Jewish race in Europe' was adopted as a direct maxim for action, particularly by Himmler.[12] The language used to describe the violence and eventual murder was coded in an attempt to protect the persecutors from possible psychological harm, thereby ‘neutralising’ and normalising it. 
As was documented at Eichmann’s trial, ‘all correspondence referring to the matter was subject to rigid “language rules,” and, except in the reports from the Einsatzgruppen, it is rare to find documents in which such bald words as “extermination,” “liquidation,” or “killing” occur’.[13] This technique was said to be an ‘enormous help’ in maintaining order and sanity amongst departments, as well as in concealing said violence when the Nazi ‘projects’ received international visitors. As Arendt describes, when Eichmann was sent to show the Theresienstadt ghetto to International Red Cross representatives from Switzerland—he received, together with his orders, his “language rule,” which in this instance consisted of a lie about a non-existent typhus epidemic in the concentration camp of Bergen-Belsen, which the gentlemen also wished to visit.[14] Despite the destruction of the Jews being the idea of Hitler himself, the coded language extended to the Führer when it came to protecting himself from the reality of violence. The research of historian Gerald Fleming emphasises Hitler’s arrangements surrounding the use of camouflage language when discussing genocide - when confronted with the actual consequences of the destruction of the Jews, Hitler reacted in exactly the same way as his subordinates, by attempting not to be aware of the facts or suppressing his knowledge. Only in this way could he give free rein to his anti-Semitic tirade.[15] The clearest example of ‘ordinary’ violence and its gradual descent into genocide is in the actions of the Einsatzgruppen, whose violence eventually provided the link to the factory techniques of murder via Operation Reinhard, as the use of gas vans began as a transitional stage to make the psychological toll on the soldiers participating in mass shootings slightly lighter. And yet the Einsatzgruppen were trained killers; the testimonies of police battalions such as Police Battalion 101, documented by Browning, demonstrate how ‘ordinary men’ became desensitised to violence in Nazi Germany and occupied Europe in such a way that hundreds of working-class Hamburg men, with seemingly no previous feelings of anti-Semitic hate, were faced with the decision of how to ‘tactically avoid the shooting of infants and small children’.[16] Violence thus permeated almost every sphere of Nazi occupation and ‘grass roots’ perpetrators became ‘professional killers’;[17] but the Euthanasia project of 1939 and the thousands of disabled men, women and children killed in its implementation further this. Those who the Nazis marked for slaughter in the Euthanasia program were thought to be far less of a threat to Germany than the Jews. Therefore, Goldhagen argues, ‘unlike the Euthanasia program’s victims, the Jews were considered to be wilfully malignant, powerful, bent upon and perhaps capable of destroying the German people in toto’,[18] and to believe that the Nazis would have carried out this particular programme of systemic murder, and not that of the European Jews, is almost deluded. This is further evidenced by Browning’s research, which suggests that for expertise and assistance in building and operating the extermination center at Belzec, Globocnik was able to draw on personnel from the "euthanasia program" in Germany.[19] A gradual normalisation of violence through a series of policies growing in severity allowed for the descent into unimaginable murder, which led to the genocide of the European Jews. 
A further aspect of policy which contributed largely to genocide was the infighting which existed within Hitler’s inner circle – as military leaders and Reich ministers created policy which grew gradually more disturbing and violent to appease and gain favour with the Führer. Whilst events occurred with Hitler’s approval and encouragement, as Mommsen argues, given the ideological framework and the existence of machinery to trigger off 'spontaneous' anti-Semitic outrages, they were first conceived by the rival satraps around Hitler, who were unscrupulously determined to outdo one another in implementing National Socialist policies, and thus to please the Führer.[20] Therefore, Hitler’s dream became reality because of the ambitions of members of his inner circle, such as Himmler and the SS, to achieve the millennium in Hitler’s lifetime and prove their indispensability to the cause. In Eichmann’s description of events, he is sent to see Heydrich, who gave him his instructions for the ‘liquidation of the Jews’[21] - Heydrich had been working in this direction for years and in 1941 had eliminated rival contenders for control of the ‘Jewish question’. This infighting was inspired by a desire to enhance prestige and extend authority, important motives for many within the party, with Gauleiters competing in rival attempts to declare ‘their’ districts ‘Jew-free’ – competition which ‘played a conspicuous role in the genesis of the Holocaust’.[22] Individual Judenreferates felt the need to justify their existence by introducing cumulative anti-Jewish policies which were defamatory and economically superfluous, and, as an anonymous member of Police Battalion 101 discusses in Browning’s text, battalion leaders were ever anxious to get credit for their company's body count.[23] The policy which led to the genocide of the European Jews was not always ideologically driven – in fact, in some instances it can be argued that the murder would have occurred earlier had it not been for the logistical issues which faced the Nazi German state in the forms of economic and foreign policy, in spite of its preoccupation with anti-Semitic rhetoric and the destruction of Europe’s Jews for the creation of a ‘racially pure’ Volk. Despite increasingly violent persecution towards the Jewish population, especially in Russia and occupied Poland, Goldhagen argues that genocide was still not a ‘viable’ option in 1939-40, for ‘as long as Germany had to reckon with the responses of other powerful countries, genocide was not a practical policy’.[24] Furthermore, despite Hitler’s obvious desire to take bitter revenge on ‘World Jewry’, historians such as Mommsen argue that statements such as Goering's report at the infamous meeting at the Reich Air Ministry on 12 November 1938, in which he quoted Hitler: 'If the German Reich comes into conflict with foreign powers in the foreseeable future, it goes without saying that in Germany too our main concern will be to settle accounts with the Jews',[25] were intended as threats primarily to exert pressure on the Western nations, particularly Britain and the United States, and are therefore connected with the hostage argument, which had surfaced as early as 1923. 
Furthermore, whilst the pace of the genocide of the European Jews can be said to have been affected heavily by an acute awareness of international relations due to the Nazi state’s reluctance for immediate warfare, the pace was encouraged by economic policy, which largely benefited from the removal of Jewish financial rights, and thus the removal of Jews altogether. Whilst there were phases during which the pace of the extermination programme was slowed, to permit the temporary exploitation of the prisoners by means of forced labour,[26] as in the RSHA’s attempts to establish an SS economic empire, the 'spontaneous' and legalized 'Aryanization' measures and the exploitation of the labour of concentration camp prisoners offered advantages to many in industrial and banking circles and among the commercial middle class, which, especially in the early years of the regime, tried hard to intensify the repressive measures against their Jewish competitors.[27] Legal and illegal economic gains at the expense of the destruction of the European Jews were thus a part of daily life in the Third Reich, which seemed only to further persecution and violence by benefiting those partaking in it. Whilst policy played a more significant role in the genocide of the European Jews, even if just by sheer volume, place cannot be removed from the impact of policy, and place itself is still important: the events which occurred in Russia in particular can be said to have had an undeniably large impact on the fate of the European Jews by setting a precedent for murder and violence through the Commissar Order and its direct results. When Heydrich received a letter from Goring commissioning him to prepare a Gesamtlösung of the Jewish question within the area of German influence in Europe,[28] he was able to do this with relative ease as he had been entrusted for years with the task of preparing the ‘Final Solution’, beginning with the war with Russia which had left him in charge of mass killings by the Einsatzgruppen in the East. The attack on the Soviet Union on the 22nd of June 1941 and the early successes of the German armies led to the Commissar Order and the deliberate use of the Einsatzgruppen to liquidate Jewish population groups in the occupied areas, signalling the start of a new phase in violence against Europe’s Jews. Whilst the Einsatzgruppen were careful to avoid giving racial reasons for these murders in their early reports, 1.4 million Russian Jews were murdered[29], and between late July and mid-August, Himmler toured the eastern front, personally urging his men to carry out the mass murder of Russian Jewry.[30] On the eve of Operation Barbarossa, Major Weis of Police Battalion 309 gave orders for his men to proceed ruthlessly against Jews regardless of sex, and in Bialystok, a particularly abominable example of mass violence began, as ‘the next day, thirty wagonloads of corpses were taken to a mass grave. An estimated 2,000 to 2,200 Jews had been killed'[31]. Russia as a place thus had a profound and significant role in the genocide of the European Jews, as the violence that ensued set the scene for the murder which was to come. 
Furthermore, the turning of the tide in the war with Russia and the higher death rate left Himmler with fewer human reserves than anticipated in Auschwitz, by then a massive prisoner and munitions centre utilising Soviet POWs since autumn 1941, and thus the influence of place meant that just a week after the Wannsee Conference, Himmler issued his instruction to 'equip' the SS concentration camps primarily with German Jews.[32] The mass murder of Russian Jews became European - for Birkenau camp, where the technology of gassing had been developed with Soviet prisoners of war as the victims, was now to be part of a comprehensive programme for genocide. In a similar vein, the influence of place on the genocide of the European Jews can be illustrated by the mass deportations and murders which occurred in Poland and set the scene for the horrors of what was to come. The SS was able to take almost complete control of initiatives in the ‘Jewish question’ in the Polish territories with Himmler at the helm, and the fate of the Jews thus became linked with Generalplan Ost.[33] Methods that were later extended to the Old Reich were first tested in the Generalgouvernement, and the deportation programme which was in place encouraged the Gauleiters of the Reich to send their Jews to the Generalgouvernement, where the conditions in the ghettos were appalling. In the Lodz Ghetto, Police Battalion 101 was given a standing order to shoot "without further ado" any Jew who ignored the posted warnings and came too close to the fence. This order was obeyed.[34] The policy of ghettoization in Poland was described in summer 1940 by the Nazi official Greiser as untenable from the 'point of view of nutrition and the control of epidemics',[35] and it was these conditions which led those in the Nazi high command to begin to consider genocide as a serious viable option, as the ‘benevolent’ alternative to the horrors of the ghettos; as SS-Obersturmbannführer Hoppner stated, ‘for it should be seriously considered whether it might not be the most humane solution to dispose of those Jews who are unfit for work by some quick-acting means. At any rate this would be more agreeable than letting them starve'.[36] Therefore, whilst policy played a large role in the eventual genocide, place in terms of Poland, the ghettos and the Polish Jews, whose ethnicity placed them under the command of the ruthless Reich Commissar for the Strengthening of Germandom, Himmler, led to genocide. The resulting orders for murder are evidenced by the testimonies of Browning’s Police Battalion 101, as on June 20th, 1942, the battalion received orders for a "special action" in Poland, and on July 11th, they had the task of rounding up the 1,800 Jews in Jozefow - only the male Jews of working age were to be sent to one of Globocnik's camps in Lublin. 
The women, children, and elderly were simply to be shot on the spot.[37] Between the fall of 1941 and the spring of 1945, over 260 deportation trains took German, Austrian, and Czech Jews directly to the ghettos and death camps "in the east" (i.e., Poland and Russia) or to the transit ghetto of Theresienstadt north of Prague and from there "to the east".[38] No one participating in the events described in this report could have had the slightest doubt about what he was involved in, namely a mass murder program to exterminate the Jews.[39] Karl Streibel, a key member of Globocnik’s Operation Reinhard staff, visited the POW camps and recruited Ukrainian, Latvian, and Lithuanian "volunteers" (Hilfswillige, or Hiwis) who were screened on the basis of their anti-Communist (and hence almost invariably anti-Semitic) sentiments, offered an escape from probable starvation, and promised that they would not be used in combat against the Soviet army.[40] These men constituted a portion of the manpower from which Globocnik would form private armies for his campaign of ghetto-clearing, a further example of place and policy intertwining in the tragic fate of the European Jews. The documents from the Wannsee Conference are usually equated with the immediate launch of the genocide campaign throughout Europe and demonstrate the intent to murder Jews on an unimaginable scale, even in places which, due to Nazi Germany’s eventual defeat, remained untouched by the genocide. The aim was to ‘cleanse German living space of Jews in a legal manner',[41] and thus, whilst historians of sound moral judgement recognise that nothing about the genocide of Europe’s Jews was legal in any moral sense, policy did play the most significant role in the genocide of the European Jews, as even in the most top secret of documents, it was the method the Führer wished to follow. Hitler’s fascism was one he realised could only be achieved through politics, and thus his persecution and eventual murder of the European Jews largely followed a similar path. Whilst the war against Russia, and the ghettos of Poland, played an undeniably large role in the eventual genocide, their importance can be attributed to a continual policy of violence and discrimination in the East, which added to the culmination of the realisation of Hitler’s sadistic dream of ‘the elimination of European Jewry’. The existence of years of systemic persecution through political acts, the persistence of violence in almost all areas of Nazi policy, party infighting and the foreign and economic policy required to effectively run a state demonstrate how policy was detrimental to the livelihoods of European Jews, as it eroded their lives and sense of hope and led eventually to death, and how place was merely a secondary consideration in a dictatorship which had little regard for individual place and instead preferred the ‘racial purity’ of a German ‘Volk’. Katie Heggs has just completed her first year of a BA in History and Politics at the University of Cambridge (Churchill College). Full title when assigned: Which factor played a more significant role in determining the genocide of the European Jews: policy or place? Notes: [1] C. Browning, Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland (1992), p. 39 [2] Mommsen, ‘The Realization of the Unthinkable: The “Final Solution of the Jewish Question” in the Third Reich’, in H. Mommsen, From Weimar to Auschwitz (1991), p. 243 [3] D. Goldhagen, Hitler's Willing Executioners: Ordinary Germans and the Holocaust (1996), p. 30 
[4] Ibid., p. 132 [5] Hannah Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil (2006), p. 50 [6] D. Goldhagen, Hitler's Willing Executioners: Ordinary Germans and the Holocaust (1996), p. 138 [7] Ibid. [8] Ibid., p. 141 [9] Ibid., p. 139 [10] Mommsen, ‘The Realization of the Unthinkable’, p. 249 [11] Goldhagen, Hitler's Willing Executioners, p. 140 [12] Mommsen, ‘The Realization of the Unthinkable', p. 234 [13] Arendt, Eichmann in Jerusalem, p. 48 [14] Ibid. [15] Mommsen, ‘The Realization of the Unthinkable', p. 233 [16] Browning, Ordinary Men, p. 25 [17] Ibid., p. 15 [18] Goldhagen, Hitler's Willing Executioners, p. 143 [19] Browning, Ordinary Men, p. 50 [20] Mommsen, ‘The Realization of the Unthinkable', p. 227 [21] Arendt, Eichmann in Jerusalem, p. 47 [22] Mommsen, ‘The Realization of the Unthinkable', p. 235 [23] Browning, Ordinary Men, p. 16 [24] Goldhagen, Hitler's Willing Executioners, p. 144 [25] Mommsen, ‘The Realization of the Unthinkable', p. 232 [26] Ibid., p. 246 [27] Ibid. [28] Arendt, Eichmann in Jerusalem, p. 47 [29] Mommsen, ‘The Realization of the Unthinkable', p. 244 [30] Browning, Ordinary Men, p. 11 [31] Ibid., p. 30 [32] Ibid., p. 1 [33] Mommsen, ‘The Realization of the Unthinkable', p. 239 [34] Browning, Ordinary Men, p. 41 [35] Mommsen, ‘The Realization of the Unthinkable', p. 242 [36] Ibid. [37] Browning, Ordinary Men, p. 55 [38] Ibid., p. 36. [39] Arendt, Eichmann in Jerusalem, p. 47 [40] Ibid. [41] Wannsee Protocol - January 20, 1942; Translation, https://prorevnews.wordpress.com/2014/06/30/minutes-of-the-wannsee-conference/
- Was emancipation during the Civil War driven more by military necessity than moral conviction?
The standard popular narrative of the American Civil War is that it was a war fought to end slavery, pitting the slaveholding, seceded South, the Confederate States of America, against the free, abolitionist North, the Union, upon the election of the antislavery advocate Abraham Lincoln. With the issuing of the Emancipation Proclamation in 1863, the Civil War supposedly became a defining moment in the struggle for racial equality in the USA, with its achievements enshrined in the subsequent Thirteenth Amendment. Although the war was the result of bitter contention over the future of slavery, as this traditional depiction suggests, it would be too simplistic to argue that the antislavery stance was driven by moral qualms alone. Whilst many in the North did object to the ‘peculiar institution’, their emancipationist efforts only came to fruition when it became clear that including free blacks in the armed forces was necessary for a Union victory. The Emancipation Proclamation was preceded by years of hesitancy on the part of the policy makers: Lincoln’s priority was always the Union, and while ending slavery was important, it was more of a means to victory than an end in itself. Of course, the different actors on the Civil War stage had different reasons for advancing, or not advancing, emancipation, but the legal thrust towards freeing the enslaved came from military need. First, it is evident in many of Lincoln’s speeches and letters that emancipation was not at the top of his agenda: the survival of the Union was. This position is plain to see in his 1862 letter to Horace Greeley, who had complained that Lincoln was conceding too much to the “fossil politicians” of the border states. The president declared: “my paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do that.”[1] Some of the policies he enacted reflect this cautious approach to emancipation and this prioritisation of the Union. For example, Lincoln issued the Proclamation of Amnesty and Reconstruction in December 1863 in an attempt to knit the Union back together. This required a minimum of 10% of the population in a rebelling state to swear an oath to the Constitution and to all of the laws and proclamations passed by the Union since the Civil War began. The state would then be allowed to re-enter the Union and call a constitutional convention to abolish slavery within its borders; rebels would then be pardoned. This was a very conservative measure, in both the low threshold required and the fact that it did not demand an immediate end to slavery. Lincoln hoped that conservatism would keep the Democrats in the fold: his pragmatism and caution meant that a principled drive towards immediate emancipation was sacrificed.[2] Similarly, he forced the Secretary of War, Simon Cameron, to remove a passage about the freeing and arming of slaves from the 1861 annual report, and revoked the orders of generals who had freed the enslaved in the territories they commanded. 
Hesitancy, rather than strong morality, seemed to be the overriding theme of Lincoln’s approach, at least in the earlier years of the war, and many were disappointed with him for this: William Lloyd Garrison, for example, claimed that “the President can do nothing for freedom in a direct manner, but only by circumlocution and delay”.[3] Likewise, the Emancipation Proclamation did not free a single enslaved person by itself, which hardly suggests that the prevailing factor fuelling it was any great moral conviction. It freed no enslaved people in the border states, or in those Confederate areas already occupied by Union troops. Thus, the only people it ‘freed’ were those still in areas controlled by the Confederacy: the places to which Lincoln’s power did not extend. The limitations of the order indicate that top policy makers still hoped to conciliate secessionists and keep the border states (Missouri, Kentucky, Maryland, Delaware and eventually West Virginia) on side by not enacting a law that would cost them their property, implying once more that unity was seen as the goal. Lincoln explicitly said that emancipation was a “military necessity, absolutely essential to the preservation of the Union. We must free the slaves or ourselves be subdued. The slaves were undeniably an element of strength to those who had their service, and we must decide whether that element should be with us or against us.”[4] Though opinions were changing towards using African American men in combat, this was driven principally by the circumstances of the war, not because the enslaved were seen by whites as equal citizens deserving of emancipation in their own right. If they could help the Union, they could be freed. Many echoed this view that the Union and victory came first. For example, when Postmaster General Montgomery Blair was appointed to Lincoln’s cabinet, he declared: “I am for the Union, now and forever, and against all its enemies, whether fire-eaters or abolitionists.” While he did not oppose ending slavery per se, he feared that it would further polarise politics and would give the impression that they were fighting a revolutionary struggle, rather than one for the restoration of the Union.[5] This sentiment was felt at a more popular level, too: a white private from New York said that “we must first conquer and then it’s time enough to talk about the dam’d niggers”.[6] Morality was not always at the forefront of the minds of those involved. Nevertheless, the succession of secessions following Lincoln’s election in 1860 suggests that many at least believed the Republicans were a true threat to the survival of slavery. Before there was even a military necessity for emancipation, many objected to the election of the new president on the grounds that he had campaigned on an antislavery platform. In 1854, Lincoln had denounced slavery as a “monstrous injustice” and attacked Stephen Douglas for his indifference. In a letter to the soon-to-be Vice President of the Confederate States, Alexander Stephens, Lincoln wrote: “you think slavery is right and ought to be extended; while we think it is wrong and ought to be restricted. That I suppose is the rub.”[7] The South knew his speeches by heart and had no doubts about what his stance on slavery meant. Thus, South Carolina, the first state to secede, did so with little hesitation on December 20th 1860, within weeks of Lincoln’s election. It was soon followed by six other states. 
For all that Lincoln would show himself to prize the integrity of the Union, he ‘refused to yield the core of his antislavery philosophy to stay the breakup of the Union’, according to James M. McPherson.[8] McPherson argues that while the traditional view of Lincoln as the ‘Great Emancipator’ is problematic, ignoring as it does the agency of the enslaved themselves and potentially propping up a white myth that deprives African Americans of credit, the opposite extreme is inaccurate as well: Lincoln did have a huge hand in freeing the enslaved. His role may have been exaggerated, but it was still Lincoln who issued the proclamation. Likewise, John David Smith argues that Lincoln ‘chartered a far more linear course toward freedom than his nineteenth century critics and modern historians have recognised’.[9] Many at least believed that the new president held a moral conviction that slavery was wrong and would thus threaten their interests, even if the strength of this morality is debatable. It was their perceptions that were important. That the resulting Proclamation was powerful and became symbolic of freedom cannot be denied: Michael Vorenberg claims that it ‘defined the Civil War as a war for black freedom’.[10] Even though attitudes were changing as the war continued, freeing the enslaved was still a risky move and surely relied on holding to principles of freedom at some level. When the Confederacy was seemingly winning the war, the need to keep the precariously positioned border states on side was particularly pressing, especially after Arkansas, Tennessee, Virginia and North Carolina seceded. Even when the tide appeared to be turning towards the Union in 1862, it was still important to keep the remaining border states on side, lest they swing the balance. Together, the eight slave states of the border and upper South contained 65% of the white population of the South, along with 60% of the South’s livestock and crops. Their secession would thus give the Confederacy a huge resource boost. Given that two thirds of the border state representatives signed a manifesto rejecting the proposal for compensated emancipation, giving up efforts to conciliate must have seemed particularly dangerous for the Union. Recruiting free blacks for the army may have been a military necessity, but it was an uncertain tactic nonetheless. The fact that many on Lincoln’s own side were sceptical reinforces this point; George B. McClellan’s ‘Harrison’s Landing Letter’ showed the divisions of the North over the purpose and direction of the war. He claimed that private property should be respected, and that the war should not be “looking to the subjugation of the people of any state, in any event”.[11] When this context of uncertainty, hostility and risk is examined, it indicates that there may have been other reasons for delay than just a shortcoming in moral conviction. It was no accident that it was precisely these difficult, hard-to-hold areas that were excluded from the Emancipation Proclamation. However, the experiences of African Americans in the armed forces suggest that they were viewed in terms of their military usefulness, rather than as true US citizens whom the Union had a moral duty to liberate. Those enrolled in the United States Colored Troops (USCT) were confronted with racism in the army and found that their experiences of combat and military life were separate and unequal. 
They did a disproportionate amount of manual labour and were paid less, and when they did fight, they faced much greater risks than their white counterparts. The Confederacy threatened to enslave, re-enslave or execute any black soldiers it captured, and 66% of the black troops at the Battle of Fort Pillow in 1864 were massacred whilst trying to surrender. Throughout the war, over a fifth of African American soldiers died of disease, compared to one in twelve white soldiers: over a third of blacks in the army died, but just 2,751 of these were killed in action.[12] Similarly, a surgeon appointed to a USCT regiment in 1863 remarked that “very few surgeons will do precisely the same for blacks as they would for whites”.[13] An already difficult and scarring situation was made much worse for African American soldiers: if moral conviction drove their freedom, it only extended so far. In addition, many whites remained uneasy about arming blacks. General William T. Sherman was among them: he did not trust armed black soldiers and thought they would lower the morale of his white troops. Lincoln again showed the limits to his moral compass with regard to slavery and racism: when meeting with a free black delegation in Washington in 1862, he promoted black colonisation once again, saying that both blacks and whites would benefit from separation and that to do otherwise would be “selfish”. He also claimed that “many men engaged on either side do not care for you either way”. This indicates that the formerly enslaved were often viewed merely as numbers within the armed forces: emancipation did not stretch to any recognition of equality for many. Yet emancipation was not just a top-down phenomenon, bestowed on passive blacks by benevolent whites. It had been fought for by black abolitionists for decades, and during the Civil War many enslaved people voted with their feet. Some 500,000 enslaved people escaped to Union lines, and about 179,000 of these would serve in the Union army.[14] Sven Beckert argues that ‘American slaves pressed to make a sectional war into a war of emancipation’: the change over time in thinking towards emancipation was largely due to the (formerly) enslaved themselves. For example, even before the fall of Fort Sumter, in March 1861, eight runaway enslaved people arrived at a federal garrison in Florida, hoping that in crossing to Union-controlled ground they would gain their freedom.[15] They were thus forcing the issue of emancipation themselves: they had no need for any ‘great emancipator’. The courage demonstrated by the African American soldiers gradually changed public perceptions of them too. Abolitionist Angelina Grimké Weld stated that “their heroism is working a great change in public opinion, forcing all men to see the sin and shame of enslaving such men”.[16] While not all shared such a conviction, many were beginning to recognise the worth of having African American soldiers in the military. For example, a woman from Kentucky wrote in late 1862: “I am no abolitionist but I am for closing this war as quickly as possible and if [it] can [only] be done by freeing all the niggers, let them go.”[17] Frederick Douglass claimed that “the opportunity is given to us to be men”.[18] In 1861, he had penned an article called ‘How to end the war’ in which he argued that the enslaved people should be a liberating army. 
This suggests that military necessity and moral conviction do not have to be separate: Douglass clearly believed that slavery was a moral outrage, but also saw military service as a way to end slavery and prove the rights of the black population to equality. In conclusion, moral conviction was a necessary, but not sufficient, driver of emancipation during the American Civil War: the primary factor was in fact military necessity. Although there was a significant moral component in the coming of the Emancipation Proclamation, emancipation itself did not occur until midway through the Civil War, when it became clear that tapping into the resources of African American manpower could help secure a Union victory. The liberating effect of the Proclamation is undeniable, but it would be inaccurate to overemphasise the moral reasoning behind it, at least on the part of those in power. Lincoln himself said over and over that the Union was at the core of everything he did: emancipation was not only an end in itself, but also a means to victory. Of course, emancipation was not just a ‘great man’ story; thousands of enslaved people forced the administration’s hand by voting with their feet, and for years black and white abolitionists alike had been agitating for freedom. But, as a policy, military necessity was the main driver of emancipation. Chantelle Lee wrote this essay while in her final year of a BA in History at Cambridge University (Sidney Sussex College). She has now graduated from Oxford University (Mansfield College) with an MSt in US History. Notes: [1] James McPherson, Battle Cry of Freedom: The Civil War Era (Oxford: Oxford University Press, 1998), p.510. [2] Michael Vorenberg, Final Freedom: The Civil War, the Abolition of Slavery, and the Thirteenth Amendment (Cambridge: Cambridge University Press, 2001), pp.46-47. [3] John David Smith, ed., Black Soldiers in Blue: African American Troops in the Civil War Era (Chapel Hill: University of North Carolina Press, 2002), p.22. [4] James McPherson, Battle Cry of Freedom, p.504. [5] Adam I.P. Smith, No Party Now: Politics in the Civil War North (New York: Oxford University Press, 2006), pp.49-50. [6] James McPherson, Battle Cry of Freedom, p.497. [7] James M. McPherson, “Who Freed the Slaves?,” Proceedings of the American Philosophical Society 139, no. 1 (1995), p.5. [8] Ibid. [9] John David Smith, ed., Black Soldiers in Blue, p.xiii. [10] Michael Vorenberg, Final Freedom, p.1. [11] Adam I.P. Smith, The American Civil War (Houndmills, Basingstoke, Hampshire: Palgrave Macmillan, 2007). [12] John David Smith, ed., Black Soldiers in Blue, pp.39-46. [13] Ibid., p.42. [14] John David Smith, ed., Black Soldiers in Blue, p.xiii. [15] Adam I.P. Smith, The American Civil War, p.93. [16] Michael Vorenberg, Final Freedom, p.37. [17] Adam I.P. Smith, The American Civil War, p.102. [18] John David Smith, Black Soldiers in Blue, p.28.
- The influence of gender relations in the South on the ideology of slavery between 1670 and 1865
The ideology of slavery, defined in this essay as the notion of complete ownership and mastery of African American slaves that centred around slaveholder identity, shaped the social hierarchy upon which white liberty and security could be built. Due to the obsession with conceptions of manhood, honour and power, achieved only through the maintenance of total control over everything the slaveholder owned, the relationship between African American women in particular and white men was ultimately defined by the utmost violence and subjugation. Although white women were regarded in a similarly subordinate manner to black women on account of their gender, issues of identity, namely that “if [the African American woman] is rescued from the myth of the Negro, the myth of woman traps her. If she escapes the myth of woman, the myth of the Negro still ensnares her,”[1] meant that deep-rooted patriarchal views of gender solidified slavery’s place within society. Above all, the factor influencing both gender relations and the ideology of slavery in the South was the conception of masculinity and the resulting ideas of mastery and honour. Wood claims that “the linkage of race and class in white gender identity lies at the heart of mastery and honour.”[2] Nowhere can attitudes to masculinity, and their consequent effect on breeding and maintaining the institution of slavery, be evidenced more clearly than in the indoctrination of young white men. “No less intense than the influence of slavery was the parental insistence upon early signs of aggressiveness, demanded by notions of white master-hood, before the child met the outside world at school. The male child was under special obligation to prove early virility.”[3] The fact that white men were taught from such a young age the values of honour which were so vital to ensuring slavery’s success demonstrates contemporary views of masculinity and, by implication, the conceptions of women as the opposite. Wyatt-Brown argues that “without such a concept of white liberty, slavery would have scarcely lasted a moment.”[4] This is captured in William Byrd II’s obsession with the complete control of his plantations, family, and slaves: his diaries provide a constant commentary on the struggle to keep control of his assets. Because masculinity and honour were so intrinsically linked in the 18th- and 19th-century South, punishment of and expectations for enslaved African American men needed to conform to the white man’s notion of masculinity. Left to do hard labour, the majority of male slaves were in the fields for 12-16 hours per day picking cotton or harvesting sugar cane. Although some women did partake in this work, much of it was undertaken by men as a result of conceptions of gender and the idea that men should be responsible for the physical aspects of labour. In this way, Southern ideals of manhood were translated into the everyday running of slavery. The slave woman was at the very bottom of the societal hierarchy, due to the impossibility of her escaping her identity. For the slave woman, whose narrative is characterised as “infantile, irresponsible, submissive, and promiscuous,”[5] gender relations reinforced her position as the most inferior member of society. As such, the aforementioned white man’s obsession with total mastery of property extended to the black woman’s entire being, resulting in horrific acts exacted upon her. 
Camp argues that “in many instances female gender seems to have served as a license for planters’ full expression of violent rage, exposing women to cruel punishment more consistently than men,”[6] which, when coupled with the idea that married women “could not depend on their husbands for protection against whippings or sexual exploitation”[7] because intervention would result in punishment for the husband, illustrates that black women provide the clearest example of how the ideology of slavery and notions of masculine mastery were translated into everyday life. Linked to the master’s violence against women was the idea of amalgamation, or miscegenation. Proslavery writers adopted this term for inter-racial mixing to argue that emancipation would lead to the degradation of the white race, accusing abolitionists of wanting sexual access to black people. Not only did this feed into the white conception that black women were driven by sexual desire; the irony is also astounding, considering the corporeal exploitation masters exacted upon their female slaves. The fact that by the early 20th century almost all Southern states had passed anti-miscegenation laws shows that attitudes towards gender relations helped to protect the institution of slavery. Yet the most common image of the slave woman is that of the ‘mammy’. Responsible for raising the master’s children and running the household when the family did not have time to care for their own children, the mammy was integral to the household’s maintenance. This role consequently brought more constant surveillance than field slaves experienced, showing that gender relations forced African American women into a life of double subjugation – being both a slave and a woman in the South. Given the role of gender relations in reinforcing the ideology of slavery, resistance among slaves is also essential to understanding the experiences of African American women in particular. Among absentees, women partook far more frequently than men, but they were unable to flee entirely to the free North on account of their gender. Camp reinforces this by highlighting women’s significance within the community and family, stating that “many women understood themselves as persons in terms deeply connected to community; and they identified as women in part through their activities on behalf of their families,” and that “community sanctions against women abandoning their children normalised female dedication to the family, and were another pressure that limited the number of women who could escape to the North.”[8] This demonstrates that escape from the cruel sphere in which they lived and worked was more difficult for women than for men, fuelling the white slaveholder’s control over both gender and slavery and corroborating the inescapability of black female identity. If the gender stereotype for the slave woman was one of promiscuity and lasciviousness, the white mistress was the antithesis. Presented as delicate and colloquially styled the ‘Southern Belle’, she helped to amplify the divide between white and black women. Despite being in a subordinate position to the man, the white woman still exercised considerable power in the domestic sphere. However, this narrative of the delicate white mistress was entirely undermined by the onset of the Civil War. 
With the war changing the makeup of Southern society to the extent that roughly one million Southern men fought over its four years, the white wives of slaveholders suddenly gained unwanted power in maintaining and upholding the institution of slavery. Faust claims that “slavery’s survival depended less on sweeping dictates of state policy than on tens of thousands of individual acts of personal domination exercised by particular masters over particular slaves,”[9] and this is best demonstrated through white women’s experiences during the Civil War. With men largely absent from the estates, popular fears of slave rebellion, still shaped by the memory of Nat Turner’s rebellion, increased dramatically and combined with the supposed vulnerability of white mistresses. Confederate leaders were “uneasy about the transfer of such responsibility to women”[10] for this very reason, with Mrs. A. Ingraham of Mississippi writing, “I fear the blacks more than I do the Yankees.”[11] These Confederate fears are exemplified by the way in which white women punished their slaves. Their punishments, typified by passion and impulse, often took the form of hitting slaves rather than the traditional whippings, but “such behaviour was the antithesis of the orderly lashings that male managers idealised. True manly mastery exhibited control, not passion; honour was not satisfied by the meting out of vindictive beatings to social inferiors.”[12] It can therefore be determined that the entire ideology of slavery, built upon mastery, domination and patriarchal control, was being undermined by women because of their gender. Because slavery could not survive without this patriarchal, masculine domination, the onset of the Civil War can be seen to have exposed flaws in the relationship between the genders which had not previously been addressed. Also significant is the legacy of the relationship between gender and slavery in the South. Even after Reconstruction, thousands of American children born after slavery but indoctrinated with previous ideals of white societal and racial domination “took up the cause and reconstituted it on new ground,” with the white home continuing to be where “white women would continue to be ‘ladies’ and managers of domestic spaces, both white and black.”[13] This suggests that Southern attitudes towards gender were so ingrained in slaveholding culture that even after abolition there remained a desire to romanticise and rebuild the hierarchy which promised white supremacy. Although gender relations did significantly contribute to the shaping of slavery in the South, in the North the notion of the ‘cult of domesticity’ (the description of the home as a feminine space) was essentially identical to the role of the domestic Southern female slave, only with white women associating it with independence and freedom whilst still conforming to a traditional gender power dynamic defined by female inferiority. 
The cult of domesticity supported the idea that women were in charge of the moral and spiritual development of their families, yet this is all too reminiscent of the ‘mammy’: the phrase ‘women’s work’ “conjures the domestic […] and inevitably leans in the direction of the family,” as “images of enslaved female house servants tend to populate the collective imaginary.”[14] Hence it is justified to distinguish the two only on the grounds that public and private life in the free North were separate, whereas in the slaveholding South both were spheres of labour. This suggests that race, not gender, was the overarching and most important factor shaping the ideology of slavery in the South, a point also corroborated by the status of poor white men in the South. For these men, who generally lived on poorer agricultural lands and were known pejoratively as ‘hillbillies’ or ‘crackers’, slavery and racism had a powerful appeal because they automatically elevated them within the social hierarchy, meaning there was a limit to how far they could fall in society. By 1860, enslaved people made up 40% of the South’s population, ultimately meaning that poor whites were already within the top 60%, perhaps one reason why there had not been a revolt similar to Bacon’s Rebellion since 1676. Thus, although gender relations certainly played a role in these men’s lives, it was conceptions of race that defined and shaped both their identity and the ideology of slavery most profoundly. Despite gender relations influencing the ideology of slavery, above all issues of race remained of the utmost importance, to the point where northern women seemed to be living a life similar to that of the domestic slave woman, separated only by their race and the guise of slavery. Ultimately, gender relations significantly shaped the ideology of slavery in the South through the idealisation of the dominant white man and the placement of all other members of society underneath him. In contrast to the North, which held women’s rights conventions such as that at Seneca Falls in 1848, Southern conceptions of gender only sought to maintain a patriarchal society fuelled by slavery. Yet while race remained the most crucial factor in the justification of slavery, the importance of gender relations can best be examined with the onset of the Civil War, in particular in how the patriarchal society began to collapse when white men attempted to dominate a society in which they were heavily outnumbered – even by 1710, African Americans in Carolina outnumbered whites 2:1. Gender relations thus highlighted the deepest fears of white slaveholders, fears which were eventually exposed in 1861. Matthew Ainsby is currently in his first year of a BA in History and German at Durham University (University College). Full question when assigned: To what extent did gender relations in the South shape the ideology of slavery between 1670 and 1865? Notes: [1] Deborah Gray White, Ar’n’t I a Woman?: Female Slaves in the Plantation South (New York, 1999), p. 28. [2] Kirsten E. Wood, ‘Gender and Slavery’ in Smith and Paquette (eds.), The Oxford Handbook of Slavery in the Americas (Oxford, 2010), p. 525. [3] Bertram Wyatt-Brown, Southern Honor: Ethics and Behavior in the Old South (Oxford, 1984), p. 154. [4] Ibid., p. 371. [5] White, Ar’n’t I a Woman?, p. 27. [6] Stephanie M. H. Camp, ‘I Could Not Stay There’: Enslaved Women, Truancy and the Geography of Everyday Forms of Resistance in the Antebellum Plantation South, Slavery & Abolition (University of North Carolina, 2002), p. 13. [7] White, Ar’n’t I a Woman?, p. 153. [8] Ibid., pp. 3-4. [9] Drew Faust, Mothers of Invention: Women of the Slaveholding South in the American Civil War (University of North Carolina, 1996), pp. 53-54. [10] Ibid., p. 54. [11] William F. Pinar, ‘The Gendered Civil War in the South’, Counterpoints 163 (2001), p. 245. [12] Stephanie Camp, Closer to Freedom: Enslaved Women and Everyday Resistance in the Plantation South (University of North Carolina, 2004), p. 132. [13] Thavolia Glymph, Out of the House of Bondage: The Transformation of the Plantation Household (Cambridge, 2008), p. 20. [14] Jennifer Morgan, Laboring Women: Reproduction and Gender in New World Slavery (University of Pennsylvania, 2004), p. 145.