- What did early modern European rebels aim to achieve?
The most common historiographical stance regarding early modern revolts is that rebels were motivated by ‘blind rage’ and that their actions were merely reactive responses to grievances brought on by a backdrop of climate crisis, economic hardship and religious affront. However, while early modern rebels did require a trigger to enact protest, their real aim was political negotiation. Given the lack of widespread lower-class representation, coupled with the expansion of early modern monarchies “over new territories that often were ethnically quite different from the rest of their realms,”[1] mass crowd revolt was necessary for significant change to be effected. Through an examination of the Naples Revolt of 1647 and other contemporary European rebellions, this essay will argue that although economic and religious grievances were causes of the outbreak of riots, they are ultimately better seen as factors leading to the organisation and unity of the crowd, and thus as mediums through which rebels attempted to further their political and social aims.

The revolt of Naples in 1647 is totemic of early modern urban revolt. With the city ruled by a Spanish viceroy and rumours that the gabelle (salt tax) would soon be implemented, the preconditions provided the perfect mixture of factors required for popular revolt. Although initially seeming a disorganised mob, the rebels quickly became unified under Masaniello, leading to the successful burning of the palace of the tax collector. It is interesting to note how the rebels attempted to legitimate their movement through religious doctrine: they believed that the Virgin Mary and other saints were on their side, painting the bourgeoisie in a pejorative light. Through this, along with the clear contempt at an increase in tax, Mousnier’s idea of a ‘vertical’ revolt (i.e., one directed against the hierarchical structure of state domination) is borne out.
Briggs summarises Mousnier’s position, stating that he saw early modern revolts “as a reaction against the expansion of centralised royal power,”[2] and this is evidenced in the 1647 revolt, in which the rebels used economic and religious injustices to attempt to negotiate politically. Furthermore, the success of the revolt is evidenced by the creation of the Neapolitan Republic in November 1647, intended to secure autonomy from Spanish rule, lending further support to Mousnier’s position. Historiographical accounts tend to hold the preconditions of revolt to be “social and economic grievances” brought on by “rising food prices, dearth and taxes,”[3] and hence the aims of the rebels to be merely the quelling of such problems. What is more convincing, however, is that rebellion was a form of political negotiation which required mass cohesion and unity of the crowd to bring peasant desires to the fore, ultimately culminating in the successful implementation of rebel aspirations.

Davis defines religious riot in sixteenth-century Europe as “any violent action, with words or weapons, undertaken against religious targets by people who were not acting officially and formally as agents of political and ecclesiastical authority.”[4] This definition suggests that early modern rebels sought to protect the sacredness of their religion and desired to keep it ‘pure’ above all else.
In France during this period, Davis also argues that the main goal of religious violence was “the defence of true doctrine and the refutation of false doctrine” combined with ridding the community of “pollution.”[5] This is evidenced by Catholic crowds in Angers throwing a French bible into the river, and by Protestant masses rioting against the Catholic conception of the altar rail, which meant the only way to reach the cross was through association with a priest; in Protestant churches, by contrast, the only requirement to connect with God was the Bible. Hence, the defence of religious doctrine appears to have been the main aim of early modern rebels. However, Davis fails to consider their broader aims, choosing instead to focus on instances of iconoclasm and riot between Catholics and Protestants in France. While the protection of religious doctrine was a significant aim, it is more convincing to argue that this goal was a stepping stone for rebels to express political discontent.

The German Peasants’ War of 1525 provides a perfect example of this. Breaking out in the wake of the Reformation, it saw farmers and peasants inspired by Luther’s Protestant theology, in particular on the correct use of tithes, the refusal to render which became “widespread during 1523 and 1524, the very years when peasant indebtedness mounted because of stringent collection of rents and taxes at a time of bad harvests”[6].
Yet Cohn goes further in highlighting the political negotiation behind the peasants’ rejection of serfdom, stating that there emerged “political conflict between a well-established tradition of peasant self-government and the growing power of the German territorial states.”[7] The growing popular tide of anti-serfdom throughout the nation could only have been brought to the fore by the inspiration of religious grievances stemming from the Reformation, and the revolt therefore provided the perfect opportunity for the peasants to declare autonomy from a centralised state, despite its ultimate failure. When compared with the Revolt of Masaniello in Naples in 1647, in which the crowd believed that the saints and the Virgin of the Carmine provided religious justification for their movement, it is evident that religious grievances, although important, were the perfect opportunity through which to express a united political discontent.

Marxist historiographical thought argues that rises in food prices and food shortages caused revolt, and that early modern rebels rebelled in order to survive. This is displayed in the riots in the French town of Agen in June 1635, sparked by the rumour of an imposition of the gabelle. The town councillors highlight the power of the mob, emphasising the range and scale of its influence through a particularly graphic and vivid account of the upper-class victims, stating that the mob “killed and butchered [the sieur d’Espalais] on the spot.”[8] Although the source must be handled with caution on account of its perspective (the rebels were protesting against the very councillors who wrote it), it is still useful in presenting the extent of the rioters’ displeasure, showing how serious the prospect of economic ruin was in the collective mind of early modern peasants.
Yet despite economic affront contributing to early modern discontent, once more the defence of community and the notion of political representation proved more appealing to rebels. As shown in Bordeaux in 1675, French taxpayers protested in light of the king’s imposition of numerous taxes on the sale of tobacco, stamped paper and juridical business. Culminating in numerous deaths among the authorities, corpse mutilations, and eventually the forcing of “a package of demands...effecting a broad range of changes,”[9] such as the exiling of the Parlement from the city until 1690 and the implementation of shared peasant demands, the revolt provides a perfect example of how a voice was given to long-standing political grievances under the guise of economic distress. In this way, the similarities with the Naples revolt are clear: urban protestors using the prospect of economic despotism to go further in expressing their united political and social agenda. Thus, although Marxist historiography is not wrong in its belief that early modern rebels aimed to achieve economic security, it is limited in its extent: it refuses to deal with the broader political aims which had been brewing amongst the popular community, displayed only once economic grievances could be used as justification.

What is interesting to note is that the idea of self-protection and a defence of peasant interests is a common factor running throughout different types of early modern revolt: in the case of both urban and rural protest (and sometimes even civil war), rebels continually used religious and economic factors as mediums through which to negotiate politically.
The Naples Revolt, the German Peasants’ War and the riots in Agen all took place in different settings (Naples in the city, the Peasants’ War in the countryside and Agen in the town), yet all demonstrate the similar overarching aim of illustrating popular discontent in an attempt to maintain traditional peasant ideas of economic and social security. Bercé sheds light on the rebellion of the Tard Avisés (the nickname ascribed to the peasants of the movement) in 1593, in the wake of the French Wars of Religion. Despite minimal religious factors contributing to the outbreak of protest, the increasing taxation resulting from the costly wars of the previous decade proved too constrictive for the peasants and is commonly perceived to be its cause. Bercé states that “waves of revolt had rolled across the country from the […] Limousin-Périgord region and were spreading south into Agenais, Quercy, and even Gascony,”[10] supporting this accepted view of popular discontent as a result of economic constraints. Yet what is more convincing is that a more basic aim lay at the heart of this rebellion born of civil war; namely, resistance to the growing power of those implementing the taxation, who sought to reduce the peasant population to “slavery,” whilst also “snatching away their rustic liberties, encroaching on their lands, and shattering the old cohesion of their small communities.”[11] This suggests that at the core of revolts arising from civil wars, alongside urban and rural revolts, a shared, cohesive idea of preservation and protection of tradition and community was the true aim of early modern rebels, with economic and religious grievances acting as foundations upon which discontent could be presented.
It is therefore significant to note that, although many early modern rebels across Europe were perceived to be simply reactive, disorganised mobs, they all shared an idea of self-preservation and aimed to protect their established livelihoods amid a turbulent climate of global crisis. As previously mentioned with regard to the Agen source, this ‘blind fury’ narrative could perhaps be a result of the perspective of those who wrote the accounts of events: the inherent conflict between literate councillors and disgruntled peasants is certain to shift the focus in favour of the former. After all, Farge corroborates this view in claiming that such documenters “took advantage of social discontent so as to launch their gross and ill-intentioned products,”[12] and hence their social prestige. It is clear, therefore, that the primary goal of early modern rebels was the protection of their own livelihoods through the only effective and viable means of political negotiation: revolt. Although precipitated by economic and religious triggers, these simply acted as justifiable issues upon which rebels could convey their distaste and attempt to settle their own political matters, the same matters which the early modern state did little to represent.

It is also intriguing to consider the similarities between Western European revolt and Eastern rebellion: early modern Ottoman riots such as the Celali Revolt show that this trend of ‘global crisis’ was not limited to Western Europe.
White states that “it is no longer tenable to blame the empire’s troubles of the 1600s simply on the decay of old institutions or the challenges of a rising Europe.”[13] Especially with regard to the Celali Rebellion, where gangs of bandits coalesced into one united force as a result of famine caused by the Little Ice Age crisis, it is apparent that Ottoman despotism was challenged from within, contradicting the traditional perception of corruption as the Empire’s downfall. Thus, early modern rebels across Europe appear to have had similar motivations for revolt.

Matthew Ainsby has just completed his first year of a BA in History and German at Durham University (University College).

Notes:
[1] Julius R. Ruff, ‘Riots, Rebellions and Revolutions in Europe’, in Robert Antony, Stuart Carroll, Caroline Dodds Pennock (eds.), The Cambridge World History of Violence (Cambridge, 2020), p. 473.
[2] Robin Briggs, ‘Peasant Revolt in its Social Context’, in Briggs, Communities of Belief: Cultural and Social Tension in Early Modern France (Oxford, 1989), p. 2.
[3] Peter Burke, ‘The Virgin of the Carmine and the Revolt of Masaniello’, Past & Present, Vol. 99 (1983), p. 4.
[4] Natalie Zemon Davis, Society and Culture in Early Modern France: Eight Essays (London, 1975), p. 153.
[5] Ibid., p. 156.
[6] Henry J. Cohn, ‘Anticlericalism in the German Peasants’ War 1525’, Past & Present, Vol. 83 (1979), p. 7.
[7] Ibid., p. 14.
[8] Richard Bonney (trans. and ed.), ‘Description, by the town councillors of Agen, of the sedition which arrived in the town on 17 June 1635’, in Society and Government in France under Richelieu and Mazarin, 1624-61 (London, 1988), p. 203.
[9] William Beik, Urban Protest in Seventeenth-Century France: The Culture of Retribution (Cambridge, 1997), p. 157.
[10] Yves-Marie Bercé, Revolt and Revolution in Early Modern Europe (Manchester, 1988), p. 105.
[11] Ibid., p. 106.
[12] Arlette Farge, Subversive Words: Public Opinion in Eighteenth-Century France (Cambridge, 2004), p. 34.
[13] Sam White, The Climate of Rebellion in the Early Modern Ottoman Empire (New York, 2011), p. 298.
- Fascism and the Ontology of Man
Fascism has proved to be a term stretched in every direction. In the process, the accommodation of who constitutes a “fascist” has been extended to varying demographics. Today, the term is often used to refer to individuals inclined towards authoritarianism: the far-right are fascists, xenophobic nationalists are fascists, even religious fanatics are fascists. Does this contemporary, catch-all use of fascism help us understand what underlies the initial inclination towards fascist ideology? Historically speaking, only two fascist regimes achieved any tangible sense of political power: Fascist Italy and Nazi Germany. It is true that there remains debate around whether other regimes can be considered in this bracket – Franco’s Spain comes to mind – but that is a conversation for another day. Further, there is also discussion about what the structure of power is and why it matters for definition. As the American political scientist Robert O. Paxton has argued in his essay on fascist doctrine, the reason why these two regimes are so commonly used in fascist discourse is their relationship to power.[1] Paxton explains in his Anatomy of Fascism that it is difficult to determine the agency of the various other fascist organisations of the interwar period (1918-1939) because of their absence of political hegemony.[2] This is not wrong, and Paxton’s observations on these two regimes are essential for gathering a sense of how a fascist organisation maintains power – albeit short-lived – when it is in the position to do so. However, as the societal associations with fascism have shown, fascist discourse and identity are not necessarily restricted to tangible power. Our own openness – whether well measured or incendiary – to using fascism as a language invites a much-needed inspection of fascism as an existence.
I started my thesis with an insatiable itch to delve into the recesses of the human inclination towards fanaticism. Further, I wanted this lens to fall over masculinity, primarily because of man’s predisposition towards extremity and the way in which this has been unfortunately oversimplified. Initially, I proposed to examine the fanatical manifestations in art and propaganda in both communist and fascist doctrines within interwar Europe. Unsurprisingly, this scope was out of the question for a PhD programme, as its sheer size would better suit a ten-year period of research, not a meagre three. However, the proposal has not been abandoned entirely. After revision and scrutiny, a distinct focus was put on fascism, and in order to derive as much as possible from relevant source material, a case study was identified in British fascism. In all their mongrelised outfits – The British Union of Fascists (BUF), The Imperial Fascist League (IFL), The British Fascist(i) (BF), The Nordic League, The Link, The National Socialist League… and so on – the cohorts of British fascists all had at their root particularly strong relationships with the concepts of this article: rebirth and renewal. Before diving into the pertinent relationship between fascism and rebirth, it is critical to briefly outline some of the working definitions scholars have to hand. Having these for reference will make it possible to discuss the article’s subject matter purely in the context of fascism. Further, the definitions show how ingrained the concepts of rebirth and renewal are in fascist discourse. The premeditations and subconscious ruminations that lead to the adoption of a fascist typology have come to feature in the transformation of the definition.

Definitions:

Two of the earlier models of what constitutes fascism come from the schools of totalitarianism and Marxism.
In her seminal work, The Origins of Totalitarianism, Hannah Arendt draws a distinction between the authoritarian fascist (Fascist Italy in this case) and the totalitarian fascist (Nazi Germany).[3] Arendt acknowledges that the use of the term totalitarianism had to be somewhat revised due to Mussolini’s own symbolic use of the term. Fascism, she argues, was authoritarian in its praxis and placed strong emphasis on the dictatorial character.[4] However, this did not necessarily mean it was totalitarian, which Arendt identifies solely in Nazism and Stalinism.[5] The key difference for Arendt was the way in which power was concentrated within a small group of individuals that formed a novel government.[6] The totalitarian model was then used to coerce a select population into inadvertently committing to a process of oppression that transcends the individual, eventually leading to the collective desire for world domination. Similarly, Marxist definitions of fascism identify the concentration of power within a group of individuals who also rely upon the coercion of a population. However, in contrast to the totalitarian model, the Marxist model of fascism is predicated upon the forceful acquisition of wealth and material. In the eyes of the Marxist historian, the fascist was the result of unremitting capitalist production. As Lewis Young and Roger Griffin highlight, fascism was anti-revolutionary in its nature and took its measures against a working-class population.[7] One text often cited to corroborate this underlying reasoning is Alfred Rosenberg’s The Myth of the 20th Century, in which he proposed a solution to the decadent cycles of Marxist revolutionary behaviour.[8] After the seizure of power, the fascists would maintain their position through nepotistic relationships with financial benefactors and aristocratic elites. In doing so, the fascist protected their existence from the emergence of reactionary class conflicts.
These two early theoretical variants have been thoroughly challenged – and reappropriated, which is particularly the case for Marxist definitions of fascism – in academia. It is not the intention of this article to hoist another criticism against them. Rather, the purpose is to discuss how the fascist typology developed in order to accommodate the constructs of rebirth and renewal. Moreover, it must be stated that these are not the two sole progenitors of a definition of fascism.[9] As tempting as it is to include further examples, the focus must remain on arriving at where the definition is today. It is evident from the two models of fascism discussed above that there were glaring limitations in what could be constituted as fascist. With such definitions, the conversation of fascism was restricted to a binary: a discussion of the manifestation of power, and a discussion of the maintenance and use of power. However, as with anything worth niggling over, a conversation began about the character of fascism, the individual involved, and the existence of fascism as an entity separate from material and political constructs. One good example of the broadening of the definition comes from Stanley G. Payne, a historian whose eye has been primarily on Spanish Falangism in the 1930s and beyond. In one of his later works, A History of Fascism, Payne provides the reader with something that mirrors an anthology. From the findings made in his previous research, and their contextualisation within the purpose of this text, Payne compiles a list of what comprises the fascist identity.[10] In doing so, the composites of that list which concern the individual are considered with greater attention. Aspects such as the physical spectacle of fascism, the inclination towards collective violence, and fascism’s portrayal of masculinity as a spiritually deprived shell are factored into Payne’s generic discourse on fascism.
In the same vein, more recent speculation over the definition of fascism takes into consideration more nuanced paradigms. The broadening of how fascism is approached as a theory and ideology has meant its application across multiple schools of scholarship. Eliminationism and genocide, transnationalism, and gender studies are just some of the schools that have integrated a receptive model of the fascist definition in order to permit its use in their respective contexts.[11] With such flexibility comes a consequence: there remains an absence of anything definitive. However, all is not lost, and the historian Roger Griffin has drawn on a wealth of perspectives to arrive at what is one of the most well-received definitions of fascism. His analysis acknowledges fascism as an ideology and as an existence.[12] The definition that Griffin has landed upon over decades of research and collaboration breathes even more life into the subject because it permits exploration of the phenomenological nuances that exist within fascism. Griffin argues that fascism at its root is a:

‘Genus of political ideology whose mythic core in its various permutations is a palingenetic form of populist ultra-nationalism’[13]

Within this definition is the true promise of this article. Of course, the development of the definition of fascism still roars on, and rightly so. With each new development and paradigm comes a new discussion. The reason why Roger Griffin’s definition of fascism is so essential for this article is twofold. Firstly, it is evidence of a shifting thought on consciousness and practice. With Griffin’s definition, the discussion steps out of the limited realm of power and into the realm of ontology. Fascism is not arbitrary; it is rooted in human reason and existence. This is a quite striking realisation, as it forces one to contend with the existence of fascism within oneself.
Secondly, the subtle extrapolations that Griffin makes in his definition lead historians further down the road of asking why. What fascism looks like and what its practices are has been well covered – Paxton is a clear example of this. However, why it manifests as it does, and why it remains such a conflicting entity, still prove to be labyrinths of explanation. There is one burning word in Griffin’s definition that invites the possibility of examining the questions of why: palingenetic.

Palingenesis – Rebirth and Renewal:

Palingenesis typically refers to a process of reincarnation. If something is a ‘palingenetic form’ then it is a model of this process of rebirth. As discussed above, the psychological ingredients that underpin fascism as an ideology have been increasingly factored into the historiography. However, in consideration of British fascism, only the surface has been scratched. It is evident from the research that good analysis has been drawn on the physical manifestation of the ideology, and this is particularly true regarding men.[14] However, if there is to be clarification of the predispositions within masculinity towards the allure of fascism, then there is also the need to apply the term palingenesis further in our analyses. The palingenetic antidote of the fascist ideology impinged distinctly upon four psychological precepts: trauma, addiction, the proximity to death and suicide, and sexuality. Once again, there is the urge to claw at these four precepts separately, but it proves more purposeful for this paper to consult them as a composite clause. The mutilation and chasm The Great War (1914-1918) left behind have been well documented and remain – for now – intact as a societal memory. The ways in which the impact of WW1 is approached in academia have shifted considerably. A debt of understanding is owed to the work done by psychologists and historians regarding the resultant trauma of the Vietnam War (1955-1975).
What were referred to as malingering, cowardice, and symptomatic neuroses during WW1 were re-examined under the newly created framework of Post-Traumatic Stress Disorder (PTSD). Further, there was a conceptual revision of the relation between the severity of war and the unconscious neuralgia caused by it. The clinical and empirical observations made on subconsciousness and unconsciousness were not new roads. During the interwar period, the well-known Vienna Psychoanalytic Society was one of the first organisations to analyse the relationships between the composite clause mentioned above and war. Within this small circle, which never exceeded 150 members, the likes of Sigmund Freud and Wilhelm Reich, to name just two, drew causal links between the dispositions present within man and his proclivity towards extreme ideology.[15] The findings made by the Viennese school, particularly on sexuality and addiction, proved to be uncomfortable, yet today psychoanalysis, and the relation a human has with their own layers of consciousness, is relatively well accepted. Like their European fascist counterparts, the progenitors of British fascism sought out answers to their own psychological ruminations. Most of them were involved in war in a variety of ways: some acted as medical surgeons on the western front (Arnold Leese, Robert Forgan); some served in British regiments in South Africa (AK Chesterton, Henry Hamilton Beamish); whilst others volunteered in auxiliary positions (Mary Sophia Allen, Rotha Lintorn-Orman). Even those who did not participate in WW1 had a direct experience of tangible trauma. An example of this relationship with trauma stems from the suffragette Mary Richardson, who became the BUF’s chief organiser of the women’s section of the party in 1934. In contrast to many of her fascist compatriots, she had not served in combat, but Richardson had a distinct experience of subjugation.
During her hunger strike at HMP Holloway, she recalled in a statement a personal horror of forced feeding and chemical torture.[16] In the letter, Richardson displayed the immediate existential impact the trauma had on her memory and agency. She wrote:

‘Sleeplessness is an accompaniment of the hunger strike, but more especially of forcible feeding, when one suffers from horrible nightmares and this in spite of the fact that medicines containing drugs to quiet the nerves are administered… It is therefore more than a wish or desire, it is an entreaty from me that you will stop this prison torture before this last stage of satanic statecraft is reached; this last fiendish element added to the torture of suffragist prisoners.’[17]

In the difficult search for meaning and reason – both conscious and unconscious – heads turned towards the allure of fascism. The offer of a possible rebirth away from the embedded memories of horror and torture held a distinct presence in the ideology. However, despite the lofty promises made by the fascist utopia, the psychopathologies could not be shaken, and with them came the collective motifs commonly associated with fascism. Eliminationism, ethnocentrism, nationhood, and the volition over the outsider in the physical realm supplanted the individual existential ambiguities that so many members of British fascist discourse had. For man, the appeal of the fascist ideology during the interwar period was even more potent precisely because of the allure of rebirth. In an era where modernist design in art and sexuality eroded the ideals of the heroic, man’s meaning looked ever more dislocated. Beyond war and mutilation, the masculine position in existence was perceived to be waning. In addition to the European motifs of the time, the underlying composition of trauma, addiction, death, and sex pushed him closer to the brink of disillusioning extremity.
Despite the memorials dedicated to the hero of The Great War and the remedies offered in marriage, the incessant need for man to find his own meaning amid the subconscious horror within his existence pushed him out to the folds of fascism.

Use and Theory:

To understand fascism is to understand man, and in order to understand the fascist man one must be willing to dive into his ontological apparatus. Many of the studies on masculinity adhere to sociological frameworks that insist upon the overbearing existence of a patriarchy. In this patriarchy there is a hegemony characterised by masculine oppression and the willing suppression of those who do not conform to the stratifications of male behaviour.[18] These theories are sometimes useful in highlighting how structural power can be maintained, but they are fundamentally difficult to apply to interwar fascist discourse, particularly in the case of British fascism. To begin with, a patriarchal structure is predicated upon the existence of a model in which a select group of men maintain power over both women and those who do not fit the normative niche of masculinity. Moreover, the maintenance of this power structure functions on mutual approval between male members within the given organisation. Without such a mechanism of approval there remains the potential of external interference from outside agents. Those who were involved with fascism in the interwar period do not slide into this category without struggle. Firstly, the majority of men involved in fascist ideology came to face death within a relatively short timeframe of its inception in Europe.[19] Whether through involvement in war, suicide, derision, or alcoholism, the man who bought into fascist ideology had to contend with the reality of death, either in himself or among his associates. This alone undermines the validity of the idea of longevity or maintenance.
Secondly, and this is pertinent in British fascism, the involvement of women at the very top of the hierarchical structures does not corroborate the patriarchal framework often associated with fascism.[20] The principal discussions historians need to be having regarding fascism and masculinity concern the transgressions behind the seemingly utopian ideal of male palingenesis and the underlying pathologies that led to such interpretations. It is time for historians and scholars to look at masculinity with new eyes. The sociological models hitherto used offer little for our understanding of male existence and its embroilment with extremity. In the context of fascism, the allures of rebirth and renewal in the distinct form of palingenesis appealed to man’s psychological conflicts. In a world perceived to be full of societal dislocation, the fascist identity looked to be a plausible answer. However, man carried with him his dark disturbances. Perhaps, if there is an opportunity to level with this difficult topic, there will be the chance to understand masculinity, and with that there can truly begin an acknowledgement of the reality of man.

Arron Cockell is currently pursuing his PhD at the University of Glasgow, focusing on masculinity and intellectual and societal history, having completed his MA in Modern History at the University of Leeds.

Notes:
[1] Robert O. Paxton, The Anatomy of Fascism (New York, 2004).
[2] Ibid.
[3] Hannah Arendt, The Origins of Totalitarianism (New York, 1951).
[4] Ibid.
[5] Ibid.
[6] Ibid.
[7] The two sources mentioned in reference to the Marxist definition of fascism are: Lewis Young, ‘Fascism for the British Audience: The Communist Party of Great Britain’s Analysis of Fascism in Theory and Practice’, Fascism, 3.2 (2014), 93-116; and Roger Griffin, ‘Studying Fascism in a Postfascist Age. From New Consensus to New Wave?’, Fascism, 1 (2012), 1-17.
[8] Alfred Rosenberg, The Myth of the 20th Century (1930).
The 1937 third edition of Rosenberg’s text can be accessed here: https://tragicallyhip.neocities.org/files/pdf/Alfred%20Rosenberg%20-%20The%20Myth%20of%20the%2020th%20Century.pdf [9] One must also consider works like Theodor Adorno, et al. The Authoritarian Personality (New York, 1950) and George Orwell. Notes on Nationalism (London, 1945) when discussing the origins of definition. [10] Stanley G. Payne. A History of Fascism (London, 1995) [11] Good sources on fascism and eliminationism are: Daniel Jonah Goldhagen. Worse than War: Genocide, Eliminationism, and the Ongoing Assault on Humanity (London: Hachette, 2009). Aristotle A. Kallis, and António Costa Pinto. Rethinking Fascism And Dictatorship In Europe (Basingstoke, 2014). Aristotle Kallis. Genocide and Fascism: The Eliminationist Drive in Fascist Europe (Oxfordshire, 2008). For transnationalism: Arnd Bauerkämper. Fascism Without Borders: Transnational Connections and Cooperation Between Movements and Regimes in Europe from 1918 to 1945 (New York, 2017). Constantin Iordachi. Comparative Fascist Studies: New Perspectives (Oxfordshire, 2010). For gender perspectives on fascism: Claudia Koonz. Mothers in the Fatherland: Women, the Family and Nazi Politics (Oxfordshire, 2013). Victoria De Grazia. How Fascism Ruled Women: Italy, 1922-1945 (California, 1992) [12] Roger Griffin. The Nature of Fascism (New York, 2013) [13] From: Roger Griffin. The Nature of Fascism (New York, 2013) [14] Sources that have consulted British fascism and gender: Martin Durham. ‘Gender and the British Union of Fascists.’ In Journal of Contemporary History, 27.3 (1992) 513-529. Martin Durham. Women and Fascism (London, 1998). Julie V. Gottlieb. Feminine Fascism: Women in Britain's Fascist Movement, 1923-1945 (London, 2000). [15] The psychoanalyst Wilhelm Reich proposed a theory that fascist identity was rooted in sexual repression within children. 
He argued that the authoritarian family, which actively represses the rational sexual drive in children, inadvertently creates pliable human subjects for authoritarian regimes. See The Mass Psychology of Fascism (New York, 1946). The book was originally published in German in 1933, but was one of the many texts burnt in the Nazi book burnings. [16] Extract of statement from Mary Richardson, titled “Extract of a statement from Mary Richardson on forcible feeding, 6 February 1914 (Catalogue ref: HO 144/1305/248506)” In National Archives. https://www.nationalarchives.gov.uk/education/resources/suffragettes-on-file/mary-richardson/ [17] Ibid. [18] Sources that discuss the patriarchy and hegemonic masculinity as a construct in relation to extremism: R.W. Connell. Masculinities (Cambridge, 2005). Kathleen Blee. "Where do we go from here? Positioning gender in studies of the far right." Politics, Religion & Ideology 21.4 (2020): 416-431 [19] The first use of fascism as a party template came from Benito Mussolini in 1919. Unlike Communism, the fascist ideology was not as dogmatic, and it held roots in syndicalism, corporatism, and the irrationalist philosophies of the fin de siècle. [20] Rotha Lintorn-Orman founded the first British fascist party, the ‘British Fascisti’, in 1923. From the conflict within this group came various splinter organisations such as the Nordic League and the Imperial Fascist League.
- A Gendered Monarchy? How, and to what extent, did gender influence early modern English Monarchy?
The extent to which the ‘gendered’ element or, more simply, gender prejudices shaped society, relative to their theoretical rigidity, is a key focus in the study of early modern England, particularly concerning monarchy given the novelty of queenship. Many feminist historians posit that patriarchal norms dictated how contemporaries viewed monarchy, and that female monarchs were only successful by manipulating this to their advantage.[1] However, to properly assess monarchy, one must appreciate that it has different modes: it is simultaneously a concept, a practice, and an image, the latter drawing upon and manipulating the former two. A narrative that might apply well to one will not necessarily apply to another. This approach allows one to appreciate that while the conceptualisation of monarchy was contested and changing, it maintained an expectation of ‘masculine’ conduct. Practice was, however, different. While female monarchs faced issues Kings would not, both genders were able to exercise power effectively, drawing on divine absolutism. Image nonetheless remained heavily gendered, some representations remaining the preserve of the masculine monarch with others adopted and adapted by Queens regnant. The conception of monarchy was contested, with growing, but not definitive, acceptance of both male and female rule. Our understanding of ideological conceptions of monarchy is limited by their periodic appearance, generally revealing themselves at female succession, and by the fact that we are largely exposed to the conceptions of the higher socio-economic stratum. Nevertheless, visible sources ostensibly demonstrate a prevalence of the belief that monarchy should be exclusively masculine. 
John Knox, long the classic example, castigated women’s rule as “repugnant to nature, contumely to God … the subversion of good order, of all equity and justice.”[2] Nor was he alone, Thomas Becon bewailing: “to take away the empire from a man and give it unto a woman, seemeth to be an evident token of anger toward us Englishmen.”[3] These views were given divine sanction by religious tracts such as the Homily on Obedience which denoted a gendered hierarchy of God, Kings, Princes and governors.[4] However, these proclamations must be contextualised. Becon’s true basis was that Mary’s womanhood was a function of corrupt Catholicism.[5] The primacy of the religious factor over gender is clear as by 1564 he was writing of “the most blessed and flourishing reign of this our most gracious lady Queen Elizabeth”.[6] Indeed, most of those hostile to queenship with Mary’s accession to the throne were fervent Protestants set against a Catholic monarch.[7] Even Knox was prepared to invoke divine dispensation and accept female rule under Elizabeth[8] – admitting later to Elizabeth that his target had been solely Mary.[9] Moreover, these Protestant polemics were equally countered by many such as Sir Thomas Elyot[10] and Sir Thomas Smith, the latter of whom argued that it was ‘bloud and progenie’[11] that really mattered in monarchy. By 1701 this was institutionally recognised with the Act of Settlement prohibiting Catholics on the throne, preferring the Protestant Electress Sophia of Hanover as heir over male Catholic successors.[12] Indeed, under Queen Anne, Daniel Defoe proclaimed “crowns know no sexes”,[13] while for some, a woman was the ideal parliamentary monarch,[14] as the concept of queenship became less novel and the post-1689 constitutional monarchy suited the public’s uneasiness with authoritarian women. Nevertheless, one cannot view monarchy in this period as ungendered. 
Many of the defences of queenship justified it by declaring the suitability of that particular monarch, rather than defending the concept.[15] The widespread preference for men[16] betrays contemporaries’ deeper patriarchal conceptions of society from which monarchy could not be immune, evidenced by concern over female weakness under Queen Anne.[17] This sentiment revealed itself in the manner in which monarchs were expected to rule – keeping to supposedly masculine traits such as rationality, moderation, and sobriety.[18] Monarchs of both sexes were commonly praised for their manly behaviour. Elizabeth was praised by John Foxe for “her princely qualities” while the fact she had “so temperate condition, such mildness of manners” was impressive because of “that sex”, implying that these qualities were viewed as generally alien to women.[19] William likewise was praised for his “manly courage and fury” in military pursuits.[20] Herrup provides sensible qualification for this, however, emphasising that monarchs’ actions were not judged by a binary comparison of masculine or feminine traits but rather across a spectrum.[21] A monarch could be too masculine just as they could be too feminine. She evidences this by arguing that it was James I’s obvious masculinity, aggressive and impatient, that brought him initial disrepute in England.[22] However, these categorisations were not equally prominent – a monarch was much more likely to be attacked for effeminacy than masculinity. 
Indeed, Herrup’s diagnosis of the dislike for James is her only evidence of a monarch being seen as too masculine and is not universally held.[23] There is plentiful evidence of attacks on feminine monarchs – Samuel Pepys decried Charles II’s “horrid effeminacy”[24] while childless male monarchs were ridiculed and, in William III’s case, laden with accusations of cuckoldry and homosexuality.[25] Practice was often more nuanced, with male and female monarchs successfully drawing on their divine right[26] and absolute authority to assert their will.[27] Feminist historians have, alternatively, often stressed the pervasiveness of patriarchal norms, asserting that they critically altered female monarchs’ rule, compared to men’s. This is demonstrated in the supposed politics of courtship in which, at the Elizabethan court, ritual courtship and pretended affection were prerequisites to preferment.[28] However, as Mears has established, the Elizabethan court was more akin to that of male monarchs like Henry VIII than such historians have supposed. Elizabeth’s and Henry’s advisers were chosen on the basis of trust, underpinned by social and familial networks[29] and ideological similarity.[30] Moreover, these advisers were not imposed upon her,[31] nor was she restricted by her use of informal ‘probouleutic’ groups.[32] She utilised these informal consultations as they offered flexibility and privacy, important for sensitive issues such as Mary Stuart,[33] and justified her ability to listen though not necessarily heed advice by divinely ordained absolutism.[34] Queen Anne similarly, even in her reduced constitutional role, legitimised it through a belief in a subtle sacral connection.[35] This is not to say gendered notions of monarchy weren’t influential – they crucially shaped the approaches of the (male) counsellors[36] and imposed on Queens regnant issues of a kind male monarchs would not face. Female monarchs could not assume control of their military operations. 
While Mary I was key in mustering her troops to both seize the throne[37] and defend it during Wyatt’s rebellion,[38] she had to yield control of the battlefield, as did Elizabeth. On marriage too, both Elizabeth and Mary had to contend with counsellors’ attempts to control the process, both continuously petitioning for marriage (as the House of Commons did with Elizabeth in 1559, 1563 and 1566[39]) and trying to dictate whom they married. In November 1553, when Mary was met by a delegation of some of her most powerful nobles trying to persuade her to marry within England, she castigated them for the offence.[40] This highlights both the issues that only female monarchs had to face and their responsive assertion of authority. However, female monarchs’ control of their policy-making did not, as Read has suggested in the case of Elizabeth’s courtship of the Duke of Anjou, amount to a manipulation of gender prejudices, utilising dither and delay to enact political gain.[41] Elizabeth was instead conflicted over personal issues, such as concerns over the age difference, and political ones, notably that the match might bring war with Philip.[42] Nevertheless, by Queen Anne’s reign there were no such manoeuvrings by the court, reflecting the growing acceptance of independently-wielded female authority.[43] Thus monarchy could be practised effectively by both men and women, although it was not without any gendered element, with female monarchs facing issues in a manner male monarchs would not. Image, however, remained heavily gendered. Queenship ensured both the appropriation of many previously exclusively ‘masculine’ representations of monarchy and their adaptation to better appreciate pervasive conceptions of ‘proper’ gender roles and characteristics. 
The appropriation of gendered roles by Queens regnant is evident on Elizabeth’s Great Seal, which depicted her mounted in battle array.[44] Mary I, likewise, assumed a kingly vogue in her royal entry into London in August 1553, at which all monarchical precedent was followed.[45] Elizabeth’s image was even more explicit in its subversion of gender norms, the queen popularly styled as judges and kings from the Old Testament including David, Gideon, and Solomon.[46] Herrup contends that just as Queens appropriated the iconography of men, kings did the same with women, with Elizabeth and James presented as Solomon and Mary I and Henry VIII as Judith,[47] evidence of royal iconography having ‘transcended gender’.[48] However, Herrup’s assertion rests on limited foundations. While praising Elizabeth by likening her to biblical men is well documented,[49] praising British Kings by likening them to female figures is not. Herrup’s only reference originates from John King, who notes only one example and appreciates its exceptionality, stating that “Parker … even compared Henry VIII to Judith.”[50] Moreover, Herrup’s claim fails to account for the ample evidence that monarchical image was not just adopted by Queens but adapted to suit gender norms. Mary I’s coronation, for example, adapted the kingly procession by appropriating the traditions of queen consorts[51] in dress and transport, conveyed through the streets in a litter rather than on horseback under a canopy.[52] In doing so she didn’t present herself solely as a reigning monarch (as a king would) but also as a virgin consort.[53] This duality of asserting monarchy and acquiescing to gender norms continues in Mary’s Great Seal, on which one side depicts Mary holding traditional symbols of royal power, the orb and the sceptre, while on the other she rides side-saddle with flowers in the background.[54] This form of presentation continues with Elizabeth I and Queen Anne. 
The latter followed this queenly mixture of gentility and warrior imagery. She was commonly portrayed as a Deborah,[55] like Mary[56] and Elizabeth,[57] the links to the biblical figure implying divine right, but also in a maternal fashion, such as through the preaching at her coronation of Isaiah 49:23, which includes “queens [shall be] thy nursing mothers”.[58] However, in a demonstration of the gendered nature of image at the time, some monarchical rites remained the preserve of male monarchs, such as tournaments that represented military prowess and chivalric honour.[59] This function was heavily utilised by Prince Philip to exercise symbolic power.[60] Sarah Duncan, however, contends that Philip undertook ceremonial roles “to represent her (Mary’s) kingly persona”,[61] implying a female appropriation of even the most masculine ceremonies. She argues that Philip simply assumed the role that noblemen such as the Earl of Arundel had undertaken when representing Mary in rituals from which she was precluded due to her sex.[62] However, one might equally contend that Philip undertook these roles in his own right, asserting his own masculinity and kingly status, which a nobleman who could claim no monarchical authority of his own was unable to do. This interpretation better fits with both Duncan’s earlier assertion that Mary presented a “public portrayal of traditional gender roles”[63] and Glenn Richardson’s evidence of the exclusive gendering of certain rites of kingship.[64] Richardson demonstrates how hunting was integral to Kings’ expression of masculinity, showing prowess in skills required for warfare and functioning as an allegory of male prowess in the pursuit of women.[65] This latter link was made explicitly, as Kings would have noblewomen follow them during the chase, ready to meet the King’s sexual desires.[66] In conclusion, the extent to which monarchy was gendered in this period differs according to the mode of monarchy analysed. 
The theory of monarchy became, over time, more accepting of female rule, though it maintained ‘masculine’ expectations of how that authority would be exercised. In practice, the influence of gender was limited to the approaches of counsellors and certain issues which Queens regnant faced. Largely, monarchical authority turned on divine right and, before 1689, absolutism, rather than gender. Image nevertheless remained heavily gendered, with female and male monarchs having shared but also distinct presentations. Anton Higgins is currently in his first year of a BA in History at Durham University (University College). Full Question when assigned: To what extent and why was monarchy gendered in early modern England? Notes: [1] See, for example, Elizabeth Russell, ‘Mary Tudor and Mr. Jorkins’, Historical Research, Vol. 63 (1990), pp. 263-76 and Allison Heisch, ‘Queen Elizabeth I and the Persistence of Patriarchy’, Feminist Review, No. 4 (1980), pp. 45-56. [2] John Knox, On Rebellion, ed. R. Mason (Cambridge, 1994) cited in J. A. Guy, Tudor Monarchy (London, 1997), p. 93. [3] Thomas Becon, An Humble Supplication unto God for the Restoring of His Holy Word unto the Church of God (1554) cited in Judith M. Richards, ‘“To Promote a Woman to Beare Rule”: Talking of Queens in Mid-Tudor England’, The Sixteenth Century Journal, Vol. 28, No. 1 (Spring 1997), p. 115. [4] The First Book of Homilies, ‘An exhortation to obedience’, http://www.anglicanlibrary.org/homilies/bk1hom10.htm (last accessed 14th February 2022) [5] Constance Jordan, ‘Woman’s Rule in Sixteenth-Century British Political Thought’, Renaissance Quarterly, Vol. 40, No. 3 (Autumn 1987), p. 430. [6] January 17th 1564, preface to The Early Works of Thomas Becon, ed. John Ayre (Cambridge, 1844), pp. 1-32 cited in Richards, ‘To Promote a Woman’, p. 117. [7] Jane E. A. Dawson, ‘The Two Knoxes: England, Scotland and the 1558 Tracts’, Journal of Ecclesiastical History, Vol. 42 (1991), p. 562. 
[8] Richards, ‘To Promote a Woman’, p. 116. [9] Dawson, ‘The Two Knoxes’, p. 561. [10] Richards, ‘To Promote a Woman’, p. 107. [11] Sir Thomas Smith, De Republica Anglorum (1583; reprint, ed. Mary Dewar, Cambridge, 1892), pp. 64-5 cited in Richards, ‘To Promote a Woman’, p. 103. [12] Robert Tombs, The English and Their History (London, 2014), pp. 309-10. [13] Hannah Smith, ‘“Last of all the Heavenly Birth”: Queen Anne and Sacral Queenship’, Parliamentary History, Vol. 28, Issue 1 (February 2009), p. 146. [14] Matthew Prior, Poems on Several Occasions (1709), p. 292 cited in Smith, ‘Queen Anne and Sacral Queenship’, p. 146. [15] See, for example: Smith, De Republica Anglorum, pp. 64-5 cited in Richards, ‘To Promote a Woman’, p. 103; John Aylmer, An Harborowe for Faithful and Trewe Subjects (Strasburg, 1559), sig. G4 cited in Jordan, ‘Woman’s Rule’, pp. 439-40. [16] Richards, ‘To Promote a Woman’, p. 102. [17] Smith, ‘Queen Anne and Sacral Queenship’, p. 146. [18] Cynthia Herrup, ‘The King’s Two Genders’, Journal of British Studies, Vol. 45, No. 3 (2006), p. 500. [19] John Foxe, Actes and Monuments of these Latter and Perillous Days, Touching Matters of the Church (1563), p. 1711 in A. F. Pollard (ed.), Tudor Tracts 1532-1588 (London, 1903), p. 335. [20] Edward Terry, The Character of His Royal Highness, William Henry, Prince of Orange (London, 1689), p. 6 cited in Owen Brittan, ‘The print depiction of King William III’s masculinity’, The Seventeenth Century, Vol. 33, No. 2 (2018), p. 223. [21] Herrup, ‘The King’s Two Genders’, p. 499. [22] Judith Richards, ‘The English Accession of James VI: “National” Identity, Gender and Personal Monarchy of England’, English Historical Review, Vol. 117 (June 2002), pp. 513-23 cited in Herrup, ‘The King’s Two Genders’, p. 503. [23] Herrup, ‘The King’s Two Genders’, p. 503. 
[24] Paul Hammond, ‘The King’s Two Bodies: Representations of Charles II’, in Jeremy Black (ed.), Culture, Politics and Society in Britain, 1660-1800 (Manchester, 1991), pp. 21-22 cited in Herrup, ‘The King’s Two Genders’, p. 504. [25] Brittan, ‘The print depiction of King William III’s masculinity’, pp. 228-9. [26] Guy, Tudor Monarchy, p. 83. [27] Natalie Mears, Queenship and Political Discourse in the Elizabethan Realms (Cambridge, 2005), p. 88. [28] Guy, Tudor Monarchy, p. 90. [29] Mears, Queenship, p. 71. [30] Ibid., p. 65. [31] Ibid., p. 78. [32] A. N. McLaren, Political Culture in the Reign of Elizabeth I: Queen and Commonwealth, 1558-1585 (Cambridge, 1999), pp. 137-43 cited in Mears, Queenship, p. 82. [33] Natalie Mears, ‘The Council’ in Susan Doran and Norman Jones (eds.), The Elizabethan World (Oxford, 2011), p. 65. [34] Guy, Tudor Monarchy, p. 98. [35] Smith, ‘Queen Anne and Sacral Queenship’, p. 145. [36] Mears, Queenship, p. 102. [37] Sarah Duncan, Mary I: Gender, Power, and Ceremony in the Reign of England’s First Queen (2012), p. 16. [38] Elizabeth Russell, ‘Mary Tudor and Mr. Jorkins’, Historical Research, Vol. 63 (1990), p. 274. [39] Heisch, ‘The Persistence of Patriarchy’, p. 47. [40] Judith M. Richards, Mary Tudor (Oxford, 2008), p. 147. [41] Conyers Read, Mr Secretary Walsingham and the Policy of Queen Elizabeth (1960), p. 4 cited in Natalie Mears, ‘Love-making and Diplomacy: Elizabeth I and the Anjou Marriage Negotiations, c. 1578-1582’, History, Vol. 86, No. 284 (October 2001), pp. 442-3. [42] Mears, ‘Love-making and Diplomacy’, pp. 464-5. [43] Tombs, The English and Their History, pp. 309-10. [44] Anna Whitelock, ‘“Woman, Warrior, Queen?” Rethinking Mary and Elizabeth’ in Alice Hunt and Anna Whitelock (eds.), Tudor Queenship: The Reigns of Mary and Elizabeth (2010), p. 174. [45] Duncan, Mary I, p. 19. 
[46] Susan Doran, ‘Elizabeth I: An Old Testament King’ in Alice Hunt and Anna Whitelock (eds.), Tudor Queenship: The Reigns of Mary and Elizabeth (2010), p. 95. [47] Herrup, ‘The King’s Two Genders’, p. 503. [48] Idem. [49] See, for example: Whitelock, ‘Woman, Warrior, Queen?’, pp. 173-189; John N. King, Tudor Royal Iconography, Literature and Art in an Age of Religious Crisis (1989), pp. 254-261. [50] The Exposition and declaration of the Psalme, Deus ultionum Dominus (1539) cited in King, Tudor Royal Iconography, p. 219. “even” is not underlined by King. [51] Judith Richards, ‘Mary Tudor as “Sole Queen”? Gendering Tudor Monarchy’, Historical Journal, Vol. 40, No. 4 (December 1997), pp. 896-902 cited in Duncan, Mary I, p. 25. [52] William Jerdan (ed.), Rutland Papers: Original Documents Illustrative of the Court and Times of Henry VII and Henry VIII (London, 1842), pp. 4-6 cited in Duncan, Mary I, p. 25. [53] Duncan, Mary I, p. 26. [54] Whitelock, ‘Woman, Warrior, Queen?’, p. 174. [55] Carol Barash, English Women’s Poetry, 1649-1714: Politics, Community and Linguistic Authority (Oxford, 1996), pp. 229-31 cited in Smith, ‘Queen Anne and Sacral Queenship’, p. 148. [56] King, Tudor Royal Iconography, p. 219. [57] Richard Mulcaster, The passage of our most dread Sovereign Lady, Queen Elizabeth, through the City of London to Westminster, the day before her Coronation: Anno. 1558 in Pollard (ed.), Tudor Tracts, pp. 367-92. [58] Joseph Hone, ‘Politicising Praise: Panegyric and the Accession of Queen Anne’, Journal for Eighteenth-Century Studies, Vol. 37, No. 2 (2014), p. 154. [59] Paul Hammer, The Polarisation of Politics: The Political Career of Robert Devereux, Second Earl of Essex, 1585-1597 (Cambridge, 1999), pp. 55-7, 199-212, 231-4 cited in Mears, ‘Love-making and Diplomacy’, p. 452. [60] Alexander Samson, ‘Power Sharing: The Co-Monarchy of Philip and Mary’ in Alice Hunt and Anna Whitelock (eds.), Tudor Queenship: The Reigns of Mary and Elizabeth (2010), p. 169. 
[61] Duncan, Mary I, p. 107. ‘her’ is not underlined by Duncan. [62] Idem. [63] Ibid., p. 98. [64] Glenn Richardson, ‘Hunting at the Courts of Francis I and Henry VIII’, The Court Historian, Vol. 18, No. 2 (2013), pp. 127-141. [65] Ibid., pp. 127-128. [66] Ibid., p. 128.
- Bodies, Confused: Doing Transgender History
November 2019. I’m sitting distracted in a lecture on late medieval London: half paying attention, half thinking about what I needed to grab from the big Tesco’s on the way back to college. The lecturer suddenly stops talking, switching slides to project a scrawled manuscript onto the screen. “This is one of the most elusive medieval sources I’ve ever studied-”, he states, “-and the first recorded instance of ‘transsexualism’ in England.” Suddenly I’m not thinking about Tesco anymore. I’m gripped by this story - the story of Eleanor Rykener. London, 1395: a woman calling hirself Eleanor Rykener is arrested after being caught committing “that detestable, unmentionable and ignominious vice” of sodomy with Yorkshireman John Britby. Immortalised on one page of manuscript as a creature of lust, lechery and fallen sin, Rykener’s crime was more than sodomy. Hir crime was complicated by an undefinable sex - ‘physically’ male, with a known male alter identity, Rykener was painted as a homosexual man who “dressed up as a woman”. Whilst only mentioned in passing, Rykener has stuck with me for over eighteen months. Something about hir[1] proximity to power and hir camp disregard for contemporary expectations has stayed with me. A rare glimpse of a medieval trans world is alive in my mind, and I cannot extinguish it. Whilst hir crime was a single instance of sodomy with one ‘John Britby’, during interrogation Rykener admits to a litany of further deviances. Ze[2] was taught the ways of ‘prostitution’ by a diverse network of poor women who gave hir women’s clothing - but most surprisingly, Rykener confessed to working for five weeks in Oxford as an embroideress, and to having sex with at least nine men "as a woman" during this time. Whilst centred around sex, it’s clear Rykener’s relationship with womanhood was more than just a way to make money; living consistently 'in role' for five weeks speaks to a complex understanding of hirself. 
Moreover, ze was “brought [to interrogation] in women’s clothing”, implying that Rykener was interrogated in female dress. This complicates an already confusing picture: Rykener is purported to be a male sodomite, yet allowed to present as female whilst the Mayor and Alderman of London interrogate hir. Rykener sat before two state officials, both of whom were disgusted and intrigued by hir existence and by hir calm confidence in denouncing the many who desired hir. It's clear that the Latin scribe noting down the case was confused by hir. Rykener is described as having sex with men “modo muliebri” - in a womanish manner - and men had sex with Rykener “ut cum muliere”, as with a woman. Yet in the same document Rykener is gendered male: Rykener confesses to having sex with several women, and these relations always gender Rykener as male. Ze slept with women as male, and made conscious decisions to change hir gender expression to sleep with men as a woman. It's unclear whether the scribe or Rykener hirself made this distinction. If Rykener used alternate pronouns then this shows conscious awareness of how hir very existence transcended late-medieval understandings; if the scribe did, then this shows confusion. Confusion, perhaps, at the crimes or at Rykener’s female presentation - and always an attempt to define Rykener in terms that they themselves understood. But beyond this, Rykener confuses the epistemic limits of historical research. When hir manuscript was initially unearthed by historian A. H. Thomas in 1925, he deliberately obscured both Rykener’s gender and the accusations of sodomy. He sums up the entire case as: “An examination of two men charged with immorality, one of which implicated several persons, male and female, in religious orders.” Rykener's complexities are whitewashed. 
More insidiously, the 1995 translation of Rykener’s trial by Boyd & Karras used “[He/Him]” to note when the Latin scribe used pronouns of indeterminate gender, doing so as “the feminine is used only twice [so] it is reasonable and consistent to translate the indeterminate as masculine.” This conscious rewriting of primary sources to omit gender confusion limits the extent to which the historian can analyse how these historical actors operated outside the boundaries of sexual dimorphism - by erasing the interchangeability of perception, this constitutes a rewriting of medieval conceptions of gender fluidity. Our understandings of sex, gender and their binary relationship are not transhistorical, and imprinting them onto historical sources can only be limiting. The confusion is more interesting than the certainty. The blurriness and incompleteness of hir story is something paradoxically tangible - recognisable in contemporary experiences of transness. I’m aware this is projection onto a historical figure of whom there is one surviving source. The image I have of Rykener in my head is romanticised and ahistorical - but I can’t bring myself to think of that as a problem. As a trans person, I cling to whatever of ‘my’ history I can find. The distance between historian and subject in trans history is negligible. Separated by temporal and sexual difference, a historian in twenty-first-century Oxford can feel affinity with a fourteenth-century sex worker because of the trans experience. The trans historian is not only working to uncover the past, they are waging a concurrent battle to legitimate their place in a society attempting to legislate them out of existence. Jules Gill-Peterson’s “crying in the archive” methodology of trans history is inescapable; the archive’s proximity “disperses being overwhelmed by the present”, and this act of recognition functions as our right to exist through historicising transness. 
Rykener helped me to realise that the confusion in my interactions with the world is not because I’m morally wrong, but because society has difficulty conceiving the transgender body. The pronoun switching mid-interrogation - ut cum muliere - reminds me of how my extended family slip up with pronouns, or forget my name. At clinics, I’m asked if I have sex with men who have sex with men, or straight men. My legal sex is both male and female: male on my passport, female on my birth certificate. Reminding me that if I died today, my death would be registered as female. The insistence on referring to Rykener as male, and the later historian’s erasure of hir identity, speaks to my existential fears that after my own death historians will present me as something other than what I am. Like Rykener, my body confuses. I don’t exist as a whole, but as dispersed and contradictory statements. Yet through Rykener I see a strength in recognising one’s own revolutionary potential - an unwillingness to apologise for existing, for surviving and for breaking tradition. The trans person exists as a puzzle to those they interact with - by existing, we have always unsettled the status quo. There is power in the undefinability, something that is often abandoned in favour of cisgender acceptance. Hir gender transgressions stun both the scribe and historian, forcing them to accept on some level that ze was not entirely male; ze did not simply dress up, others consistently perceived hir as female even and especially during sex. Yet in the same breath the scribes and scholars attempt to force hir into boxes that ze most likely didn’t even conceive of - Rykener is a sodomite, a sinner, a transvestite, a crossdresser and a pervert. Rykener cannot speak for hirself, and we can never know how ze thought of hir identity. As historians, we rely solely on how others conceived of hir to understand hir on any level. Neither scribe nor historian knew what ze was. 
Within their frame of reference, Rykener is a deviant of a specific kind. In a way this is comforting. Trans people to this day are defined by hostile interrogators and theorists; part and parcel of being trans in the current climate is being forced to consider how my existence will negatively affect the willingly ignorant. I spent most of my teens attempting to assuage hostile criticism by conforming to ‘traditional’ masculinity, hating every part of myself that I thought was ambiguous and feeling guilty whenever I corrected anyone about my name. I’ve since found power in the confusion, in being undefinable by the straight and cisgender, and I refuse to apologise for that. Rykener is unknowable, and that’s okay. It's good that we don’t understand hir. Rykener’s power comes from hir undefinability, from the confusion ze created during the interrogation and for generations of historians. My transness now takes comfort in its inherent confusion instead of running from it, and will no longer be watered down for the comfort of others. Eliott Rose is currently completing a BA in History at the University of Oxford (Regent's Park College) and will go on to undertake an MA there next year. Notes: [1] Hir is a pronoun used to refer to a person of unspecified or non-binary gender, instead of ‘him’ or ‘her’. (Oxford Dictionary) https://www.lexico.com/definition/hir [2] Ze is a pronoun used to refer to a person of unspecified or non-binary gender, instead of ‘he’ or ‘she’. (Oxford Dictionary) https://www.lexico.com/definition/ze
- Review: Jalal Al-e Ahmad's 'Westoxification'
Introduction Since its initial publication in 1962, Jalal Al-e Ahmad’s ‘Westoxification’ has been adopted, co-opted, and distorted by generations of Islamic revolutionaries and terrorist organisations alike. Considering his role in sparking not only revolution in Iran, but events further afield too, this essay seeks to determine the extent to which Westoxification can be linked to anti-western sentiment and its products since 1979. Although the bloody consolidation of the Islamic Republic poses a challenge to historians seeking a ‘neutral’ reading of Al-e Ahmad,[1] when anachronistic moral judgement is avoided, the manipulation of his original theory is illuminated. While his ideas have been weaponised by radical figures from Ayatollah Khomeini to Osama bin Laden, a closer reading of his ideology undermines the suggestion that Al-e Ahmad was himself an Islamic fundamentalist. Instead, noting his modernist inclinations, resistance to nativism and tentative theological leanings, contemporary usage of Westoxification is best understood as an extremist interpretation of Al-e Ahmad’s initial ideas. The nuance lies, however, in recognising that such co-option has been enabled by the very nature of the theory itself. In his aversion to specificity and refusal to articulate a cure to the disease he diagnoses, the treatment for this sickness has been determined on his behalf. In discussion of Westoxification’s impact on Iran and on Islamic anti-westernism elsewhere, this essay seeks to distinguish between Al-e Ahmad’s original conceptions and the radical interpretations that have resulted from the malleability of his theory. While Westoxification succeeded in uniting the Iranian people against a common enemy, situating their struggles within a broader context of colonial suffering, his tendency towards generalisation has provided an ideological toolkit that has been wielded by Islamic extremists for decades.
Historiography and Context Three key areas of historiographical debate surround Al-e Ahmad’s Westoxification, all of which are useful in informing a discussion on the manipulation of his original theory. Reminiscent of the ideas of Bernard Lewis, Liora Hendelman-Baavur identifies Al-e Ahmad as an anti-modernist.[2] In opposition to this reductive narrative that conflates anti-westernism with anti-modernism, this essay draws on the arguments of Farzin Vadhat and Shirin S. Deylami.[3] Using their illumination of Westoxification’s support for modernist progress outside of western conceptualisation, Hendelman-Baavur’s conclusions can be categorised among those interpretations based on the nature of the Islamic Republic, as opposed to Al-e Ahmad’s theory as it was written. In a similar vein, and in reference to the work of Eskandar Sadeghi-Boroujerdi, this essay also rejects Abdollah Zahiri’s and Khalil Mahmoodi’s nativist readings of Westoxification. Although the regime that emerged in 1979 was indisputably nationalistic, suggesting Khomeini’s preoccupation with internal affairs to be a fair interpretation of Al-e Ahmad would be a failure to recognise both the wider context in which he wrote and the global impact of his ideas.[4] Finally, and perhaps most importantly, historians remain in staunch disagreement as to the extent to which the revolution can be seen as a direct realisation of Al-e Ahmad’s text. While this essay is resistant to Hamid Algar’s suggestion that Westoxification ultimately led to revolution,[5] it equally seeks to demonstrate that it was through Al-e Ahmad’s unification of Iranian society against a common enemy that Khomeini and his peers were able to seize hold of the revolutionary discourse towards decidedly Islamic ends. As such, although Dabashi is correct in stressing that Al-e Ahmad himself was far from an Islamist fanatic,[6] it remains undeniable that his theory has provided a platform on which their ideas may be grounded.
To justify this essay’s historiographical position, a discussion of context is necessary. Son of a cleric and former attendee of the Najaf seminary in Iraq, Al-e Ahmad was well-versed in theology. Despite his background and the impact of his writing on the Islamic world, scholarship dedicated to contextualising his ideas within the history of Islamic ideology remains underwhelming. While individuals like Ibn Taymiyyah, Sayyid Qutb and Hasan al-Banna belonged to the Sunni branch of Islam, the anti-westernism and favouring of Islamic revivalism documented in the Shi’ite Al-e Ahmad’s writings are equally apparent in their works. Whether in Qutb’s theory of Jahiliyyah and its opposition to tyrannical rule,[7] al-Banna’s resistance to the spread of sinful ideology by British occupiers in Egypt,[8] or Taymiyyah’s condemnation of the Kufr (unbeliever),[9] a distaste for westernisation links Al-e Ahmad to generations of Islamic thinkers. While it is difficult to determine his personal familiarity with these ideologues, regardless of his exposure to their teachings, there is value in situating Westoxification within this longue durée of Islamic ideology. Although doing so in no way serves to legitimise claims of an inherent anti-westernism in Qur’anic theory, it does enable an understanding of how Al-e Ahmad could be elevated to a position similar to that occupied by the radical Islamic thinkers who preceded him. In this, the scene is set for the eventual co-option of his ideas by extremist organisations that find legitimacy in the ideas of the individual. To resign Al-e Ahmad solely to the realm of religion, however, would be a failure to appreciate the influence of wider intellectual theory. Having lived under Pahlavi autocracy since his birth in 1923, he began to seek alternative outlets for his opposition to Iran’s decline under western-style leadership.
Experimenting with communism during his membership of the Tudeh Party in the 1950s,[10] authoring politically engaged fiction, and reading widely on matters of Third Worldism in the works of Frantz Fanon and Aimé Césaire,[11] Jalal Al-e Ahmad and his work are best understood as products of a multifaceted and pluralistic environment.[12] Beyond a simple recollection of fact, this exercise in contextualisation succeeds in situating the theory of Westoxification within a broader history of theology and intellectualism. While Al-e Ahmad may have failed to anticipate the reach of his theoretical musings, their vast impact on both Iranian political discourse and the nature of radical Islam comes as no surprise considering the historical moment in which they were conceived. Although this context serves to demonstrate that resistance to western infiltration was far from a revolutionary concept, it is exactly because of the popularity of these ideas that Al-e Ahmad’s summation of opposition to the west achieved such resonance. Westoxification and the Islamic Republic Over the course of its eleven chapters, Al-e Ahmad’s text diagnoses Iranian society and its inhabitants with a terminal illness: Westoxification. Although the term was coined by the philosopher Ahmad Fardid in the 1950s, it was Al-e Ahmad who solidified its place in popular discourse. Highlighting a crisis of authenticity in Iran, his multi-dimensional volume undermines historic Persian ‘jealousy’ of the west,[13] instead illuminating the devastating role it has played in the degradation of Iranian civilisation. Whether owing to alienation by Qajar grandeur or Pahlavi mimicry of western superpowers, this battle with superficiality and the absence of an authentic Iranian identity had been at the forefront of the Iranian psyche for decades.
When Al-e Ahmad wrote Westoxification (initially privately published and distributed clandestinely among friends and intellectuals),[14] he thus voiced the concern of generations with unprecedented fervour and focus. Noting a system of top-down autocratic modernisation,[15] Al-e Ahmad argued that consumer capitalism and its materialistic perspective had sparked the decay of humanity.[16] In framing his opposition in these economic, political, and arguably colonial terms, his arguments found resonance among everyone from secular intellectuals to religious clerics, uniting a society once segregated by differing ideological convictions. While it was his diagnosis that sparked revolutionary unity in Iran, however, it is by virtue of his failure to provide a cure to the disease that this enthusiasm was harnessed for decidedly Islamic results. Despite the merit of Gholam R. Vatandoust’s suggestion that Al-e Ahmad saw in ‘Shi’ism the vehicle to immunise society against the West’[17] (particularly considering his recognition of clerical importance and employment of Qur’anic verse towards the end of his text), his theological convictions remain tentative. Refusing to advocate explicitly for the imposition of an Islamic state while simultaneously noting the desirable authenticity of faith, Al-e Ahmad sparked an open-ended discourse that enabled Ruhollah Khomeini and his associates to prescribe a religious regime as the cure to Iran’s western ailments. Drawing on Al-e Ahmad’s generalised discussion of colonial oppression, Khomeini utilised Westoxification’s Third Worldist allusions and undefined receptivity to a theological solution to his advantage.
While Westoxification may have ‘opened the road’ for a return to an authentic Iranian self, it was Khomeini who welcomed the responsibility of determining its nature, positioning Islam as a symbol of Iranian identity that offered the only defence in the battle between mostakbarin and mosta’zafin (oppressors and oppressed).[18] In addition to its discussion of symptoms, Westoxification also identifies the diseased. Criticising all Iranians, from intellectuals to everyday consumers, Al-e Ahmad draws on Fanon’s notion of colonised mentalities to demonstrate that individuals have succumbed to a life without ‘belief or conviction.’[19] Although he employs an economic framework, noting the western ‘machine’ as the embodiment of its infiltration (an approach likely linked to his early communist sympathies), Al-e Ahmad is careful to stress the extent to which the infection has spread. With much of society deemed little more than imitations devoid of substance, the convictions of Ayatollah Khomeini (illuminated in his decisive campaign against the ‘Great Satan’) appeared as a direct and radical answer to Al-e Ahmad’s call for authenticity. To a community supposedly bereft of principle, this extreme approach appeared a desirable antithesis, at least when considering the desperation of Al-e Ahmad’s pleas. As such, by virtue of its tentative Islamic musings and convincing identification of a problem requiring an immediate solution, Westoxification provided both a catalyst and a justification for Khomeini’s Islamic Republic.
A Global Perspective: Westoxification and Islamic Terrorism In 2001, the Taliban wielded the term ‘Westoxification’ as a label of condemnation against those who disagreed with their regime in Afghanistan.[20] Within ISIS, owing to a belief in the necessity of violently severing Westoxified associations, beheadings have been utilised to initiate European-born recruits.[21] In Nigeria, Boko Haram have established an entire identity founded on resistance to the Westoxification of society and its culture of corruption.[22] As such, although Khomeini’s religious repurposing of Westoxification in 1979 remains the most significant example of its radical interpretation, the link between Al-e Ahmad’s theory and Islamic extremism is extensive. Through an examination of three key organisations, this section of the essay thus seeks to demonstrate the weaponisation of Westoxification by modern terrorist organisations, a process enabled by the malleability of Jalal Al-e Ahmad’s theory. Owing to its links to the Lebanese terrorist organisation Hizb’allah, in 1984 the United States designated the state of Iran as a sponsor of terrorism. Sharing a conviction in the perils of Westoxification and equating the revolution of 1979 with a successful realisation of anti-western sentiment, Hizb’allah has consistently looked to Iran for both practical assistance and ideological inspiration.[23] Stating in their 1985 open letter a desire to ‘put an end to any colonialist entity’ in Lebanon, the organisation has claimed Khomeini as their ‘tutor and faqih’.[24] While there is little scholarship exploring Hizb’allah’s familiarity with Al-e Ahmad specifically, their advocacy of the Islamic Republic’s ardent resistance to western infiltration (a sentiment grounded in the theory of Westoxification) demonstrates the elevation of his ideas to a position of reverence among Islamists seeking justification for their hostility towards western targets.
This is further evidenced in the relationship between Westoxification and Al-Qaeda. First and foremost, the terrorist organisation shares with Al-e Ahmad an opposition to the west. In a 2011 issue of their propagandistic ‘Inspire’ magazine, the group applauds Iran’s success in conjuring a ‘rallying call’ for Muslims based on resistance to American aggression.[25] Although Westoxification is not noted by name, a contextual reading of Iranian history serves to demonstrate that this resistance was at least in part a product of Al-e Ahmad’s writing. Considered alongside Osama bin Laden’s admiration for Iranian revolutionaries’ use of conflict with the west as a ‘Trojan horse’ for radical Islam, the role of Westoxification in providing a foundation for extremist ideology becomes clear.[26] Al-Qaeda’s conviction in the obligatory nature of jihad also derives inspiration from Al-e Ahmad. Reminiscent of his identification of westernisation in both internal (that is, Iran’s population) and external elements of society, bin Laden has expressed vocal support for both internal and external jihad.[27] In Al-e Ahmad’s resistance to both submission and nativism as responses to western infiltration (his book criticises those who simply ‘turn inwards’), links can be made to Al-Qaeda’s desire to spread the messages of Islamic extremism beyond national borders. Although their understanding of Westoxification has distorted Al-e Ahmad’s condemnation of western mimicry into a representation of westernisation as something inherently evil, it is by virtue of the text’s susceptibility to interpretation that such radical manipulations are made possible. While both Al-Qaeda and Hizb’allah have utilised Al-e Ahmad’s theory as justification for their attacks on perceived external targets, Egyptian Al-Jihad was more focused on Westoxification’s legitimisation of radical action against tyrannical rule.
Before his assassination in 1981, President Anwar Sadat embarked on a programme of westernised reform not dissimilar to that undertaken by the Shah in Iran. Identified as a prime example of Jahili rule by Al-Jihad’s ideological theorist Muhammad abd-al-Salam Faraj, Sadat might also be considered an embodiment of the diseased ‘occidentotic leader’ Al-e Ahmad condemns in his text.[28] As such, although the writings of Sayyid Qutb remain the primary source of inspiration for the Egyptian terrorist group, the links between their condemnation of apostate leaders and Al-e Ahmad’s opposition to the Pahlavi regime are clear. Considered alongside Sadat’s well-evidenced opposition to Khomeini and support for the Shah, Al-Jihad’s weaponisation of Al-e Ahmad’s theory in pursuit of a change in leadership similar to that which took place in Iran becomes apparent.[29] These examples are not to suggest that Al-e Ahmad is the sole ideological inspiration for Islamic terrorism. In most instances, Westoxification’s theoretical resistance to western mimicry has been radically distorted by extremists for purposes far beyond what Al-e Ahmad might have anticipated. Despite these manipulations, however, in situating his theory within a wider history of radical Islamic ideology and considering the inspiration his anti-western sentiment has provided for organisations seeking justification for their hostility, the relationship between Westoxification and religious extremism is demonstrated. Jalal Al-e Ahmad: Misunderstood? Having exemplified how Westoxification impacted both the 1979 revolution and the ideology of Islamic terrorist organisations since, this section of the essay seeks clarification. Although Al-e Ahmad’s theory certainly lent itself to radical interpretation, a closer reading of his text serves to illuminate the exact manner in which his ideas have been distorted.
While scholars such as Liora Hendelman-Baavur have identified Al-e Ahmad as an anti-modernist (a label based on the nature of the Islamic Republic), much of Westoxification is in line with the tenets of modernism. Given his reluctance to vilify the products of the west entirely,[30] his work is better read as a critique of the loss of subjectivity in Iran, rather than of its shift towards modernism. While Khomeini and other Islamic fundamentalists tended to reject westernisation in its totality, Al-e Ahmad sought a world in which it might be utilised to Iranian advantage.[31] Similarly, and in contrast to the decidedly nationalistic sentiments of the Republic’s first Supreme Leader, a nativist reading of Westoxification is also misguided. Criticising those who ‘retreated into the shell of a national state’,[32] Al-e Ahmad opposed the kind of closed-minded approach adopted by organisations like Hizb’allah. As such, although it is perhaps unsurprising that his ardent opposition to westernisation has been translated as advocacy of ‘eastern’ nationalism, such an interpretation exists as another distortion of Al-e Ahmad’s original theory. Finally, and perhaps most importantly, the extent to which Westoxification can be considered a theological document is debatable. While its ideology finds resonance in Islamic doctrine, Al-e Ahmad is adamant in his criticism of religious ‘superstition’ and outdated tradition.[33] While such discussion remains undeveloped, this critique raises questions regarding his confidence in clerical suitability for revolutionary leadership.[34] In this, more so than anything else, the ultimate contradiction of Westoxification is illuminated.
Whether deemed a fatal flaw by those who object to the Islamic Republic and the extremism it has inspired, or applauded as its greatest asset by Islamists around the world reconciled to a single cause, Westoxification’s failure to articulate a specific solution to the disease it diagnoses instils in its readers an unavoidable onus of interpretation. Conclusion Aptly characterised by Sadeghi-Boroujerdi as ‘insensitive, slapdash, imprecise and polemical’,[35] this is not, despite Al-e Ahmad’s intellectual background, a text that boasts scholarly refinement. However, while the book is riddled with historical inaccuracies, half-hearted conclusions (‘please don’t ask me to go into details’),[36] and confusingly elaborate metaphors, it is precisely in the context of these ambiguities that the text’s impact on Iran and on Islamic organisations further afield might be understood. In its convincing identification of a disease terminal to the authenticity of society, Al-e Ahmad captured the imagination of generations of revolutionaries resistant to western encroachment. Inspired by his condemnation of western materialism and its products, organisations seeking justification for their anti-western ideology have found in Westoxification an ideological foundation since the 1960s. With a decisive articulation of a cure to this disease notably lacking from Al-e Ahmad’s work, however, individuals from Khomeini to bin Laden have been able to situate the theory within a wider Islamic framework. Although closer examination of the text serves to demonstrate that these radical interpretations are something of a distortion of Al-e Ahmad’s original ideas (particularly considering his modernist inclinations and cautious religious sentiment), the impact of a theory so susceptible to distortion on the nature of Islamic anti-westernism is undeniable.
As such, Westoxification is best considered as something of a lieu de mémoire: a significant and symbolic site of memory visited, utilised, and ultimately radicalised by generations of Islamists around the world. Harriet Solomon is currently pursuing an MA in Modern History at the London School of Economics. Notes: [1] Eskandar Sadeghi-Boroujerdi, ‘Review: The Last Muslim Intellectual: The Life and Legacy of Jalal Al-e Ahmad, Hamid Dabashi (Edinburgh: Edinburgh University Press, 2021)’, Iranian Studies (2022), p. 1. [2] Liora Hendelman-Baavur, ‘The odyssey of Jalal Al-Ahmad’s Gharbzadegi – Five decades after’ in Kamran Talattof, Persian Language, Literature and Culture (London: Routledge, 2015), p. 261. [3] Shirin S. Deylami, ‘In the Face of the Machine: Westoxification, Cultural Globalisation, and the Making of an Alternative Global Modernity’, Polity, Vol. 43, No. 2 (2011), p. 244. [4] Eskandar Sadeghi-Boroujerdi, ‘Gharbzadegi, colonial capitalism and the state in Iran’, Postcolonial Studies, Vol. 24, No. 2 (2021), p. 174. [5] Hamid Algar, ‘Introduction’ in Jalal Al-i Ahmad, Occidentosis: A Plague From the West (Berkeley: Mizan Press, 1984), p. 8. [6] Hamid Dabashi, The Last Muslim Intellectual: The Life and Legacy of Jalal Al-e Ahmad (Edinburgh: Edinburgh Scholarship Online, 2021), p. 46. [7] Joshua J. Yates, ‘The Resurgence of Jihad and the Specter of Religious Populism’, The SAIS Review of International Affairs, Vol. 27, No. 2 (2007), p. 133. [8] Ran A. Levy, ‘The idea of jihad and its evolution: Hasan al-Banna and the society of Muslim Brothers’, Die Welt des Islams, Vol. 54, No. 2 (2014), p. 154. [9] Ibn Taymiyyah, The Religious and Moral Jihad (Birmingham: Maktabah Al Ansaar, 2001), p. 9. [10] Farzin Vadhat, ‘Return to which Self? Jalal Al-e Ahmad and the Discourse of Modernity’, Journal of Iranian Research and Analysis, Vol. 16, No. 2 (2000), p. 61.
[11] Eskandar Sadeghi-Boroujerdi, ‘Gharbzadegi, colonial capitalism and the state in Iran’, Postcolonial Studies, Vol. 24, No. 2 (2021), p. 180. [12] Dabashi, The Last Muslim Intellectual, p. 281. [13] Jalal Al-e Ahmad, Occidentosis: A Plague From the West, translated by R. Campbell (Berkeley: Mizan Press, 1984), p. 43. [14] Hamid Dabashi, Theology of Discontent: The Ideological Foundation of the Islamic Revolution in Iran (New Brunswick: Transaction Publishers, 2006), p. 76. [15] Mehdi Faraji and Ali Mirsepassi, ‘De-Politicizing Westoxification: The Case of Bonyad Monthly’, British Journal of Middle Eastern Studies, Vol. 45, No. 3 (2018), p. 358. [16] Al-e Ahmad, Occidentosis, p. 133. [17] Gholam R. Vatandoust, ‘Review: Occidentosis: A Plague from the West (Contemporary Islamic Thought Series)’, Middle East Studies Association Bulletin, Vol. 19, No. 2 (1985), p. 237. [18] Vadhat, ‘Return to which Self?’, p. 67. [19] Al-e Ahmad, Occidentosis, p. 94. [20] Ahmad Rashid Salim, ‘The Taliban vs Global Islam: Politics, Power and the Public in Afghanistan’, Berkley Center (2021). [21] Robert J. Bunker and Dave Dilegge, Jihadi Terrorism, Insurgency and the Islamic State: A Small Wars Journal Anthology (Bloomington: XLIBRIS, 2017). [22] Suranjan Weeraratne, ‘Theorising the Expansion of the Boko Haram Insurgency in Nigeria’, Terrorism and Political Violence, Vol. 29, No. 4 (2017). [23] Daniel Byman, Deadly Connections: States that Sponsor Terrorism (Cambridge: Cambridge University Press, 2012). [24] ‘The Hizballah Program: An Open Letter’ (1985), The Jerusalem Quarterly (1988) [Accessed 16 March 2022]. [25] Abu Suhail, ‘Iran and the Conspiracy Theories’, Inspire Magazine, Fall, Vol. 1432, No. 7 (2011). [26] Michael Doran, ‘The Pragmatic Fanaticism of al Qaeda: An Anatomy of Extremism in Middle Eastern Politics’, Political Science Quarterly, Vol. 117, No. 2 (2002), p. 184. [27] Yates, ‘The Resurgence of Jihad and the Specter of Religious Populism’, p. 134.
[28] Al-e Ahmad, Occidentosis, p. 93. [29] Saad Eddin Ibrahim, ‘Anatomy of Egypt’s Militant Islamic Groups: Methodological Note and Preliminary Findings’, International Journal of Middle East Studies, Vol. 12, No. 4 (1980), p. 438. [30] Deylami, ‘In the Face of the Machine’, p. 250. [31] Vadhat, ‘Return to which Self?’, p. 67. [32] Al-e Ahmad, Occidentosis, p. 74. [33] Ibid., p. 73. [34] Homa Omid, ‘Theocracy or democracy? The critics of ‘westoxification’ and the politics of fundamentalism in Iran’, Third World Quarterly, Vol. 13, No. 4 (1992), p. 677. [35] Sadeghi-Boroujerdi, ‘Gharbzadegi, colonial capitalism and the state in Iran’, p. 185. [36] Al-e Ahmad, Occidentosis, p. 79.
- Science as a Tool for Creating ‘Others’ Within European Societies
During the period between the late nineteenth century and the end of the Second World War, science became a tool for categorizing and hierarchizing people of different races, phenotypes, and hereditary qualities in Europe. Reliance on science as a mechanism for ‘purifying’ and ‘fixing’ society increased significantly after the First World War, and radical measures and practices were consequently adopted by states. This essay will demonstrate the strong influence of science in creating ‘others’ within European societies. It will do so by examining two nation-states and their relation to science as a means of constructing an ideal society: Romania’s homogenization efforts and the Roma, and Italy’s Lombrosian mentality. After the First World War and the creation of Greater Romania, modernization as well as political and socio-economic improvement became two vital issues. Powerful Western European states had (in Romanian eyes) one thing that Romania did not: a ‘homogenized’ population. In Romania, ‘After 1918, with the doubling of the country’s territory and population, the proportion of ethnic minorities in the total population rose to almost thirty per cent.’[1] With uniformity and national strength as the logic behind homogenization, Romania’s strategy became ‘homogenizing’ its ethnically mixed population, thereby agreeing to adopt ‘…central state uniformity and minority-hostile strategies of homogenization…’[2] This was achievable with the construction of organizations and institutes, making laboratory work possible. Newly established organizations and institutes encouraged eugenic research, enabling the discovery of the varieties and numbers of minorities in Romania. Implicitly, this strengthened discriminatory ideas. Among the first of these establishments was the Institute for Hygiene and Social Hygiene at the University of Cluj.
During the early 1930s, the institute ‘… undertook serial genetic examinations of about 17,000 individuals from different ethnic groups in Transylvania’, and aimed to expose data concerning the ethnic position of the Szeklers – a Hungarian minority.[3] Results of the serological analyses suggested that while Szeklers were assimilable, other Hungarian minorities were not. The fundamental determinant behind this result lies in the theory that the Szeklers were Magyarised Romanians, meaning that they were of Romanian and Turkic descent. Given that during the interwar years ‘… 500,000 Szeklers constituted more than eighty per cent of the total population of districts of Ciuc, Odorhei and Trei Scaune, and over forty per cent of the population of the neighbouring district of Mures’, sensation around the topic heightened.[4] To illustrate, a Geography professor at Cluj, Sabin Oprenau, proposed potential assimilation strategies for the Szeklers, such as ‘adduced folklore, place names, and a supposed ethno-psychological proximity between Szeklers and Romanians…’[5] For unassimilable Hungarian minorities, Sabin Manuila, director of the Romanian Central Statistical Institute, advised ‘cross-border population exchanges and new Romanian settlements’ in pursuit of the ‘extermination’ of these people.[6] Another contributing organization was ASTRA, a Transylvania-based Romanian nationalist and cultural association. In 1926, ASTRA aided the annexation of a Department of Bio-Politics and Eugenics to the Cluj Institute of Hygiene, which generated strategies for ethno-political measures and the legitimization of Romanian political leadership in districts dense with Szeklers.[7] This points out that organizations and institutes were in close relation, serving each other as a support mechanism. Not to mention, the organizations’ ability to manipulate university departments displays their power within the education system.
Furthermore, ASTRA supported researchers with their studies too; in 1924, Sabin Manuila and Gheorghe Popoviciu, a Professor of Paediatrics and Pharmacology at Cluj, led serial racial tests on ‘… 2,512 Romanians, Hungarians and members of other Transylvanian ethnic groups.’[8] Manuila and Popoviciu relied on serological analysis – a method thought to provide precise results on race determination ‘… through isoagglutinin reactions of the human blood.’[9] This research further supports the claim made earlier, which is that organizations and institutes fuelled ongoing research, simultaneously normalizing the categorization of peoples according to their ethnicities and racial backgrounds. Acquired through science, new data on different minorities and their numbers illustrated the urgency of homogenizing the society. Among the minorities targeted as ‘obstacles’ were the Roma; several studies were conducted, Ion Chelcea’s field study being a preeminent one. Chelcea’s research covered different Roma tribes in 63 villages, concentrating on population size and geographical distribution. The study treated sedentary Roma groups and ‘underdeveloped’ nomadic groups separately.
This revelation combined previous biological ideas – one being Iordache Facaoaru’s claim that, compared to Aryans, the Roma were mentally less intelligent and physically less strong – with sociological notions such as profession and social class, socio-economic integration, and gradation of linguistic assimilation.[10] According to these inferences, Chelcea determined the ‘types’ of assimilable Roma by disintegrating the minority into three ‘others’: the decision was that the Corturari were unassimilable nomads while the sedentary Rudari and Tigani de Sat were differently assimilated groups.[11] Following this, Chelcea’s ‘taxonomy of lifestyle’ suggested the legitimization of discriminatory population policies and criminalization for better ‘public health.’[12] These baseless claims demonstrate one thing: the justification of discrimination through science. By doing so, many researchers like Chelcea drew attention to the ‘question of Roma’ and labelled them as the ‘others’ of Romania, suggesting the kind of measures that should be taken against them. The Romanian government took the researchers’ inputs as a chance to start homogenizing Romania by persecuting the Roma. Efforts were first realized during Ion Antonescu’s leadership between 1940 and 1944. Anti-Roma measures were first discussed on 7 February 1941 by the Council of Ministers.[13] The council mainly discussed ethnopolitics, hence Antonescu proposed ‘… a forced transfer of Bucharest Roma to the Baragan plain’, which was carried out in June 1942. It is known that 25,000 Roma were deported to Transnistria; consequently, ‘… Transnistria became to Romania much what the General Government was to the Third Reich: the place for executing racial policy.’[14] In fact, ‘… Transnistria was not a Romanian territory and was not to become one…’[15] This meant that Transnistria was considered suitable for keeping the Roma.
Simply, the ‘undesired’ Roma population was ‘exterminated’ by being kept in an enclosed territory that was a land for ‘others.’ This solution sought homogenization, modernization, and political and socio-economic stability for Romania. The relation between nation and state was one of scientific knowledge, having its foundations in biology. This belief fostered mutual support among organizations, institutes, and the state in the matter of homogenizing the population and creating ‘others.’ A similar example of utilizing science to reconstruct society was carried forth by Cesare Lombroso, a criminologist of the late nineteenth and early twentieth century, who sought to identify, document, and contain criminals ‘lurking’ among the public. His approach to criminal anthropology influenced Italy remarkably, the most prominent examples being scientific policemen and the 1930 Penal Code. Lombroso argued that it was possible to predetermine criminals by inspecting certain physical and psychological elements. Lombroso specifically relied on craniology as a physical element. For instance, Lombroso examined the thief and arsonist Giuseppe Villella’s skull and discovered something unusual: ‘… on the occipital part, where a spine would normally be found on a human skull, there was, instead, a distinct anomaly that he called the median occipital fosetta.’[16] Lombroso claimed that this anomaly was hereditary and indicated that the person was a born criminal. Similarly, in the first chapter of his book Criminal Man, Lombroso supports his craniological arguments with a table.[17] On the table, Lombroso noted the following: province, name, age, crime, and circumference of the cranium. This detail suggests that location and age were considered relevant to the investigation, hence associating them with criminals. Lombroso determines that the fundamental difference lies in the fact that most criminals have a smaller cranial circumference than that of ‘normal’ people.
These examples illustrate that, at the time, one condition that triggered the othering of people in Italy was heredity. Psychological characteristics are split into verbal and non-verbal (mainly body-language) manifestations.[18] Lombroso collected and analysed prisoners’ writings – mostly proclamations, poems, and even signatures – and published them in Prison Palimpsests. Recurring subjects in these writings are as follows: ‘… crime committed, sex, religion, prison, and revenge.’[19] The result of his analyses outlines the ‘criminal type’: ‘… egocentric, detached from others, vain, vengeful, and deceptively religious.’[20] Lombroso further explains that criminals do not speak the same language as truthful men because they feel different, and thus speak differently. The best example of non-verbal manifestations is tattoos: Lombroso argued that tattoos were common among criminals, for whom they were a means of communication. To demonstrate, the first edition of Criminal Man included a picture of ‘… a prisoner displaying several tattoos, including snakes, an emblem of Savoy on his penis and crossed daggers surrounded by the motto “I swear to revenge myself” on his chest.’[21] This example encapsulates both verbal and non-verbal characteristics. Briefly, the Lombrosian mentality seeks to distinguish criminals within society by relying on a list of indicators categorized as physical and psychological characteristics. This rationale serves as a ‘rulebook’ for ‘hunting down’ and ‘capturing’ marginalized people. To operationalise this rationale, designed to fit marginalised people, Lombroso invented the ‘scientific police.’ The scientific police became an integral part of Italy’s effort to separate criminals from society.
Aiming to use scientific tools such as photography and telegraphy combined with knowledge of the criminal man, scientific policing became popular through the establishment of the School of Scientific Policing in 1907 by Salvatore Ottolenghi.[22] Close proximity to prisons enabled students to study prisoners and analyse fingerprints and mugshots. This procedure used prisons as an enclosed space for ‘experimenting’ with prisoners. The juxtaposition of ‘honest’ men and prisoners further reinforced the marginalization of criminals. The ‘Ottolenghi method’ of anthropo-biographical cards, developed in 1902, covered ‘… the body, the psyche, and the past history of the criminal…’[23] These cards were used to better identify criminals. To demonstrate, Ottolenghi claimed that ‘fighting criminality’ should line up with ‘… those methods which have triumphed in the treatment of the insane (…) and have proved a marked success in animal breeding and even the taming of wild beasts.’[24] This comment associates criminals with animals, advises medical treatment, and interferes with the management of these people. The essence of the ‘Ottolenghi method’ lies in the interaction between social, political, and scientific aspects: a coalition of sorts between politics and science in order to control and contain society. In short, scientific policing was the Lombrosian mentality ‘in flesh’, tackling undesired individuals according to their biological and psychological traits. Moreover, this rationale influenced the 1930 Italian Penal Code: the code uses ‘suspiciousness’ to measure dangerousness. Article 203, concerning social danger, states that a person is socially dangerous when it is probable that they will commit an action opposing criminal law – even one that is not attributable or punishable.[25] Similarly, the title of the second book of the code is ‘Crimes against the Personality of the State.’[26] Both examples indicate that a criminal is whoever the state considers a threat to itself.
In short, Lombrosian logic was used in parts of the system to ensure that dangerous ‘others’ were kept away from the rest. To conclude, from the late nineteenth century to 1945, European societies turned to science to ‘fix’ anomalies by othering certain groups. From eugenics to craniology, and from serology to criminal anthropology, academics found ways to categorize those who were distinct in looks, thoughts, and origins. Combined with politics, the marginalization of peoples became a procedure practiced on two scales: by nation-states and by researchers. Consequently, science became a support mechanism for constructing an ideal society. Duru Akin has just completed her first year of a BA in English Literature and History at Durham University (College of St Hild and St Bede). Notes: [1] Michael Wedekind, ‘The Mathematization of the Human Being: Anthropology and Ethno-Politics in Romania During the Late 1930s and Early 1940s’, New Zealand Slavonic Journal, p. 28. [2] Ibid. [3] Idem, p. 32. [4] Ibid. [5] Idem, p. 33. [6] Ibid. [7] Idem, p. 34. [8] Ibid. [9] Ibid. [10] Idem, pp. 46-48. [11] Ibid. [12] Ibid. [13] Idem, p. 42. [14] Viorel Achim, The Roma in Romanian History (Budapest: Central European University Press, 2004), p. 182, and Wedekind, ‘The Mathematization of the Human Being: Anthropology and Ethno-Politics in Romania During the Late 1930s and Early 1940s’, p. 50. [15] Achim, The Roma in Romanian History, p. 184. [16] Emilia Musumeci, ‘Against the Rising Tide of Crime: Cesare Lombroso and Control of the “Dangerous Classes” in Italy, 1861-1940’, Crime, History & Societies, p. 86. [17] Cesare Lombroso, ‘Criminal Craniums (Sixty-six Skulls)’, Criminal Man, 2nd ed. (New York: Duke University Press, 2007), pp. 46-47. [18] Musumeci, p. 88. [19] Idem, p. 89. [20] Idem, p. 88. [21] Idem, p. 89. [22] Idem, p. 91. [23] Idem, p. 92. [24] Salvatore Ottolenghi and Victor von Borosini, ‘The Scientific Police’, Journal of the American Institute of Criminal Law and Criminology, p. 877.
[25] Articolo 203, Pericolosità Sociale, Codice Penale Italiano (1930). [26] Giulio Battaglini, ‘Fascist Reform of the Penal Law in Italy’, Journal of Criminal Law and Criminology, p. 278.
- New Latin American Cold War Historiography and the coups of Guatemala in 1954 and Chile in 1973
Recent Latin American Cold War historiography attempts to transcend scholarship of the 80s and 90s that tethered the region’s Cold War tribulations to panoramic causes like the sway of ideology or the whim of omnipotent states (the USSR and the US) looking to reify incompatible economic systems. Gilbert M. Joseph divides that scholarship into two camps: the “realists” concerned with geopolitical strategy, and the New Left or “revisionists,” who granted causal supremacy to the US and its wish to expand liberal capitalism.[1] That the older historiography can be so neatly divided is in itself an indictment of its capacity for elucidating the panoply of non-state actors, overlapping conflicts and multi-faceted movements that made up Latin America’s Cold War. There is an irony that pervades the New Left literature which, in pillorying US ventures into Latin America, and thereby hallowing the sovereignty of non-hegemonic states, actually manages to divest the region’s people of the very agency it seeks to defend, if only on a theoretical level. Hence Vanni Pettinà’s assertion that the old historiography amounts to an “appendix” of US history.[2] This essay will cross-reference the new Latin American Cold War literature with the histories of two seminal events of the period: the 1954 Guatemalan coup and the 1973 Chilean coup. But such a division will prove specious: each resulted from developments long predating its occurrence, rendering the dates of 1954 and 1973 mere chronological conveniences. Furthermore, these developments were regional and transnational, so that isolated reference to Guatemala and Chile would have been incomplete. Indeed, the two events actually bled into each other as well as other timelines and territories. Section 1 will therefore be loosely centered on Guatemala in 1954 and section 2 will be loosely centered on Chile in 1973.
Section 1

Odd Arne Westad asks the question “Why did the United States intervene in the Third World as often as it did during the Cold War?”, endowing US officials, in his answer, with patronizing aspirations: out of responsibility for the capitalist model, a need to eradicate Communism, and a desire to make the Third World “more like America.”[3] Carlotta McAllister draws on “Modernization Theory” to attribute to US officials a hankering for the “theoretical space of the perfect market system” in the Third World.[4] If US officials felt a prime compulsion to spread their country’s economic model into Latin America, then, as McAllister notes, military intervention can be rationalized as a necessary evil. How to account, then, for the divergent policies undertaken against the 1952 Movimiento Nacionalista Revolucionario (MNR) in Bolivia and the Jacobo Árbenz government in Guatemala? In a historical review of covert actions against Árbenz (operations PBFORTUNE and PBSUCCESS), the CIA History Staff outlined its agency’s motivations: concern over growing Communist influence within the Guatemalan government, agrarian reform which redistributed the United Fruit Company’s (UFC) land to peasants and local workers, and the possibility that Guatemala could become a Soviet client state.[5] CIA concerns were encapsulated by Decree 900, introduced by Árbenz in 1952, which redistributed land in usufruct from landowners, of which the UFC was the largest, to tenant farmers, sharecroppers and agricultural laborers. Particularly concerning was José Manuel Fortuny, a member of the Communist-affiliated Guatemalan Workers’ Party to whom Árbenz had given a prominent role in writing the Decree.[6] By the fall of 1953, Árbenz had legalized the Guatemalan Communist Party, spurring US officials to authorize operation PBSUCCESS which, unlike its forerunner, PBFORTUNE, concluded with a successful coup in June 1954.
Both operations were comprehensive and included providing arms to anti-Communist Guatemalan exiles led by officer Castillo Armas, psychological scare tactics like sending “death notice” cards to key Communists, and compiling hit lists of Guatemalan leaders to be assassinated by Castillo- or Trujillo-led assassins, though no assassinations were carried out at the behest of the CIA.[7] PBSUCCESS’ avowed goal was “to remove covertly, and without bloodshed if possible, the menace of the present Communist-controlled government of Guatemala.”[8] It is remarkable that similar fears did not materialize among US officials over Víctor Paz’s presidency in Bolivia. Judging from the motivations for covert action against Árbenz, the Bolivian revolution of April 1952 would seem ripe for similar treatment. The MNR had ideological ties to Communism, garnering support from local Marxists and labour unions, promising particular reforms to the pro-Soviet Partido Comunista Boliviana in exchange for its support, and boasting a president who had espoused an historical materialist outlook. Similarly to Decree 900, the MNR introduced widespread land reform and nationalized tin mines that threatened US business interests, leading the Reconstruction Finance Corporation (RFC) to drop out of contract negotiations for purchasing Bolivian tin in March 1953.
Finally, there existed a palpable threat that the MNR could establish a working relationship with the Eastern Bloc as, on the heels of the RFC’s snub, Paz announced that his government would pursue a trade deal with Czechoslovakia.[9] Prior to the MNR’s revolution, the US accounted for two-thirds of Bolivian exports and wielded a great influence over the global tin market, a fact well-known to Bolivians, who had complained about the US’ manipulating markets to bring down global tin prices.[10] US officials therefore had the economic leverage to cripple the Bolivian economy and, as US Ambassador to Guatemala Rudolph Schoenfeld wished to do to the Árbenz regime, bring the MNR “to a realization that they were dependent upon the United States and that if they expected assistance or consideration from the United States it behooved them to adjust their actions vis-à-vis the United States accordingly,” with the bonus of being able to do so without bloodshed.[11] Instead, the Eisenhower administration decided to extend aid to Bolivia and prevent economic instability. Both policies shared the goal of stunting Communist influence, only in Bolivia’s case US officials reached the more benign conclusion that the MNR’s revolutionary fervor had been fostered by “the rapid degeneration of the Bolivian economy,”[12] and that helping the Bolivian economy was the best course. US officials thus displayed a willingness to compromise with a revolutionary government and to prop up a potential Soviet client state, undermining the generality of attributions of a capitalist, missionary zeal. Kenneth Lehman claims that the difference was due to US officials’ interpreting the Bolivian case through a situational lens and the Guatemalan case through a dispositional one.
Whereas the strongly anti-American rhetoric employed by Guatemalan officials to rationalize redistributing UFC lands was deemed a threat to US hegemony, President Paz’s willingness to negotiate and the lack of a similarly antagonistic voice behind Bolivian policy allowed US officials to understand that country’s material circumstances more objectively.[13] Yet this argument is too unidirectional, analyzing US relations with each country in isolation and therefore endowing the US with an exaggerated omnipotence. In fact, the decision to invade Guatemala had much to do with situational factors that presented themselves independently of US actions, as evidenced by Aaron Coy Moulton’s work on transnational Caribbean networks in the lead-up to the 1954 coup. In the 1940s, dictators Anastasio Somoza in Nicaragua, Tiburcio Carías in Honduras, and Rafael Trujillo in the Dominican Republic took note of a transnational chain of Guatemalan and Venezuelan exiles, students, journalists and politicians who lobbied against and wrote condemnatorily about the so-called “remaining” fascist Caribbean dictators, having been emboldened by the Guatemalan Revolution that overthrew General Jorge Ubico in 1944. Somoza, Carías and Trujillo responded in kind, undertaking a conspiratorial, anti-Communist propaganda campaign and supporting militant exiles associated with the deposed regimes of Ubico and Eleazar López Contreras in Venezuela.[14] That the anti-Communist network was organized under US noses is exemplified by Somoza and Carías falsely denying to US officials that they were funding exiles to conspire against Juan José Arévalo in Guatemala and Rómulo Betancourt in Venezuela.
When US officials urged Somoza to cease participating in the Costa Rican Civil War of March 1948, Somoza explained that he had to remain loyal to Trujillo and Carías in their fight against Communism, prioritizing the network’s interests over those of the US.[15] By the time US interests aligned with those of Somoza, Trujillo and Carías in wanting to overthrow Árbenz, the counter-revolutionaries had already laid the framework that allowed the CIA to initiate PBFORTUNE, setting up an intelligence- and arms-sharing network to which the CIA could provide further arms and funds. That situational factors were set up for a US-sponsored coup is symbolically epitomized by the fact that PBFORTUNE began after Somoza approached the Truman administration in 1952.[16] The network’s independence was begrudgingly apparent to US officials at the time, as the CIA’s intended discretion was superseded by the network’s history of knowledge sharing, which led to widespread dissemination of the US’ intent to overthrow Árbenz and coordination among militant groups in preparation for the event. About half a year after the Guatemalan coup, Calderonista rebels invaded Costa Rica with the help of Somoza, Trujillo and Castillo Armas’ militias against US officials’ wishes, doing so with the very weapons provided by the CIA for the Guatemalan coup.[17] The Guatemalan coup would have consequences for US-Chilean relations that went beyond US intentions. While operation PBSUCCESS was functioning in the background, US representatives at a meeting of the Organization of American States (OAS) in March 1954 worked to rally other member nations into ratifying anti-Communist codifications. 
The proposed resolution called for OAS members to intervene militarily should any American state cede its political institutions to Communism.[18] Despite the country’s affirmative vote, backlash to the resolution in Chile was extensive, drawing in Socialist members of Chile’s Chamber of Deputies, who grouped themselves into the “Friends of Guatemala,” and future president and Christian Democrat Eduardo Frei. But the most noteworthy opposition came from then-Senator Salvador Allende, who became one of the principal targets of a hitherto dormant fear of Communism in Chile on the part of US officials. If the goal was to stunt Communism in Latin America, then US intervention in Guatemala proved counterproductive, at least as regards US officials’ perceptions. As Mark T. Hove notes, US fears in Chile before the OAS meeting were directed at President Carlos Ibáñez del Campo, who had ruled dictatorially in 1927-1931 and stylized himself as a Peronist populist during his successful 1952 bid. At the time, US officials welcomed Allende as a stopgap to Ibáñez and as a candidate who could take away the communist vote while remaining, nonetheless, an “uncompromising foe of communism.”[19] But the tone changed when Allende avowed himself a defender of Guatemalan sovereignty, participating in protests that reached their crescendo on 20 June 1954, three days after the Guatemalan coup, when Chilean protesters burned a US flag and an effigy of Eisenhower.[20] According to US Embassy officers in Chile, the Guatemalan coup “provided the [Chilean] communists with an issue,” which in turn gave US officials reason to fear that Chile might become the next Guatemala.[21]

Section 2

William A.
Booth has written about the overlapping conflicts that defined Latin America’s Cold War and how the US-USSR dynamic is just one in a set of six dyads that also includes conflicts between peasants and landowners, states and citizens, US hegemony and national sovereignty, capital and labour, and capitalism and socialism.[22] Because many of these conflicts precede the usual dating of the global Cold War (i.e. the end of World War II), US-USSR Manicheanism is said to have latched itself onto, and often exacerbated, longstanding conflicts in the region. Perhaps the most extreme expression of this historiographical position comes from Tanya Harmer, who insists that Latin America’s Cold War truly began “somewhere between the Mexican Revolution and World War II.”[23] The OAS episode in Chile follows a similar, overlapping pattern. Chilean-US relations had been strained well before 1954: by 1929, when US investment in Chile first surpassed UK investment, there arose in Chile a form of nationalism that called for chilenidad (“Chileanness”). This movement was highly critical of the influx of US investment and claimed to represent the Chilean cowboy (huaso), campesino, miner, worker and the lower classes against foreign capital, lumping its adherents together as los rotos or “the broken ones.”[24] Along Booth’s lines, this early conflict between US capital and Chilean labour can be seen as a precursor to a later conflict that, from the US government’s perspective, was another episode in its global struggle against Soviet influence, and that, from the Chilean left’s perspective, was a struggle between US hegemony and Guatemalan sovereignty. Yet the Chilean case presents a novel element to Booth’s notion of overlapping and exacerbated conflicts, as the OAS resolution served also to rekindle prior US-Chilean antagonism that had died down during the 40s. By 1953, chilenidad had petered out into cordiality, with Claude G. Bowers, US Ambassador to Chile (1939-1953), saying to Harry S.
Truman that Chile was the “most inherent real democracy” in South America.[25] Reciprocally, Allende proclaimed in 1945 that “the United States of today is not the United States of yesterday,” praising the northern colossus for its Good Neighbor Policy and its fight against fascism.[26] Hence, US fears of Chilean communism and Chilean opposition to US hegemony both rekindled and exacerbated a prior antagonism which, in 1954, was no longer restricted to US capital and Chilean labour, but encompassed also US military action in a small country 3,500 miles north of Chile. Following newly arisen concern over the Chilean left, the CIA initiated covert action in Chile designed first to hurt Allende’s chances of becoming president and then to support the Chilean military’s efforts to orchestrate a coup against the Allende government. The CIA spent $3 million on Chilean elections in 1964 and $8 million from 1970 to 1973.[27] As mentioned earlier, and as noted by Gilbert M. Joseph, one of the problems with the old Latin American Cold War literature is that it “assessed the conflict almost exclusively in terms of national interest, state policy, and the broad imperatives of the international economy.”[28] On top of state officials and intellectuals, a truly historical account must accord roles to women, the lower and middle classes, peasants, workers, students, religious actors, indigenous and ethnic groups, exiles, etc. US covert operations before the 1964 Chilean election reveal that even a focus on state policy is incomplete without consideration of Chilean women, if only because US officials themselves understood how crucial they were to the goal of combatting Allende. After Allende lost his second bid for the presidency in 1958 to Jorge Alessandri by a mere 33,416 votes, US officials worried over his much-improved vote count, up from the 5.4% he had won in 1952.
Surveying electoral statistics reveals that 34% of Chilean women had voted for Jorge Alessandri against 22% for Allende, an important margin in light of the close overall result.[29] This split among female voters came on the back of a growth in female electoral participation, a growth which the CIA sought to maintain in the lead-up to the next election. Between June and September 1964, the CIA funded radio stations, news broadcasts, cartoons, press advertisements and the distribution of posters, many of which displayed a keen understanding of the prototypical Chilean woman; at the time only 22% of women worked outside their homes, mostly as servants in richer households, while 70% were housewives.[30] CIA-funded propaganda therefore appealed heavily to entrenched gender norms, portraying a hypothetical Allende presidency as a threat to the Chilean household and drawing analogies to other Marxist governments. One poster claimed that Fidel Castro had sent 15,000 children to Russia, wresting them from their mothers’ arms.[31] Radio broadcasts, especially effective on women who spent most of their time at home, claimed that women under communist society had lost their compassionate femininity and were forced into labor “which no civilized country makes people of their sex perform.”[32] By the CIA’s own account, its propaganda tactics “probably” succeeded in swaying Chilean public opinion.[33] However, despite US officials favoring the results of the Pinochet coup on 11 September 1973, it would again be a mistake to attribute omnipotence to the US by giving it full credit for deposing Allende.
For one, the CIA admits that its reported links to the Christian Democratic party in the 1964 elections gave Allende a significant number of additional votes that contributed to his victory in 1970, rendering the CIA’s own tactics self-defeating.[34] Furthermore, despite the CIA’s transition into its “Track II” program after Allende’s victory, which instructed the CIA “to play a direct role in organizing a military coup d'etat in Chile” in collaboration with the Departments of State and Defense, the decision to overthrow Allende was ultimately taken independently of US intervention.[35] Even the very circumstances that allowed for a coup, as was the case in Guatemala, came about independently of US actions. By May 1973, the CIA was aware that members of the Chilean military were plotting a coup, but it was hesitant to pursue this plot over concern that the attempt would be blocked by General Carlos Prats. These concerns were vindicated when, on June 29th, Chile’s Second Armored Regiment attempted an overthrow known as the Tanquetazo and was rebuffed by General Prats and sectors of the military loyal to the government.[36] If General Prats was the biggest preventative factor, then it was through the initiative of opposition groups that the groundwork for a coup arose, as right-wing media, politicians and wives of soldiers initiated a campaign against Prats for refusing to stage a coup, eventually forcing him to resign.[37] Yet even with Prats gone, US officials remained hesitant, with the Defense Intelligence Agency (DIA) referring to his replacement, Pinochet, as a man “unlikely to wield…authority and control.”[38] US officials argued over providing support to right-wing paramilitary forces, eventually compromising on the 40 Committee’s allocation of $1 million to the effort, a fund that never reached Chile.
Even with the decisive moment a day away on 10 September 1973, when a “key officer” of the Chilean army asked Washington officials for military support should difficulties arise during the next day’s coup attempt, the contacted officials were unwilling to commit.[39] Whereas the CIA failed to provide its $1 million, investors in Brazil, Argentina and Bolivia readily provided funds to the fascist paramilitary group Patria y Libertad that would fight against Allende’s forces on September 11th.[40] Additionally, the Brazilian military provided intelligence that proved to be crucial for the mutinous members of the Chilean military. Concerned that the Peruvian military would take advantage of an attempted coup in Chile to seize disputed territory on the Chilean-Peruvian border, retired Chilean admiral Roberto Kelly was sent to Brasilia in mid-August to exchange information with Brazilian officers.[41] As Kelly informed the Brazilian military of the Chilean conspirators’ plan, he received in exchange reliable intelligence that Lima did not intend to blindside the Chileans. It was the “green light” the Chilean dissenters had hoped for, and that it was provided by a fellow Latin American military is proof against US omnipotence. Pinochet gave the order to depose Allende half a month later.

Conclusion

That US governmental decisions were not an end-all-be-all causal factor in Latin American Cold War relations is attested to by the diverse results of the policies the US did or, indeed, did not undertake. Its participation in the invasion of Guatemala contributed to an undesired chain of events that led to a coup in Chile almost two decades later. Ironically, and despite early efforts to support Chilean coup plotters, the US government’s hesitancy to co-conspire with those very plotters at crucial junctures ended with, from US officials’ perspective, the serendipitous deposition of a Marxist leader.
In the Guatemalan case, in which US policy did play a direct hand, the circumstances allowing for the CIA’s support were set up by a transnational, counter-revolutionary network of exiles and militants led by a trio of dictators who often acted outside the bounds of US desires. In Chile, it was non-US actors like investors from other Latin American countries and Brazilian military officers who provided the funds and intelligence needed for Pinochet to confidently overthrow Allende. But to say that US officials were entirely unaware of the role played by non-state actors and wholly naïve to the consequences of their own actions would be a mistake. In reference to the new literature, Max Paul Friedman attributes the turn to the “neglected half” of the Latin American Cold War in part to the use of Spanish- and Portuguese-language sources.[42] Yet some of the sources used in this essay were drawn up by US governmental organizations, and they reveal an attention to non-state actors like Chilean women in the 1960s and admit to the self-defeating results of policies like the anti-Allende propaganda campaign that contributed to his eventual presidency. It is a testament to Latin American Cold War complexity that, despite carefully monitoring these different factors in the region, the US government was unable to live up to the role of puppet master attributed to it by the old historiography. Leandro Vargas Llosa has just completed an MA in European History at University College London (this essay was written during his time at the university). Full title when assigned: What can the new Latin American Cold War Historiography tell us about the Coups of Guatemala in 1954 and Chile in 1973? [1] Gilbert M. Joseph, ‘Border Crossings and the Remaking of Latin American Cold War Studies,’ Cold War History, 19 (2019), pp. 146-147. [2] Vanni Pettinà, Historia Mínima de la Guerra Fría en América Latina, 1st edition (Mexico City, 2018), p. 23.
[3] Odd Arne Westad, The Global Cold War, 1st edition (Cambridge, 2005), p. 111. [4] Carlotta McAllister, ‘Rural Markets, Revolutionary Souls, and Rebellious Women in Cold War Guatemala,’ in A Century of Revolution, ed. by G. Grandin and G. Joseph (Duke, 2010), p. 351. [5] Gerald K. Haines, CIA and Guatemala Assassination Proposals, 1952-4, CIA Historical Review Program, 1995, pp. 1-2. [6] McAllister, p. 355. [7] Haines, pp. 5-8. [8] Ibid., p. 4. [9] Kenneth Lehman, ‘Revolutions and Attributions: Making Sense of the Eisenhower Administration Policies in Bolivia and Guatemala,’ Diplomatic History, 21 (1997), pp. 192-193. [10] Ibid., p. 199. [11] As cited in Lehman, p. 195. [12] Edward Sparks as cited in Lehman, p. 193. [13] Lehman, pp. 194-195. [14] Aaron Coy Moulton, ‘Building their Own Cold War in their Own Backyard: The Transnational, International Conflicts in the Greater Caribbean Basin, 1944-1954,’ Cold War History, 15 (2015), p. 140. [15] Moulton, p. 142. [16] Lehman, p. 2. [17] Moulton, p. 152. [18] Mark T. Hove, ‘The Arbenz Factor: Salvador Allende, U.S.-Chilean Relations, and the 1954 U.S. Intervention in Guatemala,’ Diplomatic History, 31 (2007), p. 630. [19] Ibid., p. 633. [20] Ibid., p. 637. [21] Ibid., p. 655. [22] William A. Booth, ‘Historiographical Review: Rethinking Latin America’s Cold War,’ The Historical Journal, (2020), p. 10. [23] Tanya Harmer, ‘The Cold War in Latin America,’ in The Routledge Handbook of the Cold War, ed. by A. Kalinovsky and C. Daigle (Abingdon: Routledge, 2014), p. 137. [24] Hove, pp. 648-649. [25] As cited in Hove, p. 625. [26] As cited in Hove, p. 651. [27] U.S. Congress, Senate, Select Committee to Study Governmental Operations with Respect to Intelligence Activities, Covert Action in Chile, 1963-1973, Staff Report of the Select Committee to Study Governmental Operations with Respect to Intelligence Activities, 94th Cong., 1st sess., 18 December 1975, Washington D.C., p. 1. [28] Joseph, p. 148.
[29] Margaret Power, ‘The Engendering of Anticommunism and Fear in Chile’s 1964 Presidential Election,’ Diplomatic History, 32 (2008), pp. 932-933. [30] Power, p. 942. [31] Power, p. 939. [32] As cited in Power, pp. 940-941. [33] U.S. Congress, Senate, Select Committee to Study Governmental Operations with Respect to Intelligence Activities, p. 19. [34] Ibid. [35] Ibid., p. 26. [36] Tanya Harmer, Allende’s Chile and the Inter-American Cold War (North Carolina, 2011), p. 225. [37] Ibid., p. 227. [38] As cited in Harmer, p. 227. [39] Harmer, p. 239. [40] Ibid., p. 242. [41] Ibid., p. 220. [42] Max Paul Friedman, ‘Retiring the Puppets, Bringing Latin America Back in: Recent Scholarship on United States–Latin American Relations’, Diplomatic History, 27 (2003), p. 625.
- How important was Soviet support for Ethiopia's Derg regime?
The Derg,[1] or the Provisional Military Administrative Council (PMAC), was the revolutionary military regime, led by Mengistu Haile Mariam, which ruled Ethiopia from 1974 to 1987. The Derg first assumed power in the coup of 12 September 1974, which removed Emperor Haile Selassie,[2] and its initial anti-imperialist motivations were repressively transposed into staunch Marxism-Leninism through Mengistu’s 1977-78 ‘Red Terror’ and the assassination of General Tafari Benti.[3] This paper supports the view that this transposition was one of self-interested convenience. Facing insurgencies in Eritrea, Tigray and the Ogaden, socioeconomic and agricultural crises, and the existential necessity of consolidating political power in a socialist transformation, Mengistu understood the need to obtain the “internationalist revolutionary solidarity” of the Communist bloc.[4] Most important was the support of the Soviet Union, with Cuba, East Germany and eastern Europe playing lesser, varying roles in terms of assistance. This paper posits that Soviet support was invaluable to the Derg regime. In this view, the adoption of Marxism-Leninism, and its benefits, were of paramount importance to the Derg. To this end, this paper will proceed by providing an assessment, and the historical context, of Soviet support to the Derg regime through four dimensions: the political, the military, the economic, and the agricultural. Firstly, this paper will assess the levels of political assistance given to the Derg regime in its self-interested objective of implementing socialist transformation and consolidating power. Militarily, the impacts of Soviet and socialist support to Mengistu in both the Ogaden War and the Eritrean and Tigrayan insurgencies will be examined. 
Ultimately, an economic and agricultural assessment will analyse the impacts of the Derg’s emulation of Soviet-style agricultural policies, namely nationalisation, villagisation, and resettlement. In doing so, the argument that Soviet support was crucial to the Derg regime, in that it provided a model and alliance of Marxism-Leninism, is evinced. To provide an assessment of Soviet support to the Derg, an overview of the historical context in which Soviet-Derg relations emerged is pertinent. The impetus for revolution in 1970s Ethiopia was born out of a desire to overthrow Haile Selassie and abolish imperialism, not out of ‘ideological conviction’.[5] As such, the Derg initially “lacked ideological uniformity”.[6] It was not until 1976 that the Derg first committed to the ideological “establish[ment of] socialist order through transformation”, under the Programme of the National Democratic Revolution (PNDR).[7] Traditionally, Ethiopia had relied upon the United States for its military and economic support, as Somalia had upon the Soviet Union. 
For example, between 1951 and 1976, Ethiopia received over $629 million in combined military and economic aid from the United States.[8] In 1962, the Somali army received $32 million in aid from the Soviet Union, and was also sufficiently equipped militarily by both the Cubans and the Soviets.[9] However, when Somali forces invaded the Ogaden region of Ethiopia in the summer of 1977, motivated by irredentism, President Siad Barre’s government ‘abrogated’ its Soviet friendship treaty.[10] This culminated in the cessation of the weaponry supplies crucial to Somalia’s power.[11] At the same time, US-Ethiopian relations became strained, eventually breaking down, due to the incompatibility between President Carter’s foreign policy emphasis on human rights and Mengistu’s clear violation of such rights in the ‘Red Terror’ repression of his opposition.[12] This led to the ironic situation in which superpower allegiances in the Horn reversed.[13] As will be demonstrated, the convenience of the Soviet Union’s interest in replacing American influence in Ethiopia, and the ideological model of Marxism-Leninism it offered, played into the self-interested hands of Mengistu. The appeal to Mengistu of adopting a Marxist-Leninist ideological conviction was two-fold: on the one hand, Marxism-Leninism provided the Derg with a tried-and-tested formula for consolidating political power throughout Ethiopia, without the stifling requirement of democratic procedure.[14] On the other hand, ideological alignment with the Soviet Union offered the Derg the political, economic, military, and infrastructural support it required, through the ‘internationalist’ solidarity of the Communist bloc. 
In May 1977, Mengistu met with Leonid Brezhnev, Nikolai Podgorny and other officials to sign a ‘Declaration of Friendship’, after receiving Soviet support for his ‘Red Terror’, the aggressive political suppression that solidified his place as head of the Derg.[15] The declaration was the first explicit conferment of Soviet support for the Derg, in respect of Mengistu’s commitments to aid a “successful socialist transformation”.[16] Politically, the Derg was faced with the necessity of consolidating the regime’s power and, in doing so, finding the means to enact a successful ‘socialist transformation’. Sanctioning such a transformation required alignment with the Communist bloc, the emulation of an established socialist model, and the creation of a vanguard party.[17] The Soviet Union, more so than Maoist China, held the answers for Mengistu. It is important to note that Chinese Maoism had been disregarded by the Derg, particularly between 1975 and 1976, when the regime discredited the ‘indigenous socialist’ Ethiopian People’s Revolutionary Party (EPRP) for its “Maoist intentions”.[18] Revolutionary Ethiopia expected that alignment with the Soviets and the Communist bloc would culminate in aid ‘flowing freely’ to the regime, “result[ing] in rapid economic development”, and provide the means for consolidating political power.[19] This expectation, as will be evidenced, was not unfounded. 
Soviet political support for the Derg came in two forms: the active support of the Soviet Union for the creation of a Marxist-Leninist vanguard party to consolidate power (and for alignment with the Communist bloc), and the passive provision of Marxism-Leninism as an ideological framework through which the Derg could implement revolutionary modernizing change and overturn the imperial legacy.[20] The Soviets recommended that, in order to secure the Derg’s dominance over the state and achieve a victorious socialist transformation, the creation of a Marxist-Leninist vanguard party was required. It was not until December 1979 that preparations for the establishment of a vanguard party began, with the founding of the Commission to Organize the Party of Working People of Ethiopia (COPWE).[21] The establishment of COPWE, as attested by Peter Schwab, was the culmination of increased Soviet and Cuban demands for the Derg to assume “greater orthodoxy”.[22] Self-interested convenience first appears here, however, in that Mengistu benefitted from the stability, and the means to coordinate the revolution, that a vanguard party provided, whereas the Soviets gained little more than a sturdier, socialist-orientated ally. Almost five years after the establishment of COPWE, and ten years to the day after the removal of Haile Selassie, the Workers’ Party of Ethiopia (WPE) was founded on 12 September 1984. As the official Ethiopian Marxist-Leninist vanguard party, the WPE was modelled closely on Soviet recommendations, “provid[ing] the [Derg] with an effective organization for monitoring and enforcing national policies, both regionally and locally”.[23] These ideological and regime-style affinities helped Mengistu’s Derg to centralize political administration, enabling the modernization of Ethiopia. 
In return for merely siding with the Soviet Union ideologically, Mengistu obtained billions of dollars in funding and aid, an alliance of “international solidarity” with socialist states, and the Marxist-Leninist model with which to strengthen his originally fragile government.[24] In terms of alliance, within the bipolar context of Cold War international relations and the socialist premise of “internationalist solidarity”, the alignment of Ethiopia with the Soviet Union also meant alignment with the Communist bloc: Bulgaria, Cuba, East Germany, Poland, Hungary, and others. As will become particularly evident in the later assessment of Soviet responses to the 1983-85 famine, this ‘solidarity’ would prove vital to the Derg in the face of Soviet inaction. The passive support provided by the Soviet Union, the provision of Marxist-Leninist ideology itself, is, in this paper’s view, more important than active Soviet assistance, for it provided a model of rigid government and a strong international alliance. Christopher Clapham shrewdly identifies four benefits of Mengistu’s adoption of Marxism-Leninism: the provision of a ‘doctrine of revolution, development, and nation-building, and a source of international support.’[25] The Derg initially lacked ‘ideological unity’ and did not possess the means to implement radical societal change successfully in the revolutionary era. Original Derg policies, such as the ‘ten-point programme’, contained a combination of “socialist and nationalist” facets and did not subscribe rigidly to Marxist ideology.[26] By subscribing to Marxism-Leninism, Mengistu was provided with a relatable, tried-and-tested means of enacting revolution and eradicating the political-economic ‘power of the Ethiopian aristocracy’.[27] The similarities between the Russian and Ethiopian revolutions, in their overthrow of imperial autocrats, their peasant uprisings, and their subsequent utilisation of brutal ‘red terrors’,[28] are stark. 
Developmentally, the Soviet model of revolution provided the Derg with the means to further its modernizing land reform plans, such as nationalization and the creation of peasant associations, which Anderson-Jaquest argues “bore a resemblance to [the] tactics advocated by Lenin’s [initially] weak government after the Russian revolution.”[29] This paper will examine the importance of Mengistu’s Soviet-style agricultural policies in detail when assessing the economic and infrastructural dimensions of Soviet support. As a doctrine of nation-building, Marxism-Leninism provided the Derg with an established model for politically uniting multiple ethnicities under ideological uniformity.[30] In the Soviet Union, Marxism-Leninism acted as the ideological adhesive between Russians, Ukrainians, Georgians, Kazakhs, Uzbeks, and Tajiks, to name just a few. Mengistu would have been reassured by this when facing the issue of unifying Eritrean, Tigrayan, and Western Somali ethnic separatists under Marxism-Leninism, which offered the means to consolidate a multi-ethnic, centralized state. This was particularly important given the motivation of many Derg members to protect and enshrine the national unity of Ethiopia.[31] In view of this assessment, the necessity of Soviet support, both active and passive, has been evidenced. In establishing a revolutionary Marxist-Leninist state governed by a strong vanguard party, and in aligning Ethiopia with the Communist bloc, the Derg became the direct beneficiary of billions of dollars of aid and equipment, Soviet models for emulation, and solid ideological direction, indisputably consolidating its power. This consolidation of political power is evidenced by the Derg’s successful civilianization into the People’s Democratic Republic of Ethiopia in 1987, with Mengistu remaining as leader until 1991. Without these political provisions, Soviet economic, military, and infrastructural support would not have been possible. 
Militarily, three areas of Soviet support required by the Derg can be identified. Firstly, in the aftermath of deteriorating US-Ethiopian relations, the importance of substituting Soviet military assistance for American was paramount. This was a consequence of the Derg’s commitment in December 1976 to restructuring the government along Marxist-Leninist lines, and of the subsequent foreign policy of President Carter, which vehemently opposed Mengistu’s human rights violations.[32] Secondly, the Soviet military assistance provided to the Derg in the Ogaden War, the impetus for Soviet-Ethiopian relations, demonstrates the effectiveness of the Soviet military support provided. Ultimately, however, an assessment of the Soviet impact on the Eritrean and Tigrayan insurgencies demonstrates the military limitations of that support. Alignment with the Soviet Union, in the absence of American support, was invaluable to the Derg in securing the military capabilities it required. Such capabilities were needed to repel domestic insurgencies in both Eritrea and Tigray, and to suppress Somali irredentism in the Ogaden. From the onset of the Ogaden War until 1985, the Soviet Union supported the Derg regime with over $10 billion in military aid and assistance.[33] It was not until the accession of Mikhail Gorbachev and his foreign policy reforms that this substantial military aid decreased.[34] Soviet military support from 1978 to 1985, compared with the $279 million of military aid the United States provided from 1951 to 1976, demonstrates the value of replacing American support promptly with that of the Soviet Union. In a far shorter period than the American tenure, the Soviets increased military funding by over 3,400 per cent, roughly $1.4 billion extra per annum on average. In securing victory in the Ogaden War, the military support offered by the Soviet Union and Cuba was fundamental. 
East German technical support is also of considerable note. The Soviets met with Mengistu in May 1977, two months before the Somali invasion of the Ogaden, to conclude a classified military agreement, under which the Derg’s largely peasant army was supplied with advanced Soviet weaponry.[35] Upon the Somali abrogation of its Soviet friendship treaty on 13 November 1977, an extensive airlift of Soviet weaponry, equipment, and personnel was initiated. The airlift, as Brind evidences, included 1,500 Soviet officials, around 2,000 East German technicians, and 13,000 Cuban troops.[36] East German support was limited to the ideological education of the military and the police. During the conflict, up to March 1978, the Soviet Union delivered an estimated $1.5 billion of military equipment, “including T-54 tanks and SAM-7 flight missiles”,[37] enabling a decisive Ethiopian repulsion of the Somali invaders.[38] However, despite enabling Ethiopian victory in the Ogaden, such Soviet and socialist military support inadvertently strengthened the Eritrean and Tigrayan insurgencies. The very nature of the military regime was to see military action as the resolution to all political disputes. Given the extent of Soviet military and political support, and the strengthening of Ethiopia’s armed forces, Mengistu’s resolve for a continuation of Soviet-aided military incursions was reinforced. Mengistu launched a counteroffensive against the Eritrean People’s Liberation Front (EPLF) in October 1984, equipped with “440,000 active troops, over 750 tanks, and 130 aircraft”, as noted by DeWaal.[39] The Derg secured no decisive victories and, in this paper’s view, these offensives doomed its counterinsurgency ambitions. 
By using, and ultimately sacrificing, Soviet weaponry, the Derg strengthened the EPLF, which was bolstered by captured Soviet equipment and the ‘spoils of war’.[40] As was the case with the Tigrayan People’s Liberation Front (TPLF), Mengistu’s refusal to seek a political settlement persistently emboldened the very insurgents he sought to repress. Admittedly, this was an indirect consequence of Soviet military support; nevertheless, it detracts from the unnuanced argument that Soviet support was wholly effective and important to the Derg. The economic and agricultural dimensions of Soviet support to the Derg can be treated together, as most infrastructural-agricultural projects were undertaken with the intention of achieving mutual socialist economic benefit.[41] The defining agricultural crisis facing the Derg was the Great Famine of 1983-85. Ironically, this was caused in part by the Derg’s implementation of Soviet recommendations on agricultural reform, and the Derg received negligible Soviet aid in the face of a disaster those recommendations had helped to create. However, the ‘internationalist solidarity’ of the socialist bloc, and the aid of the West, helped to offset these negative implications, suggesting that the adoption of Marxism-Leninism was more important to the Derg regime than Soviet support itself. The Derg originally implemented socialist, nationalist, and ‘agrarian’ reforms after the removal of Haile Selassie in September 1974.[42] After the ‘Red Terror’ of 1977-78 and Mengistu’s alignment with Marxism-Leninism, his agricultural policy became centred upon Soviet models, in an attempt at modernization in post-revolutionary Ethiopia. 
These Soviet models for meaningful socialist land reform included ‘nationalisation’, ‘modernisation’, ‘resettlement’, and ‘villagisation’.[43] The Derg passed the ‘Public Ownership of Rural Lands Proclamation’ in 1975,[44] nationalising all rural lands and commercial farms, establishing ‘peasant associations’, and giving the Derg the power of forced resettlement for villagisation.[45] These Soviet-style reforms exacerbated the agricultural crises to which Ethiopia had so often been subjected: total militarised state control over the food supply and its production left the Derg completely reliant upon external humanitarian support when famine hit in 1983. Images of starving children filtered back to the West, triggering an international humanitarian response by late 1984. The United States alone donated $160 million in food aid.[46] Allowing the West to supply aid suited the Soviet Union’s interests, on the stated grounds that “third world economic dislocations are a heritage of [Western] colonial...policies.”[47] The Soviet Union provided the Derg with over $10 billion in military aid between 1978 and 1985, but offered little more than $3 million in food aid in 1984.[48] The divergence between the importance of Soviet support and that of Marxist-Leninist support becomes evident in the responses to the famine. The Eastern socialist bloc contributed over $25 million in food aid to the Derg, with Zhivkov’s Bulgaria alone donating $12.7 million.[49] This demonstrates two things: firstly, despite the importance of Soviet support politically and militarily, the Soviet Union all but abandoned the Derg in its hour of need, a considerable limitation of such support. Secondly, in the absence of Soviet action, the Communist bloc did support the Derg substantially, in an act of internationalist solidarity, suggesting that the adoption of Marxism-Leninism bore greater significance to the regime than Soviet support alone. 
In conclusion, an assessment of the extent to which Soviet support was important to the Derg regime leads one to identify such support as both explicit (the action of Soviet assistance) and implicit (the provision of, and alignment with, Marxist-Leninist ideology). Whilst explicit Soviet assistance in political establishment, military support, and the provision of revolutionary agricultural models had an undeniable influence upon the Derg regime, the view that such support was conditional upon subscription to Marxism-Leninism is most convincing. It can also be argued that the ideological conviction of the Derg was fundamentally rooted in its own self-interest. The Soviet Union provided the Derg with a relatable, anti-imperial revolutionary model, which proved decisive to the Derg’s consolidation of power, its creation of a state vanguard party, and its crucial military successes in the Ogaden. However, this otherwise reliable, consistent support was not supplied in the Derg’s hour of need. The lack of a Soviet humanitarian response to the 1983-85 famine, and the premise that the famine was, in part, a consequence of Soviet recommendations, detracts from the view that Soviet support was entirely fundamental to the Derg. It does, however, prove Marxism-Leninism to be the foundation of important support to the Derg. Marxism-Leninism provided the Derg with the crucial political blueprint for consolidating power and fruitfully resulted in alignment with the Soviet Union, with the subsequent receipt of billions of dollars in assistance. It also, crucially, ensured the Derg regime was secured in ‘internationalist solidarity’ with its ideological allies, as demonstrated by Soviet-Cuban military support and the vital Eastern bloc humanitarian aid provided in the absence of Soviet assistance. Explicit or implicit, it is indisputable that Soviet interest, support, and the provision of Marxism-Leninism were of imperative, self-interested importance to the Derg. 
Will Kingston-Cox has just finished his second year of a BA in History and Politics at Warwick University. Notes: [1] Amharic for ‘committee’; see Harry Brind, ‘Soviet Policy in the Horn of Africa’, International Affairs (Royal Institute of International Affairs 1944-), Vol. 40 (Winter, 1983-1984), p. 90. [2] Agence France-Presse, ‘Ethiopia Military Assails Emperor’, The New York Times, 12 September 1974, p. 81, https://www.nytimes.com/1974/09/12/archives/ethiopia-military-assails-emperor-he-also-loses-support-of-church.html (last accessed 19 February 2022). [3] Jacob Wiebel, ‘”Let the Red Terror Intensify”: Political Violence, Governance, and Society in Urban Ethiopia, 1976-78', International Journal of African Historical Studies, Vol. 48 (2015), pp. 13-27; Benti led the Derg from November 1974 to February 1977. [4] Jiri Valenta, ‘Soviet-Cuban Intervention in the Horn of Africa: Impact and Lessons’, Journal of International Affairs, Vol. 34, No. 2 (1980-81), p. 361. [5] Brind, ‘Soviet Policy in the Horn of Africa’, p. 90. [6] Edmond J. Keller, ‘Drought, War, and the Politics of Famine in Ethiopia and Eritrea’, The Journal of Modern African Studies, Vol. 30, No. 4, p. 611. [7] Mulatu Wubneh and Yohannis Abate, Ethiopia: Transition and Development in the Horn of Africa (Boulder, Col.: Westview Press, 1988), pp. 205-210, in Tommie Crowell Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship: A Case Study in Asymmetric Exchange’ (unpublished PhD thesis, University of London, 2002), p. 63. [8] Brind, ‘Soviet Policy in the Horn of Africa’, p. 91. [9] Steven David, ‘Realignment in the Horn: The Soviet Advantage’, International Security, Vol. 4, No. 2, p. 72; Valenta, ‘Soviet-Cuban Intervention in the Horn of Africa’, p. 353. [10] Valenta, ‘Soviet-Cuban Intervention’, p. 95. [11] Ibid., pp. 
94-96. [12] Keller, ‘Drought, War, and the Politics of Famine', p. 613; Wiebel, ‘”Let the Red Terror Intensify”', pp. 13-27. [13] Valenta, ‘Soviet-Cuban Intervention in the Horn of Africa', p. 353. [14] Paul B. Henze, ‘The Ethiopian Revolution: Mythology and History’, Northeast African Studies, Vol. 12, No. 2/3 (1990), pp. 3-4. [15] ‘Ethiopia and Soviet Sign Agreements on Closer Tie’, The New York Times, 7 May 1977, p. 3. Available at: [Accessed 19 February 2022]. [16] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 66. [17] Ibid., pp. 9-10. [18] Colin Legum and Bill Lee, Conflict in the Horn of Africa (London: Rex Collings, 1977), p. 44, in Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 66. [19] Henze, ‘The Ethiopian Revolution', pp. 3-4. [20] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 69. [21] Ibid., p. 67. [22] Peter Schwab, ‘Political Change and Famine in Ethiopia’, Current History, Vol. 84, No. 502, North Africa (May 1985), p. 222. [23] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 69. [24] Diana L. Ohlbaum, ‘Ethiopia and the Construction of Soviet Identity, 1974-1991', Northeast African Studies, Vol. 1 (1994), pp. 63-89. [25] Christopher Clapham, ‘The socialist experience in Ethiopia and its demise’, The Journal of Communist Studies and Transition Politics, Vol. 8, No. 2 (1992), pp. 107-110. [26] Christopher Clapham, Transformation and Continuity in Revolutionary Ethiopia (Cambridge: Cambridge University Press, 1988), pp. 45-46, in Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 62. [27] Ibid. [28] Sergei Melgunov, ‘The Record of the Red Terror’, Current History (1916-1940), Vol. 27, No. 2 (1927), pp. 198-205. [29] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 184. [30] Clapham, ‘The socialist experience in Ethiopia and its demise’, pp. 107-110. [31] Ibid., p. 
108. [32] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', pp. 104-106. [33] Reuters, ‘Soviets Pull Out Advisers at Ethiopia Fronts’, The New York Times, 22 March 1990, Sect. A, p. 12. Available at: [Accessed 19 February 2022]. [34] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', pp. 66-76. [35] Brind, ‘Soviet Policy in the Horn of Africa’, p. 85. [36] Ibid. [37] Valenta, ‘Soviet-Cuban Intervention in the Horn of Africa', p. 363. [38] Ibid., p. 93. [39] Alex DeWaal, Evil Days: Thirty Years of War and Famine in Ethiopia (New York: Human Rights Watch, 1991), p. 182, in Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 118. [40] Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 118. [41] Ibid., p. 66. [42] Clapham, Transformation and Continuity in Revolutionary Ethiopia, pp. 45-46, in Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 62; Anderson-Jaquest, ‘Restructuring the Soviet-Ethiopian Relationship', p. 175. [43] Ibid., p. 175. [44] ‘Public Ownership of Rural Lands Proclamation No. 31/1975’, Ecolex.org, 2022 [Accessed 1 March 2022]. [45] Clapham, ‘The socialist experience in Ethiopia and its demise’, p. 107. [46] Schwab, ‘Political Change and Famine in Ethiopia’, p. 223. [47] Ibid. [48] Ibid., pp. 222-223. [49] Ibid., p. 223.
- Review: E.P. Thompson's Customs in Common
Customs in Common consolidated E. P. Thompson’s renown as a brilliantly original historian, provocative and passionate though always heavily evidence-based. Standing alongside The Making of the English Working Class, this volume of essays sought to redress the historical perception of the 18th century as a time of declining customary usages,[1] instead emphasising the robustness of the customs of working people in spite of advancing capitalism. This challenge provoked controversy, shifting the nexus of debate from workers’ living standards to capitalism’s culturally disruptive power.[2] The shift was led by the essays ‘Time, Work-Discipline and Industrial Capitalism’ and ‘The Moral Economy of the English Crowd in the Eighteenth Century’.[3] The latter acquired a new buttress of defence in this volume, in the form of ‘The Moral Economy Reviewed’, having been challenged since its original publication.[4] These essays are the focus of this review, in recognition of their exceptional contributions to the study of 18th-century English working life and customs. ‘The Moral Economy’ is perhaps the most consequential essay of the volume. Thompson rails against the reductionism of previous historians who explained away, instead of sufficiently analysing, the causes of food riots as compulsive responses to economic stimuli. Instead, he shows that while economic factors could trigger insurrection, crowds had rational, “legitimising notions”, informed by a “moral economy” encompassing customary social and economic norms and responsibilities. The paternalist tradition of market regulation in times of dearth declined in the face of a new laissez-faire political economy and repression from authorities. Nevertheless, working people defended this paternalism, resisting free-market capitalism as a crowd,[5] enforcing their customary protections, including price setting, and doing so, Thompson claims, generally non-violently. 
The essay provoked much criticism, to which Thompson responded fairly comprehensively in ‘The Moral Economy Reviewed’. He covers historians’ views ranging from those he labels ‘positivists’ to ‘modernisation theorists’ to those who fundamentally misunderstood his writing. In his rebuke, however, Thompson’s wit at times strays into discourtesy, with needlessly colourful retorts drifting into the personal.[6] Nonetheless, this detracts neither from his engaging writing nor from the impressive defence of his argument. Thompson dedicates much time to undermining economic historians, some of whom propose a notion of exclusive and direct causality between economic factors and riot.[7] He utilises his breadth of reading, neatly employing international comparisons (such as Ireland and India) to underline the importance of contextual and cultural considerations.[8] Thompson also dismantles economic arguments he believes too theoretical, which defend the logic and morality of Adam Smith’s laissez-faire. Rather, he emphasises laissez-faire’s deficiencies in solving scarcity, showing instead its role in deepening such crises. Nevertheless, Thompson’s response to criticism is not faultless. He fails to address adequately the disagreements over the nature of the food riots. Many have argued that, rather than embodying a rejection of free-market capitalism, the riots represent retaliation for perceived exploitation. Thompson does not address the time-lag between the implementation of market mechanisms, declining paternalism, and the outbreaks of rioting.[9] Nor does he address the persuasive claim that price setting was symptomatic of a battle fought within the capitalist structure, not against it.[10] Price rioters were not, regardless of economic context, demanding customary prices. They considered and accepted changes dictated by harvest conditions and inflation.[11] 
Thompson’s static representation of workers’ demands is at odds with his own definition of custom as being “in continuous flux”.[12] He describes only a complete customary shift in the moral economy as pressure upon wages increased in the 19th century.[13] Thompson’s depiction of food riots as non-violent is also problematic. There is evidence that riots were as often disorderly as orderly. One sample of 128 food riots in England (1790-1810) shows 93 cases of foodstuffs being seized or damaged, 83 cases in which they were sold or negotiated, and 43 cases which contained both.[14] In portraying capitalism as largely destructive, apathetic to customs, and opposed by non-violent workers, he underplays violence and overplays the degree of system rejection. ‘Time, Work-Discipline and Industrial Capitalism’ was less controversial,[15] though just as consequential, revising historical understanding of the nature of work and of how workers were acclimatised to the capitalist work system. Thompson details the shift from task-orientated to more regimented work patterns resulting from the development of industrialisation, new technologies (particularly the clock), new theologies, schooling, and the suppression of fairs and sports.[16] The result was the transformation of the human perception of time: “Time is now currency: it is not passed but spent”.[17] Though this process met resistance, resistance was gradually directed not against the new work-time but about it, pressing for reductions in hours rather than the abandonment of hourly regimentation. The essay has not been without criticism, particularly from Glennie and Thrift, who argue that Thompson’s conceptualisation of time as unitary ignores temporal complexity,[18] evident in differences in perception between communities and cultures. However, they overstate Thompson’s specificity. Thompson did not argue that everyone perceived time, and experienced the shift in work-time, in the same manner. 
He details general perceptions of time concerning work and its general shift. While Thompson argued the centrality of factors including industrialisation, he did not argue that others could not affect time-structure. Glennie and Thrift also misunderstand Thompson’s stance on religion. To state that he believed religion had a ‘narrow influence’, simply affecting intellectual culture rather than working people[19], is to misinterpret the essay and to presume a complete shift from his earlier work on Methodism[20], which powerfully explains the psychologically formative role of religion in shifting work-ethos. Furthermore, they do not attempt to critique Thompson’s explanatory factors; they simply add others, thus posing more questions than they answer. Thompson’s work on time and work discipline thus retains its explanatory power and contemporary relevance, despite challenges. Customs in Common is highly significant, revising the historical understanding of 18th-century society – despite disagreements over extent, it engendered an enduring consideration for cultural complexities. While Thompson’s arguments are not always entirely defensible, one can rely on them being incredibly well-supported and engagingly written.

Anonymous

Notes: [1] E. P. Thompson, Customs in Common (London, 1991), p. 1. [2] Nicholas Rogers, ‘Plebeians and Proletarians in 18th-Century Britain’, Labour/Le Travail, Vol. 33 (Spring 1994), p. 254. [3] Henceforth, ‘The Moral Economy’. [4] Thompson, Customs in Common, pp. ix-x. [5] Thompson, Customs in Common, p. 9. [6] For example: Thompson, Customs in Common, p. 272 (“fat-headed notions”) and p. 260 (“thick-headed”). [7] Thompson, Customs in Common, p. 262. [8] Ibid., p. 269. [9] John Stevenson, ‘The ‘Moral Economy’ of the English Crowd: Myth and Reality’ in Anthony Fletcher (ed.), Order and Disorder in Early Modern England (Cambridge, 1985), p. 237. [10] John Stevenson, ‘Review of Customs in Common by E. P.
Thompson’, The English Historical Review, Vol. 108 (April 1993), pp. 408-409. [11] Ibid. [12] Thompson, Customs in Common, p. 6. [13] Ibid., p. 249. [14] John Bohstedt, ‘The Myth of the Feminine Food Riot: Women as Proto-Citizens in English Community Politics, 1790-1810’, in Harriet B. Applewhite and Darline G. Levy (eds.), Women and Politics in the Age of Democratic Revolution (1990), p. 59, cited in John Bohstedt, ‘The Moral Economy and the Discipline of Historical Context’, Journal of Social History, Vol. 26 (Oxford, 1992), p. 274. [15] Paul Glennie and Nigel Thrift, ‘Reworking E. P. Thompson’s “Time, Work-discipline and Industrial Capitalism”’, Time & Society (October 1996), p. 277. [16] Thompson, Customs in Common, p. 394. [17] Ibid., p. 359. [18] Glennie and Thrift, ‘Reworking Thompson’s “Time, Work-discipline and Industrial Capitalism”’, p. 276. [19] Ibid., p. 283. [20] E. P. Thompson, The Making of the English Working Class (London, 2013), Chapter 11.
- Mediation and Facilitation: Peace Talks in the Arab-Israeli War
Any peace process involving the presence of a third party raises questions as to the nature and extent of this participation. With reference to Hilde Henriksen Waage’s conception of mediator and facilitator,[1] this essay seeks to differentiate between the roles played by Presidents Carter and Clinton, and Norwegian academics Mona Juul and Terje Rød-Larsen, in the Arab-Israeli negotiations at Camp David in 1978 and 2000, and in Oslo in 1993. From Carter’s active mediatory approach that bridged the gap between Egypt and Israel, culminating in two distinct agreements, to the success of Norway’s neutral facilitation in establishing the Declaration of Principles, the substantial positive influence of third-party involvement will be illuminated. Despite this, however, this essay seeks to demonstrate that external participation can only achieve so much. As a result of the implications of inflexible parties, geopolitical loyalties and limited domestic capabilities, even the steps taken in 1978 and 1993 were small ones. Highlighted most aptly in Clinton’s failure to achieve any agreement in 2000, without the right environment, mediation does not guarantee a positive outcome. In this, third-party involvement is best understood as a small element of a set of wider criteria necessary for success when attempting to induce peace.

CAMP DAVID: 1978 – The Mediation of Jimmy Carter

In 1978, secret negotiations mediated by King Hassan II of Morocco between Israel and Egypt reached a deadlock over control of the West Bank and the Palestinian right to self-determination.
Believing their failure lay in the lack of authoritative parties in attendance, US President Carter extended an invitation to Menachem Begin and Anwar Sadat to move forward at Camp David.[2] In examination of the active nature of his mediation, this section will highlight the significant role played by the United States and its President in creating the structure, environment, and political pressure necessary to bring about two unprecedented agreements: a framework for bilateral peace between Egypt and Israel, and a wider plan for a ‘just, comprehensive and durable settlement’ across the Middle East.[3] From the wooded, retreat-like setting of Camp David that encouraged a degree of camaraderie between adversaries,[4] to the privacy such a location enabled (a strict control of the flow of information to outside media was maintained),[5] the United States provided the environment in which progress could be made. Where previous negotiations had consisted of informal discussions between low-level politicians, Camp David offered the first opportunity for state-leaders to convene in the same location with a single purpose. Beyond these geographic considerations, it was Carter’s personal conceptualisation of his role as a ‘full partner’ in negotiations that created the structure responsible for their success.[6] From his encouragement of an international foundation in his support for reference to UN Resolutions in both frameworks, to his detail-orientated approach that saw him negotiating into the early hours,[7] annotating large-scale maps of the Sinai by hand,[8] and editing every draft of the accords himself,[9] Carter’s hands-on approach and diplomatic stamina ensured sustained progress. The importance of this is evidenced in the events following the breakdown of negotiations between Sadat and Begin over the resolution of the Sinai. Had the two parties been in discussion without the presence of an external mediator, it is unlikely the process would have continued. 
Owing to both Carter’s shift towards shuttle-diplomacy between the two delegations and, more importantly, the considerable political influence of the United States (with both financial aid at stake and the threat of broken relations with a global superpower explicitly articulated), neither side was willing to walk away.[10] While academics like Tom Princen have attributed Carter’s commitment to a personal adherence to ‘moral standards’,[11] the reality is more complex. A June 1978 poll found that 42% of Americans believed their President not to be ‘in control’ of US political affairs.[12] It was on the basis of a desire to redefine this reputation that Carter initiated Camp David. Although the 1978 negotiations culminated in two agreements, the vague language (no attempt was made to define Palestinian ‘autonomy’) and non-committal time frames in both can partly be attributed to the nature of US mediation. From a realisation that, in order to conclude both frameworks, Begin could not be pressured into a total freeze on settlements,[13] to the loyalties at play in Washington (the American Israel Public Affairs Committee consistently lobbied the White House against consideration of a Palestinian homeland),[14] Carter chose to allow ambiguities in order to adhere to domestic concerns. Understanding the positive implications for his legacy of securing a peace agreement in the Arab-Israeli conflict, sacrifices were made. Consider this alongside US involvement’s impact on the absence of both the PLO (dismissed on the grounds of their rejection of Resolution 242)[15] and Jordan (tense US-Jordanian relations meant they were not present to clarify the Palestinian responsibility awarded to them in the Framework for Peace in the Middle East),[16] and the role played by external mediation in contributing to the limitations of Camp David is highlighted. Without third-party involvement, the progress made in 1978 would not have occurred.
Carter’s participatory mediation and the presence of a strong political power like the US forced both Sadat and Begin to remain engaged in the peace process despite their differences. A logical consequence of such active involvement, however, is that the domestic requirement to succeed politically took priority over ensuring the details of Palestine’s future.

OSLO: 1993 – Norwegian Facilitation

The decades following the 1978 Camp David talks were largely ones of disappointment. While the frameworks agreed under Carter offered an important step, the details of the peace in question remained highly contested. Commencing in 1993 and spanning fourteen sessions over an eight-month period, the Oslo track emerged as an attempt to overcome the deadlock reached in Madrid over Israeli opposition to PLO involvement.[17] In discussions between the academic Yair Hirschfeld, Norwegian social scientists Terje Rød-Larsen and Mona Juul, and PLO Treasurer Ahmed Qurei (Abu Ala),[18] conversations in Oslo offered a stark contrast to the tense formalities of US mediation at Camp David (and, to an extent, discussions in Washington during this time). Instead, the Oslo process symbolised a period of open conversation and gradual progress encouraged by a loose and neutral Norwegian facilitation. Although the methods of their external involvement were significantly different, however, the similarities between negotiations in Oslo and Carter’s 1978 Camp David are numerous. While some of this comparison takes a positive form in 1993’s success in creating codified Accords resembling the frameworks produced fifteen years prior (including a mutual recognition between Israel and the PLO in the form of letters between Yasser Arafat and Yitzhak Rabin, and a Declaration of Principles setting an agenda for negotiations on Palestinian self-government),[19] both processes are also united in their issues.
The section that follows thus seeks to highlight that although a less heavy-handed facilitatory approach had its benefits, external mediators in Norway and the agreements they helped produce were equally as limited as those that preceded them. Hilde Henriksen Waage clarifies the distinction between mediator and facilitator. Where Jimmy Carter adopted an approach of active involvement, Waage stresses that the Norwegian academics involved in the early stages of the Oslo process were considerably more removed from the details of negotiations. Their facilitation involved arranging practicalities, from booking flights and hotels to creating opportunities for informal contact between the two parties; academics like Juul and Larsen sought the creation of a neutral and informal environment designed to encourage collaboration.[20] This ‘gentle’ approach meant Norwegian facilitators could establish ground rules (key tenets like the mandating of total secrecy and the prohibition of dwelling on past grievances) in a manner that lacked the sense of threat and coercion at play when the third party in question occupies a more powerful political position globally. While negotiations in 1978 had also operated on the basis of secrecy, and the details of Camp David were unknown to the public, the attention of the world had remained fixated on the closed doors of its eleven cabins throughout. In contrast, in its origins as grassroots discussions on theory and principles between academics as opposed to state leaders, the Oslo process afforded its participants an unprecedented degree of privacy and freedom.
Creating an atmosphere that allowed both parties to explore the other’s respective position without the commitment or legal implications involved in formal negotiations between Israeli officials and the PLO,[21] Norwegian facilitation sought to establish a micro-level of trust between adversaries that would eventually translate on a wider stage.[22] While the nature of the Declaration of Principles that followed serves to highlight the somewhat flawed logic involved in this suggestion, without the ‘Oslo Spirit’ of friendliness and the diplomatic progress it inspired, it is unlikely that Arafat and Rabin would have found themselves in a position where mutual recognition was even a possibility. While the sense of equality and camaraderie created by Norway’s relatively insignificant political reputation had a positive impact on encouraging open discussion between Israel and the PLO, their limited domestic influence can also be used to understand the flawed nature of the Declaration of Principles it created. Norwegian academics, acting as informal spokespeople for a neutral facilitating party, did not have access to the same tools available to the USA in 1978. Unable to provide the financial aid or political threat that inspired commitment in Begin, Norway lacked the means with which to apply pressure on the Israeli delegation, a fact that became even more apparent with the formalisation of their representation and the arrival of Uri Savir in May. Described by Waage as a fatal flaw in the facilitative approach, the sense of ‘equality’ this method fosters thus exists only on a superficial level – in reality, the potential of a powerless facilitator is no more than the strongest party will allow.[23] In practice, this dynamic revealed itself in the Norwegian tendency to bow to the demands of Israeli representatives. 
From pressuring Arafat into committing to political decisions over the telephone in broken English, to the almost sole focus of Norway’s spokespeople on encouraging flexibility in Abu Ala,[24] external participation in the Oslo Process suffered from a pro-Israeli bias. Academics like Avi Shlaim have applauded the Declaration of Principles as the ‘mother of all breakthroughs in the century-old conflict’;[25] however, Norway’s inability to apply any real pressure on Israel means the reality is quite different. Reminiscent of its 1978 predecessor, the DOP offers the same ambiguities arguably to be expected from a document outlining ‘principles’ as opposed to a detailed plan of action. From the addition of appendixes clarifying that decisions on withdrawal would be subject to negotiation with Israel, to its deferral of key questions like the division of Jerusalem to future final-status talks,[26] the document consistently fails to address the most contentious issues. Although in 1978 this tendency towards ambiguity and non-committal clauses can be traced to US political concerns, and in Norway it was instead partly the result of a facilitating power unequipped to tackle the disparity between adversaries, in both cases the outcome was the same. With third-party involvement limited by external factors, both Camp David and the Oslo Process left Palestinians, in particular, unconvinced of their supposed success.

CAMP DAVID: 2000 – The Mediation of Bill Clinton

Arguably, the negotiations at Camp David in 2000 should have been the most successful thus far. Combining the symbolic setting of earlier successes, the ‘friendly’ atmosphere crafted in Norway in 1993 and a stronger mediating power with the ability to apply pressure on both sides in the form of Bill Clinton, the failure to produce any formal agreement might appear somewhat strange.
It is in consideration of both the nature of third-party involvement and, more importantly, the wider circumstances in which this intervention took place, that the ultimate lack of success in 2000 is explained. From the outset, the aims of these negotiations were ambitious. Hoping to build on both the rejuvenated calls for peace sparked by the election of Ehud Barak in 1999 and the limited progress made along the Stockholm Track before discussions reached a deadlock in May, it was hoped that Camp David would bring about the conclusion of both a Framework Agreement (FAPS) and a Comprehensive Agreement (CAPS) on Permanent Status.[27] Conducted in an American-curated atmosphere of casual camaraderie (from the lack of business attire to opportunities for delegations to dine and exercise alongside each other),[28] Clinton combined a hands-on style of mediation that identified him as the architect of the environment with an emphasis on informality. Hosting a variety of discussions, from committee meetings on individual issues like water and the economy, to one-on-ones with each state leader, and the familiar maintenance of privacy in the issuing of only a single phone to each delegation,[29] Bill Clinton’s active approach was initially successful in encouraging serious discussion. Despite this, however, the fourteen days that Ehud Barak and Yasser Arafat spent negotiating eventually proved fruitless. The reasons for this failure are twofold. First, as in previous cases, the mediator in question played a significant role. While on a surface level the discussions at Camp David in 2000 might appear almost entirely to resemble those conducted by Jimmy Carter two decades earlier, they could not have been more different. Although both Presidents had hoped to use their respective negotiations as a tool to conjure domestic support, Bill Clinton’s desire for a crowning political achievement lacked the foundations of morality and principle on which Carter built his reputation.
Consider this lack of personal appeal alongside the implications of the geopolitical environment in which negotiations took place, and the ultimate failure of Camp David appears almost inevitable. Having stressed to Clinton on numerous occasions that the moment was far from ripe for peace (particularly considering the lack of agreement between Israel and Palestine in Stockholm), when the US chose to proceed anyway, Yasser Arafat arrived at Camp David disillusioned.[30] Believing his presence to be a result of Clinton’s domestic aspirations, which saw the need to conclude peace before Barak suffered a vote of no confidence, Arafat perceived a two-against-one dynamic from the start. Not only did the US President fail to prepare the ground for Arab negotiation by disregarding Arafat’s initial concerns, but his consistently pro-Israel stance also meant he lacked respect from the Palestinian delegation.[31] Without this foundation of mutual trust between party and mediator, the conflict between Israel and Palestine on issues like the boundaries of withdrawal, and the reluctance of both parties to compromise (for Barak, a simple expression of willingness had sparked the end of his political career), proved irreconcilable. In this, the fact remains that external involvement can only achieve so much on its own – when the political moment is not itself conducive to peace, even the most successful mediator is likely to fail.

Concluding Remarks

Both the nature and roles of third-party actors in the Arab-Israeli peace processes of 1978, 1993 and 2000 differed. From Jimmy Carter’s active mediatory participation to a more neutral style of Norwegian facilitation, external involvement in the 20th century succeeded in creating an environment angled towards progress. Culminating in three formal agreements, it was the collaborative spirit created by academics in Oslo and the influence of the US presence that allowed adversaries on both sides of the conflict to co-operate.
Despite this, however, these talks are as united in their flaws as they are in their successes. With the influence of personal ambition and limited domestic strength establishing an Israeli bias in both instances, the agreements created lacked substance. It is in the case of Bill Clinton’s 2000 Camp David talks, however, when the limiting implications of active mediation were combined with an unfavourable political moment, that the finite capabilities of third-party participants are illuminated. In refusing to address Palestinian concerns in favour of pushing peace for the sake of his own reputation, Clinton failed to establish a sense of trust. Consider this alongside the lack of confidence in both parties (a result of tense personal relations and the failure of earlier attempts at peace), and even the President’s hands-on approach lacked the ability to force reconciliation. In this, and in the vague ambiguities of the agreements created decades earlier, the ultimately limited nature of external involvement is highlighted.

Harriet Solomon is currently working towards an MA in Modern History at the London School of Economics.

Notes: [1] Hilde Henriksen Waage, ‘Norway’s Role in the Middle East Peace Talks: Between a Strong State and a Weak Belligerent’, Journal of Palestine Studies, 34:4 (2005), p. 8. [2] Kirsten E. Schulze, The Arab-Israeli Conflict (London: Routledge, 2008), pp. 53-54. [3] ‘Camp David Accords – The Framework for Peace in the Middle East’, National Archives, (1978) [Accessed 3 May 2021]. [4] Daniel Strieff, Jimmy Carter and the Middle East: The Politics of Presidential Diplomacy (New York: Palgrave Macmillan, 2015), p. 124. [5] Ibid, p. 121. [6] ‘Statement by President Carter prior to his departure for Camp David – 3 September 1978’, Israel Ministry of Foreign Affairs, pp. 4-5, (1977-1979) [Accessed 5 May 2021]. [7] Kenneth W. Stein, Heroic Diplomacy: Sadat, Kissinger, Carter, Begin and the Quest for Arab-Israeli Peace (London: Routledge, 2002), p. 252.
[8] Strieff, Jimmy Carter and the Middle East, p. 130. [9] Ibid, p. 122. [10] Shibley Telhami, ‘Evaluating Bargaining Performance: The Case of Camp David’, Political Science Quarterly, 107:4 (1992-93), pp. 630-631. [11] Tom Princen, ‘Camp David: Problem-Solving or Power Politics as Usual?’, Journal of Peace Research, 28:1 (1991), p. 58. [12] Strieff, Jimmy Carter and the Middle East, p. 123. [13] Janice J. Terry, ‘The Carter Administration and the Palestinians’, Arab Studies Quarterly, 12:1/2 (1990), p. 159. [14] Ibid, p. 155. [15] Ibid, p. 157. [16] Nigel Ashton, ‘Taking Friends for Granted: The Carter Administration, Jordan and the Camp David Accords 1977-1980’, Diplomatic History, 41:3 (2017), p. 620. [17] Schulze, The Arab-Israeli Conflict, p. 80. [18] Avi Shlaim, ‘The Oslo Accord’, Journal of Palestine Studies, 23:3 (1994), p. 30. [19] Ibid, pp. 24-25. [20] Waage, ‘Norway’s Role in the Middle East Peace Talks’, p. 8. [21] Schulze, The Arab-Israeli Conflict, p. 80. [22] Jane Corbin, Gaza First: The Secret Norway Channel to Peace Between Israel and the PLO (London: Bloomsbury, 1994), p. 67. [23] Waage, ‘Norway’s Role in the Middle East Peace Talks’, pp. 19-20. [24] Ibid, p. 18. [25] Shlaim, ‘The Oslo Accord’, p. 24. [26] Ziad Abu Amr, ‘The View from Palestine: In the Wake of the Agreement’, Journal of Palestine Studies, 23:2 (1994), p. 77. [27] Schulze, The Arab-Israeli Conflict, pp. 83-84. [28] Akram Hanieh, ‘The Camp David Papers’, Journal of Palestine Studies, 30:2 (2001), p. 77. [29] Ibid, pp. 77-78. [30] Ibid, p. 76. [31] Ian S. Lustick, ‘Camp David II: The Best Failure and Its Lessons’, Israel Studies Bulletin, 16:2 (2001), p. 5.
- The influence of ideas and practices on elite Roman houses in the late Republic and early Empire
Ideas and practices of the Roman elite undeniably altered and informed the design and decoration of their houses and villas. Whilst the perception of luxury shifted over time, which led to some changes in the style of decoration later on, ideas such as the balance between work and nature, wealth or austerity, and commemoration, as well as the practices of relaxation or those in the world of business, continually influenced how these houses were constructed and decorated. The first idea and practice that informed the design and decoration of elite Roman houses and villas is that of the house being a place for business, and the activities that were carried out there. Vitruvius writes extensively on this in his De Architectura,[1] detailing that those who do business in country produce must have stalls and shops in their entrance courts, with granaries and storerooms. He explains that the focus of country villas should be on keeping the produce fresh and in a good condition, rather than on ornamental beauty and luxury, thus suggesting that houses in the country, whilst being focused upon the production, storage, and harvest of produce, were not used for business meetings, as, according to Vitruvius, they should be kept on the more modest side. However, he does state that for capitalists and farmers of the revenue, their residences should be comfortable and showier; they must be secure against robbery, and roomy and handsome enough to accommodate meetings with advocates and public speakers. This suggests that for houses in the town and city there is more space for luxury and beauty, as guests will inevitably be taken into the home for business matters. And indeed, this is supported by the archaeological evidence of houses. For example, the House of the Mosaics at Herculaneum contained a structure with a series of pillars on either side, and a second storey with clerestory windows, reminiscent of a public basilica.
Furthermore, both the House of Menander and the House of the Cervi had very large rooms marked out by pediments on the outside walls, much like a fastigium, the apex or summit of a roof. When one considers that Caesar’s own house was joined to the Regia using a fastigium, an air of regality and importance is conveyed,[2] and these houses can be linked to some of the most important public and political buildings in Rome. It is as though the owners of these houses wanted to demonstrate that the house was inextricably linked with business, and thus wished to bring this part of life into the home, not only through practice but also architecture. Whilst it was important for the house to be seen as a place of business, Roman writers seem to have believed that work taking over the house was a negative thing, and that nature should be incorporated into the house too. Whilst the house was a place for business to be conducted, it was seen to be poor form to isolate and sequester oneself inside the study whilst working, and to stay inside was seen as an act of cowardice. To show a lack of engagement, as Cicero puts it, was nefarious.[3] Thus, whilst Vitruvius does state that it was important for men of rank, who held official positions and magistracies and who had a social obligation to their fellow citizens, to have libraries, picture galleries and basilicas, finished in a similar style to public buildings, it is equally important for them to have spacious atriums and peristyles, with plantations and walks to some extent in them.[4] Cicero agrees, saying that the paved colonnade gave his brother’s villa an air of great dignity, as did the fishpond, the fountains, the palaestra, and the shrubbery. To these writers, the incorporation of nature within the house was a crucial part of the ideology of being a good and honest businessman. The House of Octavius Quartio in Pompeii is a key example of this.
As can be seen in Fig. 1, the house has an enormous garden area, larger than the actual house itself. This house almost exactly fits the idealised description that Vitruvius provides – it has extensive dining areas and is lavishly decorated, whilst also having enough space for escaping from the world of work. A man’s house was also an important means of building up one’s political power and of expressing one’s existing power. Firstly, Cicero[5] tells us about one Gnaeus Octavius, the first consul of his family. He is said to have built an attractive and imposing house upon the Palatine, which was visible to everyone. Cicero posits that this helped to gain him votes in his canvass for the consulship. This seems to suggest that building impressive and ‘attractive’ houses was important for boosting one’s visibility within society and one’s political power. Indeed, the villa could be seen as a supreme symbol of an individual’s power and resources, and, at least to a modern audience, could be a symbol of brutal and unquestionable Roman power.[6] One’s house could also provide the opportunity to show off one’s lavish wealth. The Satyricon by Petronius provides great insight into this, describing in great detail the decorative choices that Trimalchio has made for his house in the city. He writes that a golden cage hung in the doorway with a magpie inside,[7] and that there were elaborate frescoes on the walls, one depicting Minerva taking Trimalchio, dressed as Mercury, to Rome, and another Mercury taking Trimalchio by the chin and leading him up to a high throne.[8] He also describes the overly extravagant choices of dining equipment Trimalchio provides, such as a bronze donkey with two panniers to hold olives, and two great silver dishes with Trimalchio’s name and their specified weights engraved into them.
These decorations are clearly luxurious and lavish, but Petronius looks upon them in a mocking light, seeing them as garish and disreputable rather than impressive. He writes that as everybody else kissed Trimalchio’s portrait, he himself was ashamed to even pass by it.[9] Indeed, there are conflicting views towards the showing-off of wealth. Seneca writes in his Moral Epistles[10] that people think of themselves as poor and mean if they do not show off everything they have; if their ceilings are not buried in glass, if their swimming pools are not lined with Thasian marble, or if their walls are not resplendent with mirrors. He seems to think of these expressions of wealth as blasphemous, comparing the pristine pools that were once found in temples to those now corrupted with the sweat of those who climb into them. Varro agreed, saying that in the first century BCE, when he was writing, a gymnasium each was not enough, and that people did not think they had a real villa unless it tinkled with the Greek names they attached to certain places.[11] Therefore, men were also motivated by the less ‘moral’ (at least to Seneca and Varro) need to show off their wealth and their extravagant lifestyles. The idea and practice of commemoration was a crucial one when it came to the decoration and design of houses. Scipio, the first attested villa owner, posited that his house was an extension of himself, and Seneca supports this by saying that the house is the reflection of a man’s character.[12] Indeed, the destruction of somebody’s house was seen to be damning of their memory, and by the first century CE it was seen as a deliberate and clear hostile act.[13] Many of these houses would have been handed down through generations of a family, and those who would come to own these properties would have grown up in them.
It is possible that they felt a sense of obligation to maintain these homes because of the familial links and loyalties that lay within the home. The lararium at the Villa of the Volusii Saturnini was filled with an odd and large collection of ancestral portraits that were passed through the generations,[14] and so the concept of familial pietas, dedication towards one’s family and ancestors, perhaps came into play here. The sense of duty they felt towards their family may have influenced the way they designed their houses, and also the way they decorated them. The assemblages of statues in both urban houses and rural villae, whilst perhaps being an attempt to create a mythical landscape and objects of interest, could also have been representative of the owners of a house, seeking to idealise them.[15] Verres, who was alive in the first century BCE, seized many statues made by Praxiteles, Myron and Polykleitos, three of the most skilled and famous Greek sculptors of their times, famous for their creation of the ‘ideal’ man (such as the Doryphoros statue by Polykleitos (Fig. 2) or the Diskobolos by Myron (Fig. 3 – a Roman marble copy)), and took them to his urban domus and his many villae.[16] When one links this with the idea of commemoration, perhaps these statues being representative of the past owners of the villa could reflect a longing to represent these people as idealised and provide them with the best memory possible. A key change that occurred across the shift from Republic to Empire was the restriction on public display, and so it makes sense that people sought other, more private ways to prolong or show off somebody’s memory.[17] The ideas and practices of tranquillity and peacefulness were also crucial to the design and decoration of elite Roman homes and villae.
Horti, the collective name for villae in the country, served as a place of separation from the business of city life, offering refuge and leisure to the owner.[18] Cicero writes in his Letters to his Friends that he has constructed some new sitting-rooms on his Tusculan property, and that he wishes to ornament them with frescoes, simply because he ‘take[s] pleasure in anything of that sort’.[19] He also writes to Quintus about the ‘admirable summer room’ that is being constructed,[20] showing that in any property, space for relaxation could be created or found. The Villa of Publius Fannius Synistor at Boscoreale, constructed shortly after 50 BCE, provides archaeological evidence for this: there was a fully heated bath complex on site, as well as a tranquil garden with marble fountains.[21] The reception rooms also contained life-sized figures frescoed upon the walls – Venus, Bacchus and the three Graces, a ‘triad of divine beauty’.[22] As well as providing enjoyable and relaxing surroundings for the owner, these frescoes perhaps also allowed for a closer sense of connection to the divine – the depiction of these deities on the wall of a frequently visited room perhaps brought a sense of religious tranquillity too.

Lastly, the changes in the decoration and design of these places of residence over time must also be discussed. The perception of luxury and its reception changed over time. Indeed, there is no one fixed way to define luxury; there can be no absolute standard. It is all relative to one’s personal taste, as well as the commodities available at the time.
Nevertheless, as aforementioned, luxury met with both positive and negative reactions: people such as the character of Trimalchio saw it as a positive, while others, such as Cato, writing in the second century BCE and outwardly boasting about the lack of stucco on his walls,[23] or Seneca, alive in the first century CE and bemoaning the materialistic nature of society,[24] saw over-extravagance as a negative. Indeed, political factors may also have come into play: laws restricting excess were passed under Augustus, who destroyed part of the house of the opulent Vedius Pollio[25] because he had become too infamous for his luxury-loving lifestyle. It seems that later, in the first century CE, there was a shift from luxury and extravagance to simplicity, with Pliny the Elder writing about how overindulgence led to moral corruption and protesting the use of a gold ring to mark equestrian status.[26] There appears to have been a shift back towards the ‘Greek style’, moving from the extravagant nature of Roman houses to the more restrained, simplistic and austere nature of those of Greece.[27]

To conclude, whether influenced by business activity, the balance between work and nature, relaxation, the act of commemoration and the representations of idealism that came with it, wealth and luxury, or austerity, the design and decoration of the housing of the Roman elite provides a clear view into the motivations, practices and ideas that society held, and into how these ideas and the reception of opulence changed over time.

Hannah Newman has just completed her first year of a BA in Classical Archaeology and Ancient History at the University of Oxford.

Notes:

[1] Vitruvius, De Architectura, 6.5.
[2] Andrew Wallace-Hadrill, ‘The Villa as Cultural Symbol’, in A. Frazer (ed.), The Roman Villa (Philadelphia, 1998), p. 19.
[3] Susan Treggiari, ‘The Upper-class House as Symbol and Focus of Emotion in Cicero’, Journal of Roman Archaeology 12 (1999), pp. 41-44.
[4] Vitruvius, De Architectura, 6.5.
[5] Cicero, On Duties, 1.138.
[6] Wallace-Hadrill, ‘Villa’, p. 43.
[7] Petronius, Trimalchio’s Dinner Party: Satyricon, 28.
[8] Ibid., 29.
[9] Ibid., 60.
[10] Seneca, Moral Epistles, 86.
[11] Varro, Res Rusticae, 2.1.
[12] John Bodel, ‘Monumental Villas and Villa Monuments’, Journal of Roman Archaeology 10 (1997), p. 5.
[13] Ibid., pp. 7-10.
[14] Ibid., p. 27.
[15] Kim J. Hartswick, The Gardens of Sallust: A Changing Landscape (Austin, 2004), pp. 17-18.
[16] Alessandra Lazzeretti, ‘Verres, Cicero and Other Collectors in Late Republican Rome’, in Maia W. Gahtan and Donatella Pegazzano (eds.), Museum Archetypes and Collecting in the Ancient World (Brill, 2015), p. 91.
[17] Bodel, ‘Monumental Villas’, p. 31.
[18] Hartswick, The Gardens of Sallust, p. 16.
[19] Cicero, Letters to his Friends, 7.23.
[20] Cicero, Letters to his Brother Quintus, 3.1.
[21] Bettina Bergmann, ‘New Perspectives on the Villa of Publius Fannius Synistor at Boscoreale’, Metropolitan Museum of Art Bulletin 67.4 (2010), p. 14.
[22] Ibid., p. 22.
[23] Cato, On Agriculture, fr. 175.
[24] Seneca, Moral Epistles, 86.
[25] Tacitus, Annals, 3.55.
[26] Pliny, Natural Histories, 33.
[27] Wallace-Hadrill, ‘Villa’, p. 21.