- From Ritual to Commodity: How Does the Study of Drugs Illuminate Early Modern Globalisation?
In a phenomenon historian David Courtwright termed the "psychoactive revolution," the seventeenth and eighteenth centuries witnessed the globalisation of drugs, transforming them from regional curiosities into essential parts of daily life around the world. [1] While the rise of global trade in the early modern era is often attributed to newly established connections and exotic commodities, factors beyond simple profit and transportability determined a substance's success. One such example is peyote, a psychoactive cactus native to Central America. Despite its ease of cultivation and transport, peyote never achieved widespread globalisation. Cultural frameworks and challenges associated with the distribution of mind-altering substances have increasingly been explored by scholars of this period. Benjamin Breen emphasised the impact of social and religious codes on the globalisation of Native American drugs like ayahuasca and peyote, illustrating their significance within broader early modern themes such as imperialism and the modernisation of commercial and political systems. [2] Matthew Romaniello and Lauren Working explored the cultural dimensions of tobacco's evolution from a foreign plant to a cash crop, focusing on its integration with the Western humoral system, gender-related aspects of tobacco use, and its transformation from an indigenous product to a staple of Western urban life. [3] Marcy Norton highlighted chocolate's role in examining power dynamics and cultural transmission in colonial settings, demonstrating its importance in understanding the complexities of early modern globalisation. [4] By examining peyote, tobacco, and chocolate as case studies of successful and unsuccessful globalisation, this essay contributes to the scholarship of Breen, Romaniello, Working, and Norton. It demonstrates how drugs can illuminate the religious and cultural frameworks of early modern globalisation.
Additionally, through a comparative analysis of tobacco's dissemination in early modern Russia and China, it highlights the limitations of using drug history to draw comprehensive conclusions about the emergence of global trade.

Peyote’s Trials and Christianity

Today, the word "drug" carries negative connotations, implying something harmful, addictive, and often illegal. In contrast, the early modern period used the term more neutrally, encompassing a wider range of commodities like chocolate, coffee, alcohol, tobacco, and opiates. [5] Before commercialisation, many commodities blurred the lines between food, medicine, and mind-altering substances. People believed these items all possessed the power to influence and alter the body. [6] Due to the power dynamics of early modernity, the dissemination of drugs recognised for their medical benefits was conditioned by their adaptation to European mindsets, particularly their compatibility with Christianity. Colonised territories were abundant with substances that had unique spiritual contexts and practices, distinct from those of the colonisers. The globalisation of hallucinogens used in religious rituals, such as ayahuasca and peyote, was significantly affected by these religious biases. In Amerindian rituals, peyote held high value for its ability to induce visions. These visions, believed to hold prophetic power, could offer insights into future events like weather patterns. Additionally, peyote was sometimes used in rituals to help identify the culprit behind a theft. [7] Western medicine, steeped in Hippocratic and Galenic traditions, struggled to accept plants for anything beyond curing ailments, poisoning enemies, or basic sustenance. [8] Moreover, peyote rituals did not reference Christ or his influence, contrasting sharply with Christian revelations. The ability to see and predict the future through the ingestion of a plant was therefore deemed superstitious and idolatrous.
Rather than viewing it as a natural phenomenon, Europeans often considered it a deception orchestrated by Satan. [9] Accordingly, Pérez de Ribas, a Spanish Jesuit, described the use of peyote in Coahuila (northern Mexico) as bringing ‘diabolical fantasies.’ [10] The Inquisition prohibited the use of peyote in 1620 due to its perceived conflict with the purity and integrity of the Catholic faith. [11] Any use or distribution of the drug was absolutely prohibited, with transgressors facing penalties equivalent to those imposed for heresy. [12] In contrast, Indigenous populations, particularly the Chichimeca, continued to revere and consume peyote. It became a powerful tool in their resistance to Christianisation, persisting in religious ceremonies in northern Mexico well into the late eighteenth century. [13] Therefore, peyote's potential for global commerce and dissemination was thwarted in favour of prioritising the spread of Christianity.

Tobacco, Medicine and Commodification

Not all drugs entangled in demonic discourse failed to achieve global acceptance. Tobacco, despite its Native American origins and initial negative perception, became widely popular due to its other properties, such as being a fumigant and a non-spiritual cure for diseases. Even as debates ensued over whether tobacco could alter European bodies and make them more like Native Americans, the substance adapted well to the European humoral system and was classified as having a dry heating effect on the body. [14] Nicolás Monardes' assertions that tobacco could treat a wide range of illnesses, particularly those caused by excess moisture, cold, or phlegm, led physicians to embrace the substance. [15] With the discovery of less potent strains of tobacco, its hallucinogenic effects became less accessible. Instead of being preoccupied with demonic associations, people's values shifted to focus on tobacco's psychoactive and medical properties.
[16] The study of peyote and tobacco offers profound insights into how Christianity influenced the globalisation of drugs, highlighting the interplay between religious beliefs and the acceptance or rejection of various substances across cultures. Tobacco's original cultural meanings were also transformed to meet European demands. Native rituals, deemed savage, had to be desacralised or erased, leaving only the raw materials. Tobacco was then infused with new meanings and associations to integrate and 'civilise' it within European society. [17] In early modern England, the habit of smoking tobacco was popularised by Sir Francis Drake and Walter Raleigh, whose status as English explorers and heroes became intertwined with the image of pipe smokers. [18] Among the elites, publications such as Henry Butts’ Dyets dry dinner attempted to incorporate tobacco into formal dining etiquette as the last course following fruits, fish, white meats, etc. [19] A similar transformation occurred in the material culture of tobacco, where the status of a gentleman became associated not only with what he smoked but also with the accessories he carried. Intricate engravings of imperial and English heraldic symbols on pipes, pipe cases, and snuff boxes reinforced the idea of English domestication and a sense of ownership over the product. Alongside coffee and chocolate, tobacco became an integral part of the emerging English public sphere, from coffeehouses to social gatherings. [20] The transformation of tobacco from an indigenous substance to an 'English' commodity illustrates the cultural dimension of globalisation, marked by the separation and recontextualisation of indigenous substances before their integration into European cultures.

The Taste of Chocolate

In contrast to modern perceptions, early modern people viewed chocolate as much a drug as tobacco.
Although the components found in cocoa, such as the methylxanthine theobromine, along with fats and sugars, were not fully understood at the time, they can stimulate the brain to produce effects similar to those of opiates. [21] Cocoa was highly valued in Indigenous societies across America and used for medicinal purposes and ceremonial celebrations, like marriages or military victories. [22] Historically, scholars like Alan Davidson and Sophie and Michael Dobzhansky Coe attributed chocolate's success in Europe and worldwide to its adaptation to fit the European palate. This modification of taste promoted widespread consumption through increased familiarity. [23] Unlike tobacco, chocolate drinking did not conform to the conventional distinction between "barbarous" native customs and "civilised" European customs. Enjoyed at social gatherings primarily by the elite, chocolate drinking was likened to wine consumption in Europe. [24] Like other substances of its time, cacao was incorporated into the humoral medical framework and prescribed by Spanish doctors for ailments such as stomach aches, fevers, and indigestion. [25] Originally perceived as bitter and sharp, attributed to the inclusion of chili, the taste of chocolate was appropriated and transformed through sweetening with sugar, cinnamon, milk, and spices such as black pepper and anise, which were not traditionally used by Indigenous populations. [26] Chocolate, similar to tobacco, achieved global success when it was adapted to European practices: its flavour profile was modified, and Western medical discourse supplanted Indigenous symbolism. Marcy Norton offered a critique of this perspective. She argued that analysing the history of chocolate and its adoption by the Spanish reveals that they did not aim to erase its Indigenous origins or reshape it solely to fit their preferences.
The widespread consumption of chocolate originated with Indigenous peoples, and Europeans not only acquired a taste for their methods of chocolate preparation but also endeavoured to replicate them. [27] Norton emphasised that the Spanish internalised Indigenous tastes as a key factor in understanding the subsequent high demand among Europeans for Amerindian stimulants. [28] Despite the victories of colonisers over Indigenous societies and the devastating impact of diseases like smallpox on Native Americans, the Spanish remained a minority presence in America during the sixteenth century. They found themselves immersed in the enduring cultures and practices of the pre-Columbian era, which persisted and surrounded them daily. Indigenous women fostered chocolate consumption as those responsible for food preparation in Spanish-Native American households. As the strategy of conquest through intermarriage advanced, an increasing number of Spaniards grew accustomed to the Amerindian taste for chocolate. [29] At the same time, chocolate became a popular and readily purchasable commodity in both Amerindian village and city marketplaces. Despite these spaces being viewed as native domains, Europeans were also welcomed to attend and participate in the trade of chocolate. [30] The evolution of chocolate preparation methods stemmed from the challenges posed by long-distance travel, as not all ingredients could be safely transported around the world. With a growing appreciation for chocolate, the Spanish aimed to replicate the experience in Europe rather than radically change it. Sweetening the drink was not a new concept, as it had already been practised by Native Americans. The substitution of sugar for honey helped preserve the desired taste. Similarly, chili was replaced with black pepper to recreate the original bitterness and spiciness of the drink.
[31] The history of chocolate provides valuable insights into the power dynamics of globalisation and its cultural complexities. It underscores that colonisation was not a unilateral process; interactions between colonisers and the colonised led to the incorporation of Native American cultures into Spanish culture and their subsequent dissemination worldwide.

The Limitations

While studying the dissemination of drugs offers valuable insights into globalisation processes and the dynamics of power, it also has its limitations. These limitations become evident when over-generalisations occur, potentially creating a distorted view of the global landscape. This issue is particularly pronounced when examining the state of globalisation in early modern Russia and China through the lens of the tobacco trade. During the late Ming and Qing periods, tobacco was imported into China by maritime traders from various regions worldwide. By the early 1600s, tobacco had become firmly established as a commercial crop, particularly in the coastal regions of Fujian, despite initial regulatory measures by the administration. [32] Within twenty years, tobacco had been successfully acclimatised and integrated into China's exports, particularly to regions inhabited by Mongols, Uzbeks, and Kazakhs. This development not only contributed to government revenue but also bolstered trade opportunities for merchants. [33] Yellow flower tobacco, notably linked to the region of Lanzhou, became a prized local commodity across Eurasian territories, almost attaining native status despite its non-Indigenous origins. Contrary to widespread belief, it did not originate on the continent but was introduced to China by European traders who encountered it through interactions with Amerindian farmers. The renowned Lanzhou strain can be traced back to the Andes region. [34] China's pivotal role in the early modern global trade system is underscored by its extensive involvement in the tobacco trade.
In contrast, Muscovite Russia appears isolated from the global tobacco economy. Rooted in Orthodox religious doctrine and wary of foreign markets, the Russian authorities opposed tobacco for centuries. Muscovite doctrine opposed foreign customs and habits, which were perceived as chaotic. [35] The lack of biblical references to tobacco hardened the opposition of Muscovy, a state that centred its identity on the church. Tobacco possession became illegal and in some cases was punishable by death. [36] This does not imply that tobacco was non-existent in its territories. It was smuggled, but its significance remained minimal because opposition to tobacco extended beyond the authorities to the general population. Despite Peter the Great legalising the tobacco trade in an attempt to modernise Russia, it ultimately became a tool used in opposition to the Tsar's politics. [37] Due to Russia's Orthodox approach and opposition to foreign markets, until the nineteenth century the tobacco trade did not reach the cultural and economic significance it did in other parts of the world. Based on the trade of tobacco, it might initially appear that China played a more significant role in early modern globalisation compared to Russia. However, such a broad generalisation can lead to misleading conclusions. Robert Brenner's study of early modern London merchants offers a nuanced perspective, revealing Russia's profound integration into the global trade network. London merchants involved with the Muscovite Company, as well as those engaged in trade with the Levant and the East Indies, were also heavily invested in Russia, particularly in the lucrative cloth trade. This interconnectedness highlights Russia's substantial role in early modern global commerce, challenging simplistic comparisons based solely on the tobacco trade.
[38] Furthermore, Maria Salomon Arel's research on the Muscovy Company underscores Russia's pivotal and proactive role in the burgeoning globalised economy of the early modern period. Through its supply of high-quality shipbuilding materials like flax and hemp, Russia contributed significantly to the prosperity of England's shipbuilding industry. This trade not only facilitated the expansion of England's maritime capabilities but also underpinned sustained intercontinental trade. The superior quality of Russian flax and hemp made them indispensable resources for ship construction, highlighting Russia's crucial economic and strategic importance in the emerging globalised world of the era. [39] Also notable is Russia's active involvement in the trade of medicinal drugs such as rhubarb and cinchona bark, evidenced by significant shipments from the latter half of the seventeenth century. [40] Despite not participating extensively in the tobacco trade, Russia engaged robustly in the broader early modern medical trade, maintaining connections with Europe, particularly the English and Dutch, as well as with America and Asia. Russian participation in early modern globalisation, while perhaps not as prominent as that of Spain or England, was nevertheless interconnected with global networks. However, an examination of tobacco's history in the region reveals a distinct perspective. When using the history of drugs to illustrate the processes of early modern globalisation, it is crucial to exercise caution and avoid oversimplification. A country's contribution to this transformative process cannot be adequately assessed through the lens of a single product alone.

Conclusion

Historical contexts of drugs enhance our understanding of the globalisation process. The examples of peyote and tobacco demonstrate that the acceptance and commercialisation of stimulants required more than just medical approval; they had to conform to European religious and cultural frameworks.
Peyote's hallucinogenic properties, which European colonisers associated with demonism, led to its suppression rather than popularisation. In contrast, tobacco, once stripped of its demonic associations, had to be detached from its indigenous cultural meanings, becoming a raw substance that could be recontextualised and domesticated within European society. The study of drugs also sheds light on the dynamics of colonisation and cultural adoption. The popularisation of chocolate among the Spanish illustrates that the colonial process was not a one-sided imposition of European culture. Instead, colonisers adopted indigenous tastes and habits, spreading them worldwide. However, using drug history to gain insight into globalisation has limitations. Focusing on one substance alone cannot provide a comprehensive picture. Yet, because drugs played an integral role in early modern society, their stories, when critically analysed, can significantly enrich our understanding of early modern globalisation.

Sandra Liwanowska has just completed an MPhil in the History and Philosophy of Science and Medicine at the University of Cambridge (Clare Hall College).

Notes:

[1] David T. Courtwright, ‘Introduction’, in Forces of Habit: Drugs and the Making of the Modern World (Cambridge, Mass.: Harvard University Press, 2001), pp. 1-9; also discussed in Mike Jay, ‘The Psychoactive Revolution’, in High Society: Mind-Altering Drugs in History and Culture (London: Thames & Hudson, 2010), p. 143. [2] Benjamin Breen, ‘Drugs and Early Modernity’, History Compass, 15.4 (2017), pp. 1-9; Benjamin Breen, ‘The Failed Globalisation of Psychedelic Drugs in the Early Modern World’, The Historical Journal, 65.1 (2022), pp. 12-29. [3] Lauren Working, ‘Tobacco and the Social Life of Conquest in London, 1580-1625’, The Historical Journal, 65.1 (2022), pp. 30-48; Matthew P. Romaniello, ‘Who Should Smoke?
Tobacco and the Humoral Body in Early Modern England’, The Social History of Alcohol and Drugs, 27.2 (2013), pp. 156-173. [4] Marcy Norton, ‘Tasting Empire: Chocolate and the European Internalisation of Mesoamerican Aesthetics’, The American Historical Review, 111.3 (2006), pp. 660-691. [5] Breen, ‘Drugs and Early Modernity’, p. 3. [6] Breen, ‘Drugs and Early Modernity’, p. 3; Courtwright, Forces of Habit, pp. 2-3; on the humoral aspects of early modern drug consumption: Phil Withington, ‘Addiction, Intoxicants, and the Humoral Body’, The Historical Journal, 65.1 (2022), pp. 68-90. [7] Omer C. Stewart, ‘Peyote Eaters and Their Ceremonies’, in Peyote Religion: A History (University of Oklahoma Press: Norman, 1987), pp. 24-25. [8] Angélica Morales Sarabia, ‘The Culture of Peyote: Between Divination and Disease in Early Modern New Spain’, in Medical Cultures of the Early Modern Spanish Empire, ed. by John Slater, María Luz López Terrada, and José Pardo Tomás (Surrey, England; Burlington, Vermont: Ashgate, 2014), p. 28. [9] Morales Sarabia, Medical Cultures of the Early Modern Spanish Empire, pp. 27, 29. [10] Andrés Pérez de Ribas, ‘Missions of the Central Plateux in Mexico’, in My Life Among the Savage Nations of New Spain, trans. by Tomas Robertson (The Ward Ritchie Press: Los Angeles, California, 1968), p. 227. [11] Morales Sarabia, Medical Cultures of the Early Modern Spanish Empire, pp. 29-30. [12] Stewart, Peyote Religion: A History, p. 21. [13] Philip W. Powell, ‘Warriors in the North’, in Soldiers, Indians & Silver: North America's First Frontier War (Arizona State Univ., Centre for Latin American Studies, 1975), p. 42; Omer C. Stewart, Peyote Religion: A History, pp. 26-28; Peter T. Furst, ‘"Idolatry," Hallucinogens, and Cultural Survival’, in Hallucinogens and Culture (Chandler & Sharp Publishers, 1976), p. 19. [14] Romaniello, ‘Who Should Smoke?’, pp.
158-159, 161; for more on debates about food’s alleged properties to transform Spanish and Indian bodies: Rebecca Earle, ‘Humoralism and the Colonial Body’, in The Body of the Conquistador: Food, Race and the Colonial Experience in Spanish America, 1492-1700 (Critical Perspectives on Empire) (Cambridge: Cambridge University Press, 2012), pp. 41-47. [15] Nicolás Monardes, Joyfull Newes out of the Newe Founde Worlde, trans. by John Frampton (1577), Fol. 34-35; Romaniello, ‘Who Should Smoke?’, p. 159. [16] Courtwright, Forces of Habit, pp. 56-57. [17] Working, ‘Tobacco and the Social Life of Conquest in London, 1580-1625’, p. 43. [18] Romaniello, ‘Who Should Smoke?’, pp. 165-166. [19] Henry Butts, Dyets dry dinner consisting of eight seuerall courses: 1. Fruites 2. Hearbes. 3. Flesh. 4. Fish. 5. whitmeats. 6. Spice. 7. Sauce. 8. Tabacco. All serued in after the order of time vniuersall. By Henry Buttes, Maister of Artes, and fellowe of C.C.C. in C. (Printed in London: By Tho. Creede, for William Wood, and are to be sold at the west end of Powles, at the signe of Tyme, 1599), https://quod.lib.umich.edu/e/eebo/A17373.0001.001?rgn=main;view=fulltext, [Accessed 22 March 2022]. [20] Working, ‘Tobacco and the Social Life of Conquest in London, 1580-1625’, pp. 42, 48. [21] Norton, ‘Tasting Empire’, p. 668. [22] Philippe Nondedeo, ‘Cacao in the Maya world: feasts and rituals’, in ‘Chocolate: Cultivation and Culture in Pre-Hispanic Mexico’, ed. by Margarita de Orellana, Richard Moszka, Timothy Adès, Valentine Tibère, J.M. Hoppan, Philippe Nondedeo, and others, Artes de México, 103, 2011, pp. 73-75. [23] Alan Davidson, ‘Europeans’ Wary Encounter with Potatoes, Tomatoes, and Other New World Foods’, in Chilies to Chocolate: Food the Americas Gave the World, ed. by Nelson Foster and Linda S. Cordell (Tucson, Ariz., 1992), p.
3; similar views were presented in Sophie Dobzhansky Coe and Michael Dobzhansky Coe, ‘Encounter and Transformation’, in The True History of Chocolate, 3rd edition (New York: Thames and Hudson Ltd, 2013), pp. 89-103. [24] Ina Baghdiantz McCabe, ‘American Gold: Sugar, Tobacco and Chocolate’, in A History of Global Consumption: 1500-1800 (Oxfordshire, England; New York, New York: Routledge, 2015), p. 67; Marcy Norton, ‘Encountering Novelties’, in Sacred Gifts, Profane Pleasures: A History of Tobacco and Chocolate in the Atlantic World (Ithaca, New York; London: Cornell University Press, 2010), pp. 55-59; Norton, ‘Tasting Empire’, p. 669. [25] Amanda Lange, ‘Chocolate preparation and serving vessels in early North America’, in Chocolate: History, Culture, and Heritage, ed. by Louis E. Grivetti and Howard-Yana Shapiro (John Wiley and Sons: Hoboken, NJ, 2009), p. 129; Margaret A. Graham and Russell K. Skowronek, ‘Chocolate on the Borderlands of New Spain’, International Journal of Historical Archaeology, 20.4 (2016), p. 649. [26] Sophie Dobzhansky Coe and Michael Dobzhansky Coe, ‘Encounter and Transformation’, in The True History of Chocolate (New York: Thames and Hudson Ltd., 2013), p. 95; Norton, ‘Tasting Empire’, p. 684. [27] Norton, ‘Tasting Empire’, p. 660. [28] Ibid., p. 670. [29] Pedro Carrasco, ‘Indian-Spanish Marriages in the First Century of the Colony’, in Indian Women of Early Mexico, ed. by Susan Schroeder and Robert Haskell (Norman, Okla., 1997), p. 88. [30] Norton, ‘Tasting Empire’, p. 678. [31] Ibid., pp. 660, 681-684. [32] Carol Benedict, ‘Early Modern Globalisation and the Origins of Tobacco in China, 1550-1650’, in Golden-Silk Smoke: A History of Tobacco in China, 1550-2010, 1st ed. (Berkeley: University of California Press, 2011), p. 47. [33] Carol Benedict, ‘Introduction’, in Golden-Silk Smoke: A History of Tobacco in China, 1550-2010, 1st ed. (Berkeley: University of California Press, 2011), p.
15; Benedict, ‘Early Modern Globalisation and the Origins of Tobacco in China, 1550-1650’, p. 60; Timothy Brook, ‘School for Smoking’, in Vermeer's Hat: The Seventeenth Century and the Dawn of the Global World (London: Profile Books, 2009), pp. 119-123. [34] Benedict, ‘Early Modern Globalisation and the Origins of Tobacco in China, 1550-1650’, p. 61. [35] Matthew P. Romaniello, ‘Through the Filter of Tobacco: The Limits of Global Trade in the Early Modern World’, Comparative Studies in Society and History, 49.4 (2007), pp. 918-920. [36] ‘In the past year 1633/34 by the decree of the great Sovereign, Tsar, and Grand Prince of all Russia Mikhail Fedorovich of blessed memory, a strict prohibition on tobacco was enacted in Moscow and in the provincial towns on pain of the death penalty, that Russians and various foreigners were not to keep tobacco in their possession anywhere, to sniff it, or to trade in tobacco.’ Quoted from Richard Hellie, ‘Chapter 25. Article 11’, in The Muscovite Law Code (Ulozhenie of 1649), Pt. 1: Text and Translation (C. Schlacks Jr.: Irvine, California, 1988), http://individual.utoronto.ca/aksmith/resources/ulozh.html, [Accessed 21 March 2022]. [37] Romaniello, ‘Through the Filter of Tobacco’, pp. 929-933. [38] Robert Brenner and American Council of Learned Societies, ‘Government Privileges, the Formation of Merchant Groups, and the Redistribution of Wealth and Power, 1550-1640’, in Merchants and Revolution: Commercial Change, Political Conflict, and London's Overseas Traders, 1550-1653 (London; New York: Verso, 2003), p. 79. [39] Maria Salomon Arel, ‘Introduction’, in English Trade and Adventure to Russia in the Early Modern Era: The Muscovy Company 1603-1649 (Lexington Books: New York, 2019), p. 2; Artur Attman, ‘The Russian market in world trade, 1500-1860’, Scandinavian Economic History Review, 29.3 (1981), p. 185.
[40] Clare Griffin, ‘Russia and the Medical Drug Trade in the Seventeenth Century’, Social History of Medicine: The Journal of the Society for the Social History of Medicine, 31.1 (2018), pp. 16-17. For more on the history of the rhubarb trade and the connections it formed between China, Russia and Europe: Clifford M. Foust, Rhubarb: The Wondrous Drug (Princeton University Press: Princeton, New Jersey, 1992).
- Popular Culture, Collective Memory & 'Great (Wo)man History'?: Decoding a Nineteenth-Century Scottish Biograph of Joséphine de Beauharnais
The most recent major addition to the popular cultural canon of the Napoleonic period is, at the time of writing, Ridley Scott’s historical epic Napoleon (2023). Overlooking the furore generated by its sensationalist representation of the era, the film is interesting for its original narrative angle (if not its historicity). Nominally a biopic, Scott’s history really pivots on the relationship of Napoleon and his first wife, Joséphine de Beauharnais. The Empress’s centrality here can be contrasted with her more modest presence in Gance’s landmark Napoléon (1927). She does not feature at all in Bondarchuk’s War and Peace (1967) nor Waterloo (1970). [1] Much more can be said about the medium of film; however, depictions (or omissions) of Joséphine in any popular media say something about her varying placement in cultural memory: she can be central or peripheral, active or passive, remembered or forgotten. Nevertheless, the abiding tropes of how we remember the Napoleonic period and its actors have determined the possibilities of her representation. As the wife of the ‘central’ figure of the period, her role has been defined by this relationship, and thus by patriarchal popular conceptions of marriage, gender and power. This allows for her cultural pigeon-holing either as an aloof, sidelined figure simply present to socially and sexually gratify Napoleon, or as an ominous, withdrawn schemer who has renounced her femininity to subvert patriarchal power, to take two extremes. An analysis of any source concerning Joséphine could be grounded solely in the relationship between the nuances of her representation and the discourses from which they are constructed. Such an analytical methodology in isolation, however, would be a waste of a text like The History of the Empress Josephine, as it would revolve around the subject and the author while ignoring the audience – in this case the consumers of popular history in mid-nineteenth-century Britain.
There has been enough ‘high history’ written concerning the universe of the elite personalities of the Napoleonic period to justify better attending to non-elite groups like these. This article’s analysis is predicated on two conceptual steps to achieve this. Firstly, this article borrows from the subdiscipline of ethnographic history in working against-the-grain to access this chapbook’s audience through its text and, primarily, its context. Here, it is impossible not to note the influence of Robert Darnton, whose against-the-grain readings of popular texts in pre-Revolutionary France have been foundational to ethnographic history and pivotal in cultural history. [2] Secondly, it takes the access granted by ethnographic history and applies methods from memory studies to analyse how the chapbook’s readers and writers grappled with the memory of the Napoleonic period and its ‘ensemble personalities’ as they passed from the political now to the historical then. In doing this, we circle back to Joséphine herself, her equivocal place as a ‘great woman’ in a ‘great man history’, and the ambivalent implications of this for her popular cultural memory. What does it mean for history, literature, music or film to be popular? Given this article’s predication on such a thing as popular culture, it is worth justifying its existence outright. The notion presupposes that separate social spheres have distinct cultural realms: that the domain of popular culture is the preserve of a non-elite social category, defined in contrast with an elite one. Insofar as there is a non-elite sphere, it necessarily has a cultural domain as a product of its shared experience. [3] The problem lies in drawing its borders.
In the nineteenth century’s memorial field of the Napoleonic period, what constitutes the cultural repositories of these spheres is easy to conceive: ‘elites’ consumed the paintings of David, the writings of Clausewitz, Constant and Carlyle, and the music of Beethoven; ‘non-elites’ consumed Dumas and Dickens, cartoons and propaganda, and whatever other irretrievable unknowns circulated in the translucency of the masses. But this is a dichotomy that deserves some criticism, as illustrated by the fact that several of these examples were not always in the categories we now place them in. Elite and popular culture do not exist independently; rather, they share a blurred and contested boundary and a continuous interchange. [4] Such a conception was first articulated by Antonio Gramsci, who demonstrated how popular culture’s nebulosity was a product of its state as a ‘compromise equilibrium’ between resistive and oppressive forces vying for hegemony. [5] This framework is useful for us in that it accepts the existence of such a thing as a ‘popular culture’, while accommodating the fluidity with which historical, biographical or literary texts belong to it. Chapbooks, a diverse and widespread form of ephemeral street literature, are a good example of a medium of popular culture in nineteenth-century Britain. [6] The History of the Empress Josephine, the Consort of Napoleon Bonaparte is a 24-page chapbook first printed in Edinburgh between 1839 and 1858. [7] It was published by James Brydone, a local printer whose corpus consists of lowbrow, predominantly biographical popular histories, covering figures such as William Wallace, Columbus, and Guy Fawkes. [8] The author is ultimately unknown; however, there are several indicators as to how Brydone’s account was constituted. Firstly, this chapbook relies heavily on at least one contemporary biography of Joséphine: John Smythe Memes’ Memoirs of the Empress Josephine (1835).
Memes, who was himself Scottish, is mentioned three times in The History of the Empress Josephine, and there are some direct quotations from his book. [9] There are even some parts of Brydone’s which are taken verbatim from Memes’ without quotation, including an identical metaphor likening Joséphine’s effect on Napoleon to ‘the harp of David playing on the chest of the king of Israel.’ [10] How involved Memes was in the production of this chapbook, or whether he was involved at all, is impossible to ascertain. It is also difficult to gauge whether any other books were used. Theoretically the Francophone volume Mémoires sur l’Impératrice Joséphine (1828) by Georgette du Crest, or the English-translated Mémoires historiques et secrets de l'impératrice Joséphine (1820) by Marie Anne Lenormand, could have been consulted without quotation or plagiarism. [11] But what is clear is that Memoirs of the Empress Josephine greatly influenced The History of the Empress Josephine, so much so that the latter may simply be a condensed and streamlined version of the former. That chapbooks were often reproductions of professional publications attests to the fluidity of the relationship between elite and non-elite cultural fields. However, the generation of popular ephemeral histories was not an arbitrary cloning process. The compilation and repackaging of historical accounts to appeal to a non-elite readership required Brydone and others like him to be attuned to the demands of customers, and creative in the ways they met those demands in a swelling, competitive market. It has been easy for historical and literary academics to sneer at popular literature whose creators had access to neither an abundance of primary and secondary sources nor any concept of academic conduct and intellectual property. But more recent scholarship has recognised that chapbooks and similar media are valid members of their contemporary intellectual nexus, simply in an adapted context. 
[12] If anything, Brydone and others like him were performing an important function in expanding the reach of professional history-writing beyond the horizons of aristocratic or gentrified readerships. This is not to say, however, that the dissemination of history from elite to non-elite spheres was the prerequisite for the existence of history’s popular dimension. Such a model of ‘trickle-down history’ neglects the capacity of non-elites to make organic, original contributions to the cultivation of the past. As little as we know about The History of the Empress Josephine, its archival survival hints that it was not a commercial failure, and hence that there was a historical appetite among the masses. Its very circulation, significant or not, attests to the fact that Brydone thought there was a market for a biography of a French Empress in mid-nineteenth-century Edinburgh. This fact – the thing that makes the chapbook striking in the first place – is the entry point into the historical imaginations of the people it was written for. Its readers were not passive recipients of repackaged ‘high history’, but customers with an autonomous stake in history-writing. This stake points to an animate collective memory of the events and figures of recent Napoleonic history in this seemingly detached community. In the late 1980s, Pierre Nora originated the term lieux de mémoire to describe things – in the most encompassing sense of the word – that exist at the junction of history and memory. Nora articulated lieux de mémoire as ‘material, symbolic and functional’ artefacts that preserve something of the past that would otherwise be lost, serving as containers of collective memory. [13] Innovations in France were contemporarily mirrored in Germany, where the cultural historians Aleida and Jan Assmann were formulating a distinction within collective memory between what they termed ‘communicative’ and ‘cultural memory’. 
The former refers to memory as it is transmitted socially, most typically orally. It does not require its invokers to have a literal biological recollection of the subject, only an interpersonal connection to it. It is necessarily fleeting: Assmann and Assmann speculate that communicative memory does not last longer than a century. [14] The latter refers to memory as it is crystallised through cultural media such as texts, objects, rituals, institutions, praxes and art. Mnemonics such as these are still ‘memory’ in the sense that they, like exchanges of communicative memory, are a basis for the production of collective identity. [15] These mnemonics are lieux de mémoire in all but name. Cultural memory does not have a shelf life because it is memory that has been made concrete and isolated from the remembering collective. [16] Therefore, collective memory, if it is to survive, must manifest itself sufficiently in the collective’s culture before its social communication has faded. That memory can be cultural is the reason an ethnographic approach to memory studies is viable. The History of the Empress Josephine is a lieu de mémoire and an object of cultural memory in that it is a product of the crystallisation of collective memory. It was published between 1839 and 1858, while Joséphine lived from 1763 to 1814, meaning that the displacement between the memorial subject and the objectified mnemonic is between 25 and 95 years (discounting however long the chapbook circulated after publication). The chapbook’s material ‘life’ thereby coincides with the decay of the communicative memory of the Napoleonic period. Like other Europeans of their generation, Edinburghians would have been highly familiar with the wars of 1793-1815 thanks to the unprecedented extent to which they subsumed society across the continent. 
When George Steiner wrote in 1971 that ‘it is the events of 1789 to 1815 that interpenetrate common, private existence with the perception of historical process’, he meant that perceiving oneself as an external spectator to history was becoming impossible from the turn of the nineteenth century. [17] Steiner’s assertion is an articulation of the Napoleonic Wars as the first ‘total war’ before the term was applied. [18] Many mid-nineteenth-century Britons had experienced war more acutely than any people ostensibly removed from it had ever before, which, coupled with improving literacy and better access to literary resources, created a cultural climate conducive to the proliferation of lieux de mémoire like The History of the Empress Josephine. [19] But the ‘total’ political immersion of the Napoleonic period itself was never going to last into the atmosphere of relative nonbelligerency that followed 1815. Edward A. Freeman’s adage ‘history is politics past; politics is history present’ is helpful for imagining the postwar demobilisation of Europe’s popular politics. In wartime, British street literature was populated by cartoons, caricatures and propaganda; in peacetime, these were more or less replaced with a diversity of somewhat new forms, including literary-historical and biographical ones. [20] Representations passed from the political into the historical. This is demonstrated in The History of the Empress Josephine, where the subject is presented in a mostly neutral, if not broadly positive, light. Stylistically, the chapbook could, in parts, pass for a favourable modern biography. Here, Joséphine is a ‘distinguished lady’ and a ‘gentle spirit’; she is an avid reader, a talented dancer, musician and knitter, not to mention an expert botanist; she is even imbued with prophetic powers! [21] More importantly, Joséphine’s character is constructed with self-standing agency. 
She is not merely the passive centre around which the narrative arc is scaffolded; rather, she is a dynamic actor in its unfolding: she is able to evaluate and decide on Napoleon’s qualities as a husband, to boldly claim that she ‘will yet be Queen of France’, and to have the ‘masculine spirit’ of Lady Macbeth. [22] The chapbook’s presentation of Joséphine in this way can be contrasted with the ways her character was constructed in wartime. One cartoon from 1804 shows Joséphine’s life in a series of vignettes, including as ‘A Prisoner’, as ‘[Paul] Barras’s Mistress’ and as ‘A Loose Fish’ – a euphemism for a promiscuous woman. [23] Another from 1805 depicts her dancing naked before Barras, while an 1806 etching shows the Emperor and Empress being shuttled to hell by a host of demons. [24] The catalogue of Joséphine’s representations as unladylike, licentious and outright evil is expansive, which raises the question of why and how these transmuted into the more neutral, even commendatory, representations espoused by the likes of The History of the Empress Josephine. Put another way, how did the political now become the historical then? The answer lies in Assmann and Assmann’s equation of communicative and cultural memory. It is significant that Joséphine had divorced Napoleon in 1810 and died in 1814. Each of these events went some way to distance her from the kaleidoscopic political theatre of wartime Europe, reflected in her subsequent disappearance from propagandistic representations on all sides. Joséphine without Napoleon had lost political currency on either side of the Channel; her stock position – with all the possibilities of representation it entailed – was passed on to the next Empress, Marie Louise. In the popular domain there was no reason to culturally retain the undermined construct of Joséphine’s personality, and she hence became a representational blank canvas as soon as her communicative memory died. 
This death was doubtless expedited by the lasting peace of 1815, which meant that the social collective in Edinburgh or anywhere else could move on from wartime discourse. This article has established that ‘popular culture’ can exist, however permeable it might be, and has superficially employed the concept through terms like ‘non-elites’, ‘the popular domain’, ‘the masses’, ‘the people’, ‘Edinburghians’ and ‘Britons’. But these terms deserve further interrogation in the context of The History of the Empress Josephine. In short: who was reading our chapbook? We know that chapbooks were a fundamental element of the booming street literature industry in Edinburgh throughout the nineteenth century. [25] This, and the survival of much of Brydone’s oeuvre despite its ephemerality, has underpinned this article’s assumption that the chapbook was, to some degree, popular. Street literature’s commercial base, thanks to its inexpensiveness, lay in the lower socioeconomic strata of urban Britain, so our chapbook’s readership was presumably the same. [26] Beyond these limited conclusions, only against-the-grain reading can access the readership further. Further research is needed into the gendered dynamics of street literature consumption, and The History would be an interesting source to this end. As a biography of a woman, it is rare among contemporary chapbooks and seemingly unique in Brydone’s catalogue of publications. For these reasons, it is possible that this chapbook was written and printed with a female audience in mind. It is arguable that, despite her undeniable imbuement with masculine qualities at points within the narrative, the account never fails to return to Joséphine’s quintessential femininity. She is, ultimately, ‘gentle and elegant’, full of ‘gratitude and tenderness’, and of ‘feeble character’. 
[27] Whether such patriarchal representations appealed to nineteenth-century British women is, unsatisfyingly, a question simultaneously more specific and more expansive than the scope of this article. Indeed, it might be the case that Joséphine’s masculinities were written with the female readership in mind. Or even, and perhaps most compellingly, what might have appealed to women was the fusion of requisite placidity with subversive masculinity in a formula customised by overarching discourses of power and gender. In any case, how far re-reading The History with a gendered lens can reveal anything about how many women read it, or whether this extrapolation is against-the-grain reading taken too far, is debatable. A concluding hint on this point can be taken from outside the popular domain. In the mid-nineteenth century, Joséphine was a common character in media consumed by upper-class women. One magazine piece from 1841 bears an uncanny resemblance to our contemporary chapbook, in that it is a biography of Joséphine with similar structure, themes and events. [28] Given it could owe its account to sources overlapping with The History’s, its attestation that there was an elite female interest in Joséphine’s life might be extendable to non-elites. Such conclusions are as exciting as they are speculative. As far as there is a memory of the Napoleonic period in Britain today, it is typified by a particular kind of militaristic, testosterone-fuelled and teleological conception of generals, soldiers, battles and campaigns. As Étienne François writes, the era ‘has come to be seen as an inexhaustible fund of adventure, military glory... heroism and tragedy.’ [29] The nature of this memory is best explained by its incubation period of the long nineteenth century. The historiography of the Napoleonic period in this time – aside from a type of technical, inflexible military history which has since been rendered anachronistic – was dominated by personalities. 
This is epitomised by the eponymous personality of the period, who was a major inspiration in the development of a kind of history which used major individuals as both the frames and units of its analysis. [30] Joséphine was certainly a major personality of the period; however, she does not easily conform to ‘great man history’, for obvious reasons. Though much of The History of the Empress Josephine is reminiscent of a Carlylean biography, it ultimately embodies her ambivalent cultural memory. Joséphine was formerly a political foe but is now a historical artefact: a representational blank canvas for popular culture to paint over. The History reveals the possibilities available in doing so. Joshua Redden is currently in his 2nd year of a BA in History at Warwick University. Notes: [1] Of course, Joséphine’s omission in War and Peace is a decision on the part of Tolstoy, rather than Bondarchuk. Moreover, some might argue that it would be inappropriate for Joséphine to feature in Waterloo, given she died before the events depicted. However, her complete absence (in physicality or name) from Bondarchuk’s constructed universe is an artistic decision. [2] Robert Darnton, The Great Cat Massacre and Other Episodes in French Cultural History (New York, NY: Basic Books, 1984); Robert Darnton, The Literary Underground of the Old Regime (Cambridge, MA: Harvard University Press, 1982). [3] John Storey, Cultural Theory and Popular Culture: An Introduction (New York, NY: Pearson, 1997), p. 5. [4] Ibid., p. 10. [5] Antonio Gramsci, ‘Hegemony, Intellectuals and the State’ in John Storey (ed.), Cultural Theory and Popular Culture: A Reader (Harlow: Pearson, 2009), p. 161. [6] Adam Fox, ‘“Little Story Books” and “Small Pamphlets” in Edinburgh, 1680-1760: The Making of the Scottish Chapbook’, The Scottish Historical Review, 235 (2013), p. 207. [7] The History of the Empress Josephine, the Consort of Napoleon Bonaparte, published by J. 
Brydone, 1839-58, National Library of Scotland, Edinburgh. [8] The History of the Scottish Patriot, Sir Wm. Wallace: Knight of Ellerslie; The History of Columbus, Discoverer of America; Guy Fawkes, or the History of the Gunpowder Plot, published by J. Brydone, 1839-1858, National Library of Scotland, Edinburgh. [9] The History of the Empress Josephine, p. 3, p. 11, p. 22. [10] Ibid., p. 4; John Smythe Memes, Memoirs of the Empress Josephine (New York, NY: Harper & Bros., 1835), p. 19. [11] Stéphanie Félicité (Georgette) du Crest, Mémoires sur l’Impératrice Joséphine, ses Contemporaires, La Cour de Navarre et de la Malmaison, volume 1 (Paris: Ladvocat, 1828); Marie Anne Lenormand, Mémoires historiques et secrets de l'impératrice Joséphine, Marie-Rose Tascher-de-la-Pagerie, première épouse de Napoléon Bonaparte, trans. Jacob M. Howard (Paris: 1820). [12] Roy Bearden-White, ‘A History of Guilty Pleasure: Chapbooks and the Lemoines’, The Papers of the Bibliographical Society of America, 103 (2009), p. 286. [13] Pierre Nora, ‘Between Memory and History: Les Lieux de Mémoire’, Representations, trans. Marc Roudebush, 26 (1989), pp. 18-9. [14] Jan & Aleida Assmann, ‘Collective Memory and Cultural Identity’, New German Critique, trans. John Czaplicka, 65 (1995), pp. 126-7. [15] Ibid., p. 128. [16] Ibid., p. 129. [17] George Steiner, In Bluebeard’s Castle: Some Notes Towards the Re-definition of Culture (London: Faber & Faber, 1971), p. 19. [18] David A. Bell, The First Total War: Napoleon’s Europe and the Birth of Modern Warfare (London: Bloomsbury, 2007), pp. 7-9. [19] Philip Dwyer & Matilda Greig, ‘Memoirs and the Communication of Memory’ in Alan Forrest & Peter Hicks (eds.), The Cambridge History of the Napoleonic Wars (Cambridge: Cambridge University Press, 2022), p. 244. [20] Pascal Dupuy, ‘The Napoleonic Wars in Caricature’ in Forrest & Hicks, The Cambridge History, pp. 378-9. [21] The History of the Empress Josephine, p. 2, p. 24, pp. 3-4, pp. 4-5. [22] Ibid., p. 
8, p. 7, p. 11. [23] G. M. Woodward, The Progress of the Empress Josephine, 1804, colour etching, 24.6 × 34.9 cm, etched C. Williams, Bodleian Libraries, Oxford. [24] James Gillray, Ci-Devant Occupations – or – Madame Talian and the Empress Josephine Dancing Naked Before Barrass in the Winter of 1797. – A Fact!, 1805, colour etching, 31.6 × 45.7 cm, etched James Gillray, Bodleian Libraries, Oxford; (?) Roberts, Needs Must, When the Devil Drives, colour etching, 24.9 × 33.0 cm, Bodleian Libraries, Oxford. [25] Fox, ‘Little Story Books’, p. 229. [26] Bearden-White, ‘Guilty Pleasure’, p. 285. [27] The History of the Empress Josephine, p. 9, p. 13, p. 8. [28] ‘Memoir of the Empress Josephine, First Wife of Napoleon Bonaparte’, Court and Lady’s Magazine, Monthly Critic and Museum, 19 (1841), London. [29] Étienne François, ‘The Revolutionary and Napoleonic Wars as a Shared and Entangled European lieu de mémoire’ in Alan Forrest, Étienne François & Karen Hagemann (eds.), War Memories: The Revolutionary and Napoleonic Wars in Modern European Culture (Basingstoke: Palgrave Macmillan, 2012), p. 400.
- The Influence of Gender Stereotypes on Crime in Early Modern Europe
During the early modern period, gender issues within society caused the criminal justice system and its authorities to target those who challenged traditional codes of behaviour: women, especially, were seen as a threat to the social order. This essay will look at why women were penalised under the criminal justice system in England and Europe. The differences in crime rates and the operation of justice will also be explored, in order to provide an insight into how crime and the authorities' response to it differed locally and regionally in the early modern period. Through a range of historiography, this essay will evaluate the contrast in how men and women were penalised under the justice system by considering how female crime was deemed “rebellious” whilst male crime was “normalised”. The wider social, political and economic context behind crime rates during this period will also be considered, as it is clear that crime has not always been solely a gendered issue. This essay will therefore argue that although crime was gendered to a large extent, the assumption by many historians that crime rates were higher amongst men is incorrect. Crime was just as common among women as it was among men. However, due to gendered stereotypes and traditional values, women were more likely than men to be accused of crimes and penalised for them. The operation of the criminal justice system varied locally and regionally during the early modern period. Crime rates tended to be higher in urban settings than in their rural counterparts. For this reason, geographical, social and political factors had an influence on male and female crime rates across Europe. In England, the courts followed the rule of “Common Law”, which meant that the reputation and past behaviour of suspects were considered credible evidence in deciding verdicts during court trials. A woman’s facial expressions and gestures were also considered important evidence in court. 
As a result, women had very limited agency under early modern law. [1] For example, Mary Janson, who was accused of stealing, was ‘bound by recognisance not explicitly [for] receiving stolen goods but because she was of evil fame and very bad behaviour’. [2] Gendered attitudes towards a woman’s weak “nature” and behaviour therefore meant that court trials treated women as threats to the social order. The rise in urbanisation during the early modern period and the move towards more centralised and systematised states also caused many people to feel anxiety about the order and control of the country. Anxiety around the breakdown of the patriarchal order, as a result, influenced how women were documented in court records and is one of the main reasons why evidence of women’s involvement in crime is limited. It is therefore important for historians, when looking at criminal evidence, to consider not only the limited quantity of the records but also how a woman’s past behaviour and performance of societal roles influenced the outcome of court trials. The lack of awareness of these factors led many historians to view crime solely as “history from below” and not to focus on gender relations, forming the assumption that crime was mainly masculine. [3] Many historians, therefore, focused only on male crime and did not consider women’s relationship with crime and the courts during the early modern period. However, this essay will show that crime was common amongst both men and women and that women were treated very differently under the legal court system than men. One of the most common criminal offences for which women were accused and penalised was “scolding”. As defined by early modern law, a “common scold” ‘was a habitually rude and brawling woman whose conduct was subject to punishment as a public nuisance’. 
[4] Legal definitions, however, ‘were often easily manipulated and in practice could encompass a wide variety of alleged activity’. [5] Although men could be accused, “scolding” was a gendered crime and mainly committed by women. The rise in scolding accusations also took place during the period when witchcraft cases amongst women were at their peak. [6] This, therefore, caused an increase in the methods and punishments used by the authorities to maintain the social order. The most popular punishments for “scolding” were the “ducking stool” and the “scold’s bridle”. Although the scold’s bridle was more commonly used, both involved the accused being publicly shamed and potentially ostracised from society. Figure 1, for example, depicts the experience of Ann Bidleston, shown as B in the image, who was punished for “scolding” in Newcastle. The scold’s bridle ‘was musled over the head and face with a metal tongue forced into her mouth which forced the blood out, she was then walked through the streets of Newcastle’ by Robert Sharp, shown as A in the image. [7] It is clear that although men and women were both accused of “scolding”, harsher punishments were inflicted upon women who rebelled against the social order. Women of a lower social status were also more likely to be convicted of “scolding” than women of a higher rank, as outcasts of society tended to voice their opinions against the authorities within their communities. [8] This idea that women gossiped amongst their neighbours and therefore went against the traditional behavioural expectations of society supports the argument that crime was gendered to a significant extent in the sixteenth and seventeenth centuries. Witchcraft was also a gendered issue, especially regarding how authorities responded to male and female cases. Accusations of witchcraft were commonly associated with female crime and women as a result of neighbourhood gossiping. 
Although men were sometimes accused and tried for witchcraft, it tended to be women who were found guilty and therefore penalised. Societal ideas of behaviour and neighbourliness amongst women were influential in the rise of witchcraft cases during the early modern period. This was because women were expected to conform to the rules of society, and anything that went against the patriarchal order was seen as rebellious behaviour. As a result, ‘male criminality is normalised whilst female criminality is seen in terms of dysfunction’. [9] Outcasts of society, especially widows, were therefore more susceptible to accusations of witchcraft as they went against female behavioural norms. Women were also more likely to face the death penalty once found guilty of a crime, due to the expectation that they uphold their reputation and honour. For example, after the trial of Effam Mackallean, who was found guilty in six cases of witchcraft, the assize ruled that ‘she shall be burnt quick, according to the laws of this realm’. [10] It is also likely that her death took place publicly, as female punishments tended to take place in front of the community in an attempt not only to humiliate the accused but also to reinforce gender norms and behavioural expectations. Thomas Harvey, for example, was ‘by the command of the then Baron Nicholas…. Committed at Exeter whereby he [was] deprived of his liberty’. [11] This is not surprising, as men accused and tried for witchcraft, and crime in general, tended to face more leniency from the courts. However, due to the nature of the justice system, witchcraft trials are not always completely reliable, as the court tended to use leading questions and intimidation techniques to extract a confession. It is, therefore, hard to determine whether certain witchcraft cases were based upon accusations stemming from neighbourhood grievances or whether the accused was guilty. 
Despite this, witchcraft cases provide historians with an insight into how female crime was viewed by the courts as well as by society during this period. Theft and the possession of stolen goods also featured prominently in female crime during the early modern period. However, because statistics on female crime were often discredited and overlooked, many historians formed the opinion that female theft was of less significance and therefore categorised it as a “petty crime”. [12] Despite this, women were charged and prosecuted for property crimes such as stealing goods and household burglaries; Daniel Defoe’s Moll Flanders provides an example. The protagonist, known within her community for her frequent encounters with the court, is ‘indicted for felony and burglary; for feloniously stealing two pieces of brocaded silk, value £46, the goods of Anthony Johnson, and for breaking open the doors’, and as a result is punished with the death penalty. [13] Despite being a piece of literary fiction, Defoe’s novel provides a broadly accurate representation of female crime in London. His work explores women’s relationship with crime and the courts during the early modern period and provides a historical insight into how female thieves worked within their communities. Like Defoe’s characters, it was not uncommon for female thieves to work and learn together in organised groups: ‘the comrade she helped me to dealt in three sorts of craft, viz., shop-lifting, stealing of shop books and pocketbooks and taking off gold watches from the ladies’ side’. [14] Men and women also differed not only in ‘their choice of partners in crime’, with women ‘50% more likely to work with other women and 25% with men’, but also in the retributions they received. [15] John Steers, who was indicted for grand larceny for stealing a range of goods in 1686, was given the verdict of ‘not guilty’ due to ‘giving an account of his Reputation’. 
[16] However, despite women facing harsher punishments for crime, ‘women turned to theft for the same reason men stole in this period – largely as a means of survival’; the difference was that, due to the behavioural codes of society, property and theft-related crimes committed by men were “normalised”. [17] Women turned to the same practices as men to survive the economic, political and social hardships of the early modern world whilst facing the prejudices that came with not conforming to stereotypical female roles. Crime was, therefore, to a large extent moulded around gender issues that persisted throughout the seventeenth and eighteenth centuries. The extent to which crime was gendered in the early modern period also differed regionally. This is partly due to the differing operation of justice and moral values across Europe. Women’s crime rates tended to be higher in towns and cities because women there had more agency and led more public lives. [18] In the countryside, where female agency was more restricted, women tended to be less involved in crime as they lived more private lives. Through his study of crime rates in Surrey and Sussex during the eighteenth century, John M. Beattie found that women ‘also accounted for a much higher proportion of the total crime in the city than the countryside’. [19] As a result, some scholars viewed female crime as much higher than men’s. However, when looking at court records from rural areas, male crime rates were much higher than female ones, as male crime tended to take place outside the private life of the home and was therefore more likely to come to the attention of the authorities. Differing moral values and societal ideals also impacted crime rates not only in England but also in other parts of Western Europe. 
‘The gender gap in early modern Frankfurt [was] more profound than in other urban centres in the Dutch Republic or United Kingdom’; for this reason, crime in the city of Frankfurt was to a large extent influenced by gender issues, and female crime rates were much lower. [20] It is, therefore, clear that the contrast in patriarchal ideals and behavioural stereotypes across early modern Europe influenced not only crime statistics among men and women but also the extent to which crime itself was gendered. In conclusion, crime during the early modern period was gendered to a significant extent. Although the crimes committed by men and women were, to a considerable degree, the same, the ways men and women not only practised crime but were punished for it were very different. Attitudes towards crime were, therefore, also gendered not just within early modern societies but also among historians. As outlined above, many historians overlooked the wider context of crime during this period and formed the assumption that crime was most common amongst men and therefore “normalised”. However, it has become clear that historians’ lack of awareness of the context of early modern crime, and the limited sources on female involvement in crime, have influenced the viewpoint that women were not as involved in crime as men. On the contrary, like men, women were involved in crime as a means to survive in the changing environment of early modern Europe. Women were just as likely to be involved in theft and property crimes as men; however, women were treated with less leniency and, as shown above, were more likely to face the death penalty. It is, therefore, clear that gender issues in this period heavily influenced not only men’s and women’s intentions behind crime but also how crime was managed by the justice system during the early modern period. 
Rebecca Colyer is currently in her 2nd year of a BA in History at the University of East Anglia Notes: [1] Jennifer Kermode and Garthine Walker, Women, Crime and the Courts in Early Modern England (Chapel Hill: University of North Carolina Press, 1995), p. 6. [2] Garthine Walker, Crime, Gender and Social Order in Early Modern England (New York: Cambridge University Press, 2003), pp. 214-215. [3] Walker, Social Order, pp. 1-3. [4] ‘Common scold’, Collins English Dictionary, accessed 20 December 2022, https://www.collinsdictionary.com/dictionary/english/common-scold. [5] Kermode and Walker, Crime and the Courts, p. 18. [6] Anthony Fletcher, Order and Disorder in Early Modern England (Cambridge, 1985), p. 119. [7] Ralph Gardiner, England’s Grievance Discovered (London, 1655), pp. 110-111. [8] Fletcher, Order and Disorder, p. 120. [9] Walker, Social Order, p. 7. [10] ‘Report on the trial of Effam Mackallean’, National Archives, accessed 21 December 2022, https://www.nationalarchives.gov.uk/education/resources/early-modern-witch-trials/witches-accused-of-treason/. [11] ‘Petition for Thomas Harvey’, National Archives, accessed 22 December 2022, https://www.nationalarchives.gov.uk/education/resources/early-modern-witch-trials/male-witch/. [12] Walker, Social Order, p. 159. [13] Daniel Defoe, Moll Flanders (London: Harper Collins Publishers Inc, 2011), p. 245. [14] Defoe, Moll Flanders, p. 172. [15] Kermode and Walker, Crime and the Courts, p. 9, pp. 81-105. [16] Old Bailey Proceedings Online (www.oldbaileyonline.org, version 8.0, 23 December 2022), February 1686, trial of John Steers (t16860224-21). [17] Lynn MacKay, ‘Why they stole: Women in the Old Bailey, 1779-1789’, Journal of Social History 32, no. 3 (Spring 1999), p. 1. [18] Manon van der Heijden, Marion Pluskota, and Sanne Muurling, eds., Women’s Criminality in Europe, 1600-1914 (Cambridge: Cambridge University Press, 2020), p. 34, p. 44. [19] John M. Beattie, ‘The Criminality of Women in Eighteenth-Century England’, Journal of Social History 8, no. 4 (Summer 1975), p. 82. [20] Jeannette Kamp, Crime, Gender and Social Control in Early Modern Frankfurt am Main (Leiden: Brill, 2019), p. 63.
- History is an Unreliable Source of Memory
In the intricate dance between memory and history, each partner influences and reshapes the other. This essay explores the complexities of that relationship through two lenses: the historiography of memory, and a poignant case study, the misremembered death of Catherine ‘Kitty’ Genovese. It traces the evolution of her death into a symbol of moral decline, deeply embedded in both popular imagination and scholarly debate, and aims to demonstrate that memory and history are not just interconnected; they are mutually unreliable. The essay concludes by making the case that history can distort memory just as memory can distort history, inviting a re-evaluation of how we approach understanding the past. It is only fitting that a discussion of memory begins with a statement of facts. We know that on 13 March 1964 at 3:20 am, the weather was freezing in Kew Gardens, Queens, New York. We also know that the 1.8 million inhabitants of the borough of Queens would not see a warm day until 18 April, when the temperature would finally climb above 75°F between the hours of 10 am and 6 pm. We consider these meteorological and demographic statements to be facts by virtue of our trust in the records of Newark Liberty International Airport and the United States government. [1] For more than half a century, many held an equally firm belief in the statement that Catherine ‘Kitty’ Genovese was raped and stabbed by Winston Moseley on 13 March 1964 between 3:20 and 3:52 am while 38 ‘respectable, law-abiding citizens’ observed inactively, leaving her to die alone in the stairwell of her own apartment building. They were led to this belief because Martin Gansberg, a journalist working for the New York Times, wrote about what transpired based on eyewitness and first-responder reports.
[2] Gansberg’s article would do more than reconstruct the events of that early morning; he implanted in our collective memories a picture of uncaring inner-city neighbours, indifferent to the murder and rape of Kitty at their own doorstep. Although he was not a historian, his narrative would turn into a virulent form of mimesis, spreading a morally defunct view of humanity far and wide. [3] ‘Kitty’s nightmare has become a symbol for all of us,’ Harold Takooshian explained in an open forum four decades after the incident, before noting ‘how dramatically my field of psychology has been changed by her experience.’ [4] Although the open forum did not include historians, we can hear in Takooshian’s statement traces of Alessandro Portelli’s argument that monumental memories are the foundation for our identities. [5] Gansberg’s rendition of Winston Moseley wounded more than his victim. With each slash, Moseley tore as deep into Kitty as he did into our perceptions of ourselves and of humanity as fundamentally good. Who are we, if not Kitty’s neighbours, separated only by time and space? In the decades following Kitty’s death, sociologists and psychologists developed numerous behavioural theories, such as the ‘bystander effect’, to explain the passivity of the witnesses. [6] But the taint of Gansberg’s mimesis would not be washed away by explanation, just as the trauma of remembering Auschwitz cannot be healed simply by knowing why it happened. [7] Unexpectedly, in late 2016, The New York Times did something remarkable in an unremarkable manner. More than half a century after publication, it appended a new Editors’ Note to Gansberg’s article: ‘Editors’ Note: Oct. 12, 2016. Later reporting by The Times and others has called into question significant elements of this account. […]’ We now know that much of Gansberg’s retelling of the events of 13 March 1964 was inaccurate.
For example, Rachel Manning, Mark Levine and Alan Collins found that an extensive review of trial records did not support Gansberg’s claims about the number of witnesses and their purported inaction. [8] Several others would follow their lead, with numerous journalists, psychologists and historians rushing to correct our collective recollection of the historical facts. [9] Not only had several neighbours intervened directly, breaking up the assault, but Kitty had died in the arms of someone who cared. ‘I only hope that she knew it was me, that she wasn’t alone,’ Sophia Farrar remembered thinking as she held Kitty in her last moments. [10] After Sophia’s passing, her daughter explained that her family had ‘tried to do what we could to set the record straight,’ but their efforts were ignored. [11] It would not surprise Luisa Passerini that our collective memories of Kitty’s last moments were dictated by a man with a tape recorder instead of the woman who lived through them. Who tells the story, and how, is of critical importance, not least because the narratives we create from memories shape our relationship with history. Conversely, history influences how we remember the past and ourselves, often leading to reciprocal distortions between these twin concepts. [12] That memories can be distorted, wrongly recalled, or false in their entirety has become a starting point for many psychologists and neuroscientists. [13] From their perspective, the discussion in this essay need not continue beyond the statement contained in the title. For a historian, the situation is infinitely more nuanced. On the one hand, only the most radical postmodernists would entirely dismiss the value of seeking Rankean objectivity and an Eltonian version of truth regarding the events of 13 March 1964.
On the other hand, one can vividly imagine Foucault launching into a labyrinthine dissection of the power dynamics woven into Gansberg’s narrative without troubling his audience with the alethiological state of affairs. Similarly, we can almost hear Edward Said laying bare the stereotypes and self-justifications of Gansberg’s account, tearing into it as nothing but a projection of his views of inner-city residents. Given that Gansberg spent most of his life in the tranquil suburb of Passaic, New Jersey, he would have found it easy to concur with Jane Jacobs’ contemporary views on the moral decay and decrepitude that plagued great American cities. [14] As Michel de Certeau said, ‘history is a product of a place’, and Gansberg seems to have written his article deep in enemy territory. [15] In reconstructing the events of that early morning, did Gansberg misremember, or did he rely on inaccurate narratives provided by others? Although Portelli hastens to remind us that ‘there are no false oral sources’, Gansberg was acting as an investigative reporter with an intimate relationship to objectivity. [16] Or perhaps the accidental historian’s biases and perceptions of the immorality of inner-city life influenced his narrative choices, gently guiding him to a story that resonated with his own prejudices? We have reason to believe that the policemen responding to the scene may have provided, or at least reinforced, the now disproven notion of 38 witnesses, which they acknowledged was ‘one for the books’. [17] Had they only known how many, and with what impact. Reflecting on Natalie Zemon Davis’ body of work, we see the craft of historians, both professional and accidental, being shaped by their context and intentions. [18] From this perspective, history is more than a collection of objective truths. It is a complex tapestry of stories, narratives, and myths, where each thread is coloured by the historian’s perspective.
In Gansberg’s ‘return’ of Kitty Genovese, facts are supplanted by peg-legged pontification about the moral decrepitude of the people of Queens, thinly veiled as investigative reporting for the audience to absorb as truth. In his meditations on the tensions between a historian’s aspiration for fidelity and the inherent limitations of memory, Paul Ricoeur reminds us that ‘to memory is tied an ambition, a claim—that of being faithful to the past.’ [19] Whether we do so fairly or not seems immaterial, given that Ricoeur goes on to state that ‘we have nothing better than memory to signify that something has taken place’. [20] In the end, it is the memory of Kitty Genovese that remains with us, not the historical facts of her death. Luisa Passerini has productively sourced history from nothing better than memory for decades, building her craft on the recognition that memory is inherently subjective and malleable. For Passerini, memories of lived experiences are perspectives rather than objective facts; a concept which Gansberg both overextended and neglected. [21] Aleida Assmann traced the historiographical relationship between history and memory as it evolved through three stages: identity, polarisation, and interaction. [22] It is in this last, decidedly postmodernist stage that we find Jan Assmann’s mnemohistory, which emphasises the past as it is remembered. ‘The present is “haunted” by the past,’ Jan Assmann states, implying that our understanding of the past influences much more than just our reconstruction of it. [23] The Gansbergian narrative had decades to weave its tendrils throughout our collective consciousness, creating schemas that define us as much as they delineate us from others. As Siobhan Brownlie has shown, under the right conditions, such schemas, like the British concept of outsider Normans, can persist for millennia even after the conditions that originated them have ceased to exist for generations.
[24] Might the Editors’ Note of 12 October 2016 prove powerless in undoing Gansberg’s origin myth of apathy and moral indifference? If so, what would that say about our collective inability to differentiate truth from myth? For some historians, the last question is of little consequence. Raphael Samuel saw history as having always been a mixture of knowledge, memory, and myth; a prescient definition of how we see Gansberg’s reporting in hindsight. Brownlie grants Gansberg a modicum of posthumous relief by positing that myths do not need to be accurate conceptions of the past; they only need to serve ‘present purposes’. [25] Psychologists have long since identified the causal relationship between autobiographical memory and the formation of a sense of self. [26] Wielding the tools of a literary historian, Nicola King has corroborated the argument that autobiographical memories and myths form the very foundations of our concept of self. [27] Portelli goes even further and posits that this is ‘what memory is for’. [28] King found that autobiographical authors return obsessively to themes such as the way things ‘really were’ and ‘what really happened’, parsing through knowledge, memories, and origin myths to construct a consistent identity. [29] Although Kitty Genovese’s death is not an origin myth, it is a beginning much like the one Connerton saw in the execution of Louis XVI. [30] Where Kitty’s life ended, a new autobiographical perception of ourselves began. History not only creates memories; it creates us. Accordingly, Halbwachs’ statement that ‘no memory is possible outside frameworks used by people living in society to determine and retrieve their recollections’ deserves a corollary: no autobiographical concept of ourselves is possible outside the frameworks people use to determine their relationship with history.
[31] What we remember of Kitty Genovese’s death is a function both of what we have been given to remember and of what society expects us to remember. Jan Assmann sees collective memories as dynamically constituted, mediated, and reshaped by acts of communication within the tight embrace of culture. [32] Here Gansberg’s reporting plays an interesting dual role. It was the first act in a long chain of communicative acts, and it is itself a materially manifest form of cultural memory; an artefact that communicates a collective memory, and culturally mediated meanings, simply by existing. Within Pierre Nora’s framework, Gansberg’s article also exists as a lieu de mémoire from which a sense of meaning and heritage emanates to those who behold it. [33] We can go even further by following Dominick LaCapra in noticing that the article is something other than just a site of memory. It is a site of trauma. [34] Where Jan Assmann posits that memories are formed through dialogue, Judith Pollmann’s work implies that we need only ourselves and our internal monologue as discussants. [35] As we acknowledge the introspective depths of trauma and internal dialogue, it is tempting to follow Susan Sontag in diverging from Halbwachs by stating that ‘all memory is individual’. As Sontag saw it, collective memories are nothing but stipulated ideologies, and the stipulations that arise from the Gansbergian narrative are ones of original sin. [36] According to it, we are all indelibly tainted onlookers, cursed to indifference and apathy; a stipulation that many internalised through Aleida Assmann’s ‘rites of participation’, even when the facts of the case never justified the judgment. [37] Our tour of what Kerwin Lee Klein aptly calls the memory industry, illuminated through the microhistory of Kitty Genovese, has brought us to a critical understanding.
[38] The consequences of the misremembering of her tragic death serve as a compelling argument for recognising history as an unreliable source of memory. How deeply the Gansbergian narrative has shaped our collective perceptions demonstrates how our autobiographical and self-constructive processes can be manipulated by those who control the historical account. What we believe about ourselves is inherently tied to our understanding of the past, urging us to assess critically the reliability of historical sources as foundations for our self-conception. Just as historians have learned to apply critical caveats and considerations when using memory as a source of history, Gansberg’s inaccurate account of the early morning of 13 March 1964 invites us to embark on a similar journey of caution and critical analysis when relying on history to shape our collective memories. The impact of Kitty Genovese’s death on psychology, sociology, and our self-conceptions underscores the necessity of this approach, reminding us that both history and memory are susceptible to distortion. In recognising this, we are better equipped to navigate the intricate interplay between these two realms, fostering a more nuanced and reliable understanding of our past and, consequently, ourselves. T. Alexander Puutio is currently undertaking an MSt in History at the University of Cambridge (Wolfson College) Notes: [1] United States Bureau of the Census, 1960 Census of Population: New York (Washington, D.C.: U.S. Government Printing Office, 1963), p.
3; and NOAA, Daily Summaries Station Details, Newark Liberty International Airport 1893-2024, https://www.ncdc.noaa.gov/cdo-web/datasets/GHCND/stations/GHCND:USW00014734/detail (accessed 25 May 2024). [2] Martin Gansberg, ‘37 Who Saw Murder Didn’t Call the Police; Apathy at Stabbing of Queens Woman Shocks Inspector’, The New York Times, 27 March 1964, https://www.nytimes.com/1964/03/27/archives/37-who-saw-murder-didnt-call-the-police-apathy-at-stabbing-of.html (accessed 25 May 2024). [3] Hayden White, ‘The Question of Narrative in Contemporary Historical Theory’, History and Theory, Vol. 23, No. 1 (1984), p. 3. [4] Harold Takooshian and others, ‘Remembering Catherine “Kitty” Genovese 40 Years Later: A Public Forum’, Journal of Social Distress and the Homeless, No. 14 (2005), p. 68. [5] Alessandro Portelli, ‘On the Uses of Memory: As Monument, As Reflex, As Disturbance’, Economic and Political Weekly, Vol. 49, No. 30 (2014), p. 43. [6] Joseph W. Critelli and Kathy W. Keith, ‘The Bystander Effect and the Passive Confederate: On the Interaction Between Theory and Method’, The Journal of Mind and Behavior, Vol. 24, No. 3/4 (2003), pp. 255–64. [7] Dominick LaCapra, History and Memory after Auschwitz (Ithaca, NY: Cornell University Press, 2018), doi:10.7591/9781501727450; Dominick LaCapra, Representing the Holocaust: History, Theory, Trauma (Ithaca, NY: Cornell University Press, 1994). [8] Rachel Manning, Mark Levine, and Alan Collins, ‘The Kitty Genovese Murder and the Social Psychology of Helping: The Parable of the 38 Witnesses’, The American Psychologist, Vol. 62, No. 6 (2007), pp. 555–562. [9] Marcia M. Gallo, ‘No One Helped’: Kitty Genovese, New York City, and the Myth of Urban Apathy, 1st edn (Ithaca, NY: Cornell University Press, 2015); Marcia Gallo, ‘The Parable of Kitty Genovese, the New York Times, and the Erasure of Lesbianism’, Journal of the History of Sexuality, Vol. 23 (2014), pp.
273–94; Rutger Bregman, Humankind: A Hopeful History, first English-language edition (Boston, MA: Little, Brown and Company, 2020); and Kevin Cook, Kitty Genovese: The Murder, the Bystanders, the Crime That Changed America (New York: W. W. Norton & Co, 2014), among many others. [10] Sam Roberts, ‘Sophia Farrar Dies at 92; Belied Indifference to Kitty Genovese Attack’, The New York Times, 2 September 2020, https://www.nytimes.com/2020/09/02/nyregion/sophia-farrar-dead.html (accessed 25 May 2024). [11] Michael Gannon, ‘Sophia Farrar Dead; Held Dying Genovese’, Queens Chronicle, 11 September 2019, https://www.qchron.com/editions/queenswide/sophia-farrar-dead-held-dying-genovese/article_94c8af6b-52fd-55d4-b60a-b714c731694f.html (accessed 19 May 2024). [12] Geoffrey Cubitt, History and Memory (Manchester: Manchester University Press, 2007), p. 27. [13] Joyce W. Lacy and Craig E. L. Stark, ‘The Neuroscience of Memory: Implications for the Courtroom’, Nature Reviews Neuroscience, Vol. 14, No. 9 (2013), pp. 649–658. [14] Jane Jacobs, The Death and Life of Great American Cities, 1992 edition (London: Vintage Books, Random House, 1961); search for Martin Gansberg’s records, Ancestry.com, https://www.ancestry.com (accessed 19 May 2024). [15] Michel de Certeau, The Writing of History (New York: Columbia University Press, 1992), p. 64. [16] Alessandro Portelli, ‘The Peculiarities of Oral History’, History Workshop, Vol. 12 (1981), p. 100. [17] Kevin Cook, ‘What Really Happened the Night Kitty Genovese Was Murdered?’, NPR, 2014, https://www.npr.org/2014/03/03/284002294/what-really-happened-the-night-kitty-genovese-was-murdered (accessed 31 May 2024). [18] Natalie Zemon Davis, The Return of Martin Guerre (Cambridge, MA: Harvard University Press, 1984). [19] Luisa Passerini, Memory and Utopia: The Primacy of Intersubjectivity (Oxford: Routledge, 2007).
[20] Paul Ricœur, Memory, History, Forgetting (Chicago: The University of Chicago Press, 2004), p. 39. [21] Ibid., p. 25. [22] Aleida Assmann, ‘Transformations between History and Memory’, Social Research, Vol. 75, No. 1 (2008), pp. 49–72. [23] Jan Assmann, Moses the Egyptian: The Memory of Egypt in Western Monotheism (Cambridge, MA: Harvard University Press, 1997), p. 9. [24] Siobhan Brownlie, ‘Does Memory of the Distant Past Matter? Remediating the Norman Conquest’, Memory Studies, Vol. 5, No. 4 (2012), pp. 360–77. [25] Raphael Samuel, Theatres of Memory (London: Verso Books, 1994), pp. 443-444; Brownlie, ‘Does Memory’, p. 375. [26] Stanley B. Klein and Shaun Nichols, ‘Memory and the Sense of Personal Identity’, Mind, Vol. 121, No. 483 (2012), pp. 677–702. [27] Nicola King, Memory, Narrative, Identity: Remembering the Self (Edinburgh: Edinburgh University Press, 2000), pp. 61-92. [28] Portelli, ‘On the Uses of Memory’, p. 47. [29] King, Memory, pp. 33-118. [30] Paul Connerton, How Societies Remember, Themes in the Social Sciences (Cambridge: Cambridge University Press, 1989), pp. 41-71. [31] Maurice Halbwachs, La Mémoire Collective [The Collective Memory], trans. Lewis A. Coser (Paris: Presses Universitaires de France, 1950), p. 42. [32] Jan Assmann, ‘Collective Memory and Cultural Identity’, New German Critique, Vol. 65 (1995), pp. 125–33. [33] Pierre Nora, ‘Between Memory and History: Les Lieux de Mémoire’, Representations, Vol. 26, Special Issue: Memory and Counter-Memory (Spring 1989), pp. 7–24. [34] LaCapra, History and Memory after Auschwitz, p. 9. [35] Jan Assmann, ‘Collective Memory and Cultural Identity’; Judith Pollmann, Scripting the Self (Oxford: Oxford University Press, 2017). [36] Susan Sontag, Regarding the Pain of Others, 1st edn (New York: Farrar, Straus and Giroux, 2003), pp. 85-86. [37] Aleida Assmann, ‘Transformations between History and Memory’, Social Research, Vol. 75, No. 1 (2008), p. 52.
[38] Kerwin Lee Klein, From History to Theory (Berkeley: University of California Press, 2011).
- The Russian Civil War and the Evolution of Soviet Terror
Anyone who claims that an historical event was formative must necessarily assume that things would have turned out differently if that event had not occurred. Counterfactual suppositions like these are, by definition, hard to prove. Yet, seemingly undeterred, historians have taken on the formidable challenge of determining whether the Russian Civil War steered the Bolsheviks onto a course that they otherwise would not have pursued. The specific matter with which this paper is concerned is the extent to which the experience of internal conflict was responsible for bringing about a shift towards policies and practices of terror. Few would deny that a defining characteristic of early Soviet rule was its sustained dependence on an arsenal of state-sponsored violence that culminated in the notorious purges of the 1930s. But there is considerable disagreement as to how this came about. Some say it was accidental; others say it was encoded in communist ideology. Either way, the key to understanding the origins of this repressive system lies in Hannah Arendt’s definition of terror as a form of government that ‘comes into being when violence, having destroyed all power, does not abdicate but, on the contrary, remains in full control.’ [1] That violence should become a permanent feature of life under the new regime had certainly not been the intention of the party’s leading ideologists. Nonetheless, contrary to their calculations, the emergence of a totalitarian regime predicated on terror was implicit in the very methods that propelled the Bolsheviks to a position of power in the first place. In this paper, I argue that the evolution of terror from an instrument of revolution to the cornerstone of Soviet power did not begin during the Russian Civil War but amidst the earlier round of revolutionary chaos that reached its peak between 1905 and 1907.
The significance of the civil war lies primarily in the fact that it revived and accelerated a trend that had arisen over a decade before, when radical parties of all stripes resorted to indiscriminate acts of violence as a means of challenging and overturning the old order. As a starting point, I illustrate the need to examine Soviet terror as the product of an extended process in which ideas and circumstances interacted with one another in equal measure. I then trace its gradual development from the militant, conspiratorial spirit of the prerevolutionary underground through to the opening stages of internal conflict, when a confluence of factors drove the Bolsheviks to reanimate and intensify their earlier patterns of behaviour. Lastly, I suggest that the ferocious excesses of Stalinism were the likely, if not inevitable, outcome of this prolonged evolutionary process. The dominant modes of thought surrounding the Russian Civil War in traditional western historiography are inadequate, because they lead historians either to inflate or underplay its formative influence. Past interpretations were heavily informed by the work of the totalitarian school, which posited that most, if not all, developments in the history of the Soviet state can be explained in terms of the ideology that gave birth to it. [2] In this reading, Joseph Stalin was not a megalomaniac outlier responsible for perverting the intended course of the revolution, but a faithful executioner of the will of his predecessors. [3] If this is indeed the case, it follows that the civil war did little to alter the regime’s initial trajectory, and that the atrocities of the Stalin era were a natural and inevitable consequence of the realisation of Bolshevism. [4] Beginning in the 1970s, a new group of historians, known as revisionists, launched a concerted challenge to the basic assumptions of the totalitarian school. 
Previous studies, they charged, had vastly exaggerated the importance of ideology whilst paying insufficient attention to the specific social conditions that shaped Bolshevik policies and practices. [5] In their efforts to explain the true origins of Stalinism, revisionists pointed to the unique circumstances of the civil war years, declaring that communist terror was an accidental phenomenon that stemmed from the exigencies of defending the revolution against forces that tried to undermine it. [6] In a word, the emphasis shifted from ideas and intent to contingency and context. Revisionists had a point. It was commonplace for previous historians, guided by a fervent hostility to communism, to select evidence that confirmed their preordained conclusions and thus wilfully misinterpret the intentions of the Russian revolutionary movement. [7] Furthermore, prejudices aside, ideology should not be looked upon as a stable basis for collective action, since people’s behaviour is liable to change in accordance with new events and circumstances. [8] Ideas in themselves do not explain, for example, how the Bolsheviks had come to be so prepared for the use of violent, extra-legal measures by the time the civil war broke out. Lenin himself admitted that his party did not succeed on the back of ideology alone but capitalised on its extensive practical experience as part of the prerevolutionary underground, whose belligerent tactics readied them for the trials and tribulations that lay ahead. [9] Moreover, considering that much of the violence committed by the Reds over the course of the civil war happened without Lenin’s awareness, let alone consent, historians should be cautious about reading too much into what was said or not said by the party’s chief ideologists. [10] Nevertheless, revisionists went too far in the other direction. 
Some detractors have accused them of consciously disregarding the less romantic elements of the party’s agenda out of sympathy with the revolutionary cause. [11] Whatever their reasons, it was evidently not the case that terror was forced upon the Bolsheviks against their better judgement at a moment of weakness. Thirty years on from the fall of communism, with the benefit of hindsight and the availability of new material, it should now be possible for historians to evaluate the significance of ideology without bias in either direction. [12] Peter Holquist and James Ryan, among others, recommend that we view Soviet terror as part of a gradual evolutionary process, stretching from 1905 to 1922, in which ideas and circumstances collided to produce a climate of mass violence. [13] Historians have already made great strides in this vein: it is now increasingly common to situate the violence of the civil war in the context of broader sequences of events, like the First World War and the creation of modern state institutions. [14] This paper diverges from these other studies insofar as it places greater emphasis on the formative impact of the prerevolutionary underground. A century of ideological ferment had created fertile ground for the spread of political violence across much of Europe. [15] Indeed, during the latter half of the nineteenth century, national liberation movements, as well as radical groups seeking far-reaching political and socioeconomic change, began carrying out terrorist attacks in several European countries. [16] In so doing, they built on a long tradition of revolutionary violence that originated in France, where the armées révolutionnaires – the lawless enforcers of the new Jacobin regime – had rampaged through the countryside leaving fear and chaos in their wake. [17] Terror was calculated to serve as a symbolic weapon that would both subdue sworn opponents and intimidate the ambivalent into going along with the aims of the revolution.
[18] But whereas elsewhere progressive parties were split between support for the extreme tactics of the French tradition and the more peaceable vision of liberal humanitarianism, members of the Russian intelligentsia were unanimous in their endorsement of terror as a route to political transformation. [19] Questions of how to establish legal order were submerged in a strain of doctrinaire extremism that stressed the urgency of getting rid of the ruling elite. [20] The guiding principles of the revolutionary movement in Russia owed a great deal to the theories of radicals like Nechaev and Tkachev, who promoted terrorism as a means of accelerating the countdown to mass social conflagration and thereby compensating for the absence of suitable socioeconomic conditions. [21] The Bolsheviks, for their part, believed from the earliest years of their existence that terrorism was practically inseparable from revolution. [22] Terror undoubtedly had deep ideological roots in the Russian revolutionary movement. But the wave of violence that swept across the country in the late imperial period was at least as much due to specific circumstances. As a general rule, terrorism can be understood as an ‘expression of political impatience’. [23] The case of late nineteenth-century Russia was no exception. Legitimate feelings of frustration and helplessness among the intelligentsia had grown out of the tsars’ intransigence when confronted with the demands of modernity, as well as their own inability to bring about positive change through the performance of what they called ‘small deeds’, such as educating the peasantry and intervening to mitigate the effects of famine. [24] The lack of legal and institutional resources that might otherwise have given the country’s incipient political parties a stake in the existing system further compounded this sense of despair.
[25] Naivety and inexperience played a part in the turn towards political extremism, but the real impetus was the belief that there were simply no other channels through which to bring about substantive change. Initial forays into terrorist activity were relatively restrained. People’s Will, for instance, made a point of targeting specific individuals with known connections to the imperial regime, and remained committed to the principle of ‘not one drop of superfluous blood’. [26] Over time, however, the government’s repressive and heavy-handed response to the spate of political assassinations tended only to drive their perpetrators to ever greater extremes. Amidst the spiralling chaos that accompanied a gradual breakdown in law and order, terrorism became increasingly indiscriminate as radicals lashed out against ‘depersonalised symbols of a hated reality’. [27] A turning point in the evolution of terror arrived in 1905, when mass upheaval, together with unrelenting acts of revolutionary violence, threatened the foundations of the tsarist regime. The level of anarchy in the main epicentres of the Russian Empire reached new heights as bombings, shootings, abductions, and armed robberies became a daily occurrence. [28] Between 1905 and 1907, over 9,000 people fell victim to terrorist attacks, out of whom approximately 4,500 were state officials. [29] Historians have tended to concentrate on the outsized role of the Socialist Revolutionary Party during this period. Indeed, received wisdom would have it that the Bolsheviks played only a marginal part in the proliferation of armed violence. It is certainly true that, in theory, Lenin condemned individual terrorist acts on the basis that they interfered with the essential role of organised, grassroots movements. 
[30] However, in the crisis of 1905 to 1907, most radical parties recognised that such disagreements could not stand in the way of the collective crusade against the tsarist authorities, and consequently diverged from their ideological principles. [31] From February 1905, negotiations were underway for a potential amalgamation of SD and SR combat groups for the purpose of combined terrorist operations. The result was a convergence in tactics across former political dividing lines: just as individual Bolshevik fighters would commonly carry out assassinations of police and military officials, so too did members of the Socialist Revolutionary Party swing towards the more coordinated, guerrilla-style methods of their SD counterparts. [32] Terrorism ultimately became a rite of passage for radical factions of all colours and was widely thought of as the only realistic and effective means of waging war on the old regime. [33] The increasingly belligerent and conspiratorial stance of the Bolshevik party can be observed in the decision of its Moscow committee to launch a premature uprising in December 1905 in the mistaken belief that it would garner the support of the working classes. [34] Prior to the rebellion, the committee’s combat branch issued a manual with detailed instructions on how to engage in guerrilla street fighting. [35] In the end, it was this reckless militancy that lay behind the split in the Social Democratic Party, with the Mensheviks favouring the gradual development of trade unions over the organisation of armed guerrilla groups. [36] The formative nature of this early period in the history of the Bolshevik party should not be underestimated. In conditions of modernity, where we are largely sheltered from the effects of mass violence, it can be difficult to grasp the sociopsychological frameworks that empowered the Bolsheviks and their contemporaries to kill with impunity.
[37] To appreciate the profound impact of prerevolutionary terrorism on their later patterns of behaviour, it has first to be accepted that political violence, as well as having a strong appeal, leaves an indelible mark on the minds of those who employ it. Since it does not solicit our acquired capacity for reasoning so much as our innate and most elemental emotions, violence, especially when harnessed for political ends, can be intoxicating. [38] This is particularly true for young, socially marginalised individuals, like the raznochintsy of late imperial Russia, who feel that their lives lack agency, meaning, or excitement. [39] Violence is contagious, too. As Smith remarked, ‘once violence is at play, people have no choice but to use it or succumb to it.’ [40] That violent times produce generations of violent men is a truism; but in the case of the Bolsheviks, historians have come to differing conclusions as to which events were the most formative in shaping their apparent predilection for terror. The standard view is that the enormous scale of death and suffering unleashed across Europe after 1914 conditioned the Bolsheviks and their supporters to lose all sense of perspective and partake in unbridled acts of cruelty. Contemporary figures like Gorky and Berdiaev both highlighted the brutalising effects of total war on the Russian people. [41] However, this interpretation only holds if one accepts the dubious proposition that violence was driven from below. [42] It should have become evident from the preceding paragraphs that conditions prior to the revolution were not conducive to the development of an organic political system based on the power of the proletariat. Though popular vengeance against the country’s former elites was undoubtedly of great service to their cause, the conspiratorial tradition that pervaded the Bolshevik party’s structures precluded the possibility that the tools of state violence might fall into the hands of the people.
As Peter Kenez has pointed out, the Bolsheviks had learned from their time in the prerevolutionary underground that, in times of chaos, ‘small groups of dedicated people can accomplish remarkable tasks’. [43] Besides the former tsarist officers who assumed leading roles in the administration of the Red Terror, most prominent Bolsheviks had had little to do with the First World War. [44] Indeed, a great many had been in prison or foreign exile for the duration of the conflict. Furthermore, much the same brutality was observable across the whole continent and yet, with the exception of Germany, it did not produce state terror on nearly the same scale as Russia. [45] There is also an argument that the civil war itself triggered these violent proclivities. After all, it was between the years 1918 and 1922 that the majority of those who commanded the Stalinist purges reached maturity. [46] However, not only did the small cadre of prerevolutionary members retain a disproportionate influence over policymaking, but it seems highly improbable that the Bolsheviks could have prevailed in the tough conditions of the civil war had they entered the stage without prior combat experience. [47] Therefore, the main responsibility for this culture of violence must be attributed to an earlier generation who ‘drew their ideas of what constituted acceptable civic behaviour from their experience under the Romanovs’. [48] Contrary to Fitzpatrick’s assertion that the ‘old Bolshevik leaders had not led violent lives’, a considerable proportion of officials in the party’s internal organs had, in fact, gained first-hand experience of violence through their participation in the prerevolutionary underground. [49] Dzerzhinsky and Latsis, who were instrumental in the orchestration of the Red Terror, had previously achieved notoriety as professional terrorists, while Stalin himself is said to have been strongly influenced by his years as an insurgent in the Caucasus. 
[50] The years 1905 to 1907 had several important effects. The strand of martyrdom and self-sacrifice that permeated the Bolshevik consciousness can almost certainly be traced back to the struggle against tsardom, whose many injustices rendered sacred all acts of rebellion in the eyes of its adversaries. At the same time, the radicals’ total absorption in preparing and executing terrorist attacks caused them to lose sight of such moral justifications and treat violence as an end in itself. [51] Perhaps the most important contribution of the prerevolutionary underground was to give the Bolsheviks the dangerous and beguiling impression that terror pays off. [52] The sacralisation of violence was further facilitated by the absence of a single force in Russian society that could have stood in the way of it. Not only did the bulk of the Russian population regard terrorists as freedom fighters, but even Russian liberals, who might otherwise have served as a bulwark against extremism, supported armed insurrection on the grounds that it undermined the tsarist authorities. [53] The heated debate between Leon Trotsky and Karl Kautsky over the legitimacy of revolutionary terror sheds light on the significance of this fact to the evolution of Bolshevism. Kautsky was a prominent representative of a broader movement of European socialists who opposed Bolshevik tactics. [54] In his 1919 pamphlet entitled Kommunizm i Terrorizm, he declared that there could be no justification for the use of terror, and roundly condemned the Bolsheviks for deviating from the true principles of Marxism. [55] Trotsky, who was at this point in charge of militarisation as Commissar of War, responded vituperatively. [56] He stated that ‘violent revolution has become a necessity precisely because the imminent requirements of history are helpless to find a road through the apparatus of parliamentary democracy’.
[57] To a certain extent, the Bolsheviks were guided in this way of thinking by the alluring examples of the French Revolution and the Paris Commune of 1871. However, putting aside ideological factors, we must ask ourselves what specific conditions might have led the Bolsheviks to form such strong convictions regarding the need for terror. Kautsky provides a compelling answer when he writes, in the same tract, that the descent of the Soviet state into a regime of terror resulted from the fact that ‘under the absolutist regime all the elements who were striving upwards were denied all chance of insight, and still more all chance of participation in the administration of the State and the community.’ [58] To put it differently, in the absence of institutions like the rule of law and a vibrant civil society, the Bolsheviks could not have been expected to lead the revolution along a democratic, peaceful course. Ultimately, the first round of revolutionary chaos laid the groundwork for the elevation of terror to a system of government. Nevertheless, some kind of seismic force was still necessary to complete this process. The experience of civil war was crucial to the development of Soviet terror. In the immediate aftermath of the October revolution, state-sponsored violence was not yet a reality of life under the new regime. [59] Contemporary observers, like Bruce Lockhart, commented on the relative lenience with which the Bolsheviks treated their opponents, and there were even moves to abolish the death penalty. [60] Yet, by the summer of 1918, finding that their power hung in the balance, the Bolsheviks changed course dramatically. The strong and resilient counterrevolution presented a genuine danger to the stability of Bolshevik rule, and, at times, even seemed poised to supplant it.
[61] The immediate trigger for the decision to launch the Red Terror was a string of successive uprisings, rebellions, and assassination attempts, including a near-fatal one on Lenin outside a factory in Moscow. [62] It would be easy to conclude from this that the Bolsheviks’ turn towards terror was, as revisionists have argued, a pragmatic response to the existential challenges that befell them in the conflict’s early stages. However, while inauspicious circumstances may have necessitated the application of extreme measures, they do not account for the regime’s readiness to do so. The only reason the Bolsheviks were able to weather the storm and beat back their opponents was that they had already developed a violent approach to solving problems, and were, by conditioning, the side that was most adept at employing force. [63] Facing threats on all fronts, the Bolsheviks simply revived the coercive practices that they knew best. The Red Terror was, in essence, a magnified version of the terrorism that was carried out during the late imperial period. Its purpose was to frighten the civilian population into submission, destroy any and all opposition, and terrorise groups deemed undesirable by virtue of their social class. [64] The secret police even went so far as to issue daily lists of executed individuals. [65] In a similar vein, prerevolutionary radicals had set out to bring chaos to the streets as a sure way of asserting their pretensions to power. Methods like taking and executing hostages, razing whole villages, and ordering public executions were in much the same spirit as the tactics of earlier revolutionaries. Furthermore, contrary to claims that the terror of the civil war years was purely ‘instrumental’ in the sense that it targeted ‘known enemies’, there was, in fact, a strong tendency towards indiscriminate killing of the sort that took place between 1905 and 1907.
[66] It is no coincidence that the earliest casualties of the campaign – 512 hostages in Petrograd and scores of others in Moscow – were all former members of the ruling elite, many of them tsarist ministers. [67] Whether or not exterminating them would hasten the success of the revolution was beside the point: from the Bolshevik perspective, they were faceless representatives of the same ‘hated reality’ they had struck out against in their formative years. [68] This ruthless mindset was also reflected in the workings of the Chekist organs, which were notorious for issuing extreme sentences in no way proportionate to the defendant’s alleged crime. [69] Moreover, the same propensity to sacralise violence as a moral good, so long as it was done in the name of the people, came to the fore again during the civil war, when Cheka officials took great pains to differentiate their own extrajudicial killings from those of the Whites. [70] Trotsky was well aware of these parallels, noting that ‘Red terror cannot, in principle, be distinguished from armed insurrection, the direct continuation of which it represents.’ [71] It is certainly true that a number of older Bolsheviks, including Mandelstam and Gorky, were appalled by the brutality of their comrades, and even worked with members of opposition parties to try and curtail the powers of the Cheka. [72] However, the conditions of the civil war were bound to guarantee the triumph of the majority that stood behind unbounded violence – not least because it appeared to be working. In the first days of the Red Terror, the army of the rival SRs retreated from Nizhniy Novgorod, and the Bolsheviks were at last able to capture Kazan from units of the Czech legion. [73] This comes back to the basic principle that political violence, as abhorrent as it may seem, tends to triumph over peaceful, institutional alternatives. 
[74] Over the course of the conflict, terror evolved into a system of rule underpinned by new tools and institutions. Revolutionary tribunals, though there is evidence of their usage as far back as the Moscow Uprising of 1905, were one such innovation that emerged out of the carnage of war. [75] Their role was eventually handed over to local Chekas, which were responsible for meting out summary justice in the form of mass arrests, hostage-taking, and executions. [76] Institutions like these provided terror with a new basis in state machinery. Crucial to the development of mass violence were modern state practices that had evolved out of a prolonged process by which European governments invented new ways of mobilising, controlling, and, at times, cleansing their populations. [77] For instance, the large-scale deportations that took place under the Bolsheviks were a continuation of the tsarist government’s own experimentation with such methods during the First World War, when it expelled hundreds of thousands of enemy aliens. [78] In order to isolate opponents of the new regime, the Cheka also set up a network of concentration camps that were closely modelled on those established by European empires in their overseas colonies. [79] All these tools and institutions were repurposed for a new utopian mission of societal transformation and applied with an even greater level of ferocity, causing considerable suffering in the process. [80] It is said of Stalin that, though personally desensitised to violence long before 1918, he picked up new practices during the civil war that he carried with him into his own reign of terror. [81] Complete dependence on methods and instruments of mass terror survived the civil war to become a permanent feature of the Soviet political system. By the end of the conflict the people in charge could barely conceive of a world where they did not have to impose their rule without recourse to the threat or use of violence. 
[82] Rosa Luxemburg, a staunch critic of Bolshevik policies, had warned at the time that even in ‘devilishly hard conditions…the danger begins when [revolutionaries] make a virtue of necessity and want to freeze into a complete theoretical system all the tactics forced upon them by these fatal circumstances.’ [83] To be sure, the conditions in which the Bolsheviks rose to power were hardly conducive to the creation, by non-violent means, of a democratic government committed to the primacy of justice over terror. Yet the regime’s founding fathers had not intended for terror to become a ‘complete theoretical system’. Rather, they had promoted it solely as a means to an end: that of wresting control and dominating their opponents. In fact, terror was supposed to eradicate the very sources of violence. [84] It was thought that, by transforming the imperialist war into a class war, and thereby deposing the war-mongering bourgeoisie, the path would be laid for eternal peace and harmony under the dictatorship of the proletariat. [85] From the works of Trotsky and Lenin it is clear that they both endorsed terror only under certain conditions, with the latter writing that it was a ‘legitimate weapon of the revolution at definite stages of its development’. [86] An analogous process unfolded in revolutionary France, where the regime of terror – contrary to the intentions of those who initiated it – led to violence being ‘exalted to a political system’ and ‘at times becoming an end in itself’. [87] Only a handful of Bolshevik thinkers, notably Kamenev and Bukharin, had had the sagacity to predict that, unless kept firmly in check, terror would at some point begin to consume the party itself and lead to unimaginable tragedy.
[88] As early as 1903, in his speech at the Second Congress of the RSDLP, an obscure figure by the name of Ivan Yegorov declared that ‘if, somewhere in the programme, a door is opened for terrorism, then it will inevitably begin to take priority over everything else in the programme.’ [89] His hypothesis turned out to be completely correct. At least in theory, revolutionaries are prone to seeing terror as necessary at first and then disavowing it once it is no longer exigent. [90] In France, the bloodthirsty excesses of Jacobinism eventually caused people to reject and stigmatise terrorism as a criminal act, where previously it had been regarded as a noble virtue. [91] In Russia, on the other hand, the ruling elite continued to sacralise violence until well after the end of the civil war. [92] Police officials who had formerly been at the centre of the Red Terror went on to become leading figures in government institutions, adapting their coercive methods for use in civilian administration. [93] While the NEP is often thought of as a relatively sober interlude in early Soviet history, political violence continued almost unabated in the country’s peripheries, where military operations were conducted as a matter of routine in the face of popular rebellions. [94] What was extraordinary about the Stalinist terror was that it happened at a time when war was out of the picture and resistance to the regime was minimal. [95] Yet the intensity of state violence, far from waning, only gained momentum. The purges were a textbook example of how a state founded on terror will, in the end, begin to ‘devour its own children’. [96] Indeed, having mercilessly vanquished all external opposition, government institutions began to turn on themselves and target alleged enemies within. [97] There is, of course, a distinction between terror and terrorism: the former pertains to states, whereas the latter is the domain of non-state actors.
But history provides numerous examples of cases where terrorists, upon seizing control, proceed to construct states on the foundations of their murderous toolkit. Hamas is one example; the Taliban is another. Ultimately, the Bolsheviks drew on their extensive history of combat readiness to build a regime of terror. Therefore, to conclude, the Russian Civil War was not a formative experience in itself but completed a process by which terror evolved from a small-scale undertaking to a complete system of government buttressed by modern tools and institutions. Historical processes have beginnings and ends, but their trajectories are not predetermined. Terrorism, as it was practised by the Bolsheviks during their early years, did not predetermine the excesses of Stalinism. Rather, it provided a basis on which the party was able to conduct a fierce terroristic campaign under the pressures of total war. In other words, the experience of internal conflict was a necessary intermediate step that entrenched and intensified a much earlier trend towards using violence and coercion for political ends. The danger started when, in the turmoil of the early twentieth century, revolutionaries adopted extreme measures to destabilise a regime that they had come to despise for its repressiveness and incapacity for change. In so doing, they remained, as Naimark put it, ‘tragically blind to the dangers of justifying their objective by resorting to any means necessary.’ [98] Indeed, it is somewhat ironic that, in their efforts to eradicate the tyranny of the tsars, radical terrorists inadvertently paved the way for the tyranny of Stalin. Crucially, the conditions of the civil war allowed the Bolsheviks to experiment with new tools and practices, many of which were innovations that had appeared across Europe in response to the demands of modernisation. Unable to contemplate another way of conducting politics, they became prisoners of a fixed mindset that elevated terror to a position of omnipotence.
Ally Allison is currently pursuing an MPhil in Russian and East European Studies at the University of Oxford (St Antony's College). Notes: [1] Hannah Arendt, On Violence (New York: Harvest Books, 1970), p. 55. [2] Evan Mawdsley, The Russian Civil War (Edinburgh: Birlinn Limited, 2000), p. 288. [3] Moshe Lewin, Lenin’s Last Struggle, 1st ed. (Ann Arbor: University of Michigan Press, 2005), p. xvii. [4] Stephen F. Cohen, “Bolshevism and Stalinism”, in Robert C. Tucker (ed.), Stalinism: Essays in Historical Interpretation (New York: W.W. Norton & Co., 1998), p. 7. [5] Sheila Fitzpatrick, “The Civil War as a Formative Experience”, in Bolshevik Culture: Experiment and Order in the Russian Revolution, ed. by Abbott Gleason, Peter Kenez and Richard Stites (Bloomington: Indiana University Press, 1985), p. 57. [6] D. Raleigh, “The Russian Civil War, 1917–1922”, in Cambridge History of Russia, ed. by R. Suny (Cambridge: Cambridge University Press, 2006), p. 140. [7] Peter Holquist, “The Russian Revolution as Continuum and Context and Yes - as Revolution: Reflections on Recent Anglophone Scholarship of the Russian Revolution”, Cahiers du Monde Russe 58, no. 1-2 (2017), p. 80. [8] Michael Addison, Violent Politics: Strategies of Internal Conflict (Basingstoke: Palgrave, 2002), p. 16. [9] V.I. Lenin, “Left-Wing” Communism: An Infantile Disorder, trans. by Julius Katzer (Moscow: Progress Publishers, 1964), p. 26. [10] James Ryan, Lenin's Terror: The Ideological Origins of Early Soviet State Violence (London: Taylor & Francis, 2012), p. 3. [11] Vladimir Brovkin, Behind the Front Lines of the Civil War: Political Parties and Social Movements in Russia, 1918-1922 (Princeton: Princeton University Press, 1994). [12] S.A. Smith, “Writing the History of the Russian Revolution after the Fall of Communism”, Europe-Asia Studies 46, no. 4 (1994), p. 567.
[13] James Ryan, “The Sacralization of Violence: Bolshevik Justifications for Violence and Terror during the Civil War”, Slavic Review 74, no. 4 (2015), p. 809; Peter Holquist, “Violent Russia, Deadly Marxism? Russia in the Epoch of Violence, 1905-21”, Kritika 4, no. 3 (2003), p. 628. [14] See, for example, Peter Holquist, Making War, Forging Revolution: Russia's Continuum of Crisis, 1914-1921 (Cambridge, Mass.: Harvard University Press, 2002). [15] D. Raleigh, Experiencing Russia’s Civil War: Politics, Society, and Revolutionary Culture in Saratov, 1917-22 (Princeton: Princeton University Press, 2002), p. 410. [16] Susan K. Morrissey, “Terrorism and Ressentiment in Revolutionary Russia”, Past & Present 246, no. 1 (Feb 2020), p. 191. [17] Richard Cobb, The People’s Armies (New Haven: Yale University Press, 1987), p. 4. [18] Ibid. [19] E.H. Carr, The Bolshevik Revolution, 1917-1923, Volume 1 (London: Macmillan Press, 1950), p. 155. [20] Boris Elkin, “The Russian Intelligentsia on the Eve of the Revolution”, in The Russian Intelligentsia, ed. by Richard Pipes (New York: Columbia University Press, 1961), p. 31. [21] Claudia Verhoeven, “Time of Terror, Terror of Time: On the Impatience of Russian Revolutionary Terrorism (Early 1860s – Early 1880s)”, Jahrbücher für Geschichte Osteuropas 58, no. 2 (2010), p. 263. [22] Karl Kautsky, Terrorism and Communism: A Contribution to the Natural History of Revolution, trans. by W.H. Kerridge (Berlin: The National Labour Press Ltd., 1919), p. 4. [23] Verhoeven, “Time of Terror”, p. 254. [24] Robert Service, The Bolshevik Party in Revolution: A Study in Organisational Change, 1917-1923 (London: Macmillan, 1979), p. 4. [25] Holquist, “Violent Russia”, p. 632. [26] Bruce Hoffman, Inside Terrorism (Victor Gollancz, 1998), p. 6. [27] Anna Geifman, Thou Shalt Kill: Revolutionary Terrorism in Russia, 1894-1917 (Princeton: Princeton University Press, 1993), p. 250.
[28] Anna Geifman, Death Orders: The Vanguard of Modern Terrorism (Santa Barbara: Praeger Security International, 2010), p. 14. [29] Geifman, Thou Shalt Kill, p. 21. [30] E. Stepanova, “Terrorism in the Russian Empire: The Late Nineteenth and Early Twentieth Centuries”, in R. English (ed.), The Cambridge History of Terrorism (Cambridge: Cambridge University Press, 2021), p. 311. [31] Geifman, Thou Shalt Kill, p. 188. [32] Ryan, Lenin’s Terror, p. 39. [33] Stepanova, “Terrorism in the Russian Empire”, p. 311. [34] Abraham Ascher, The Revolution of 1905: Russia in Disarray, vol. 1 (Stanford: Stanford University Press, 1981), p. 308. [35] Ibid., p. 310. [36] Elkin, “The Russian Intelligentsia”, p. 42. [37] S.A. Smith, “The Historiography of the Russian Revolution 100 Years On”, Kritika: Explorations in Russian and Eurasian History 16, no. 4 (2015), p. 749. [38] Addison, Violent Politics, p. 58. [39] Raleigh, Experiencing Russia’s Civil War, pp. 127-8. [40] Smith, “The Historiography”, p. 739. [41] E.G. Gimpel’son, Formirovanie sovetskoi politicheskoi sistemy, 1917-1923 gg. (Moskva: Nauka, 1995), p. 131. [42] See, for example, Orlando Figes, A People's Tragedy (London: Pimlico, 1996); and Tsuyoshi Hasegawa, “Crime, Police, and Mob Justice in Petrograd During the Russian Revolutions of 1917”, in Revolutionary Russia: New Approaches to the Russian Revolution of 1917, ed. by Rex A. Wade (London: Taylor & Francis Group, 2004). [43] Peter Kenez, Red Advance, White Defeat: Civil War in South Russia, 1919-1920 (Washington D.C.: New Academia Publishing, 2004), p. 11. [44] David L. Hoffmann, Cultivating the Masses: Modern State Practices and Soviet Socialism, 1914-1939 (Ithaca: Cornell University Press, 2016), p. 258. [45] Ibid., p. 242. [46] Moshe Lewin, The Making of the Soviet System: Essays in the Social History of Interwar Russia (London: Methuen, 1985), p. 23. [47] Kenez, Red Advance, p. 14. [48] Raleigh, Experiencing Russia’s Civil War, p. 108.
[49] Fitzpatrick, “The Civil War”, p. 66; Jörg Baberowski, Scorched Earth (New Haven: Yale University Press, 2017), p. 16. [50] Anna Geifman, “The Origins of Soviet State Terrorism, 1917-21”, in Times of Trouble: Violence in Russian Literature and Culture, ed. by Marcus C. Levitt and Tatyana Novikov (Madison: The University of Wisconsin Press, 2007), p. 158. [51] Geifman, Thou Shalt Kill, p. 250. [52] Adam Ulam, The Bolsheviks: The Intellectual, Personal and Political History of the Triumph of Communism in Russia (New York: Collier Books, 1965), p. 422. [53] Holquist, “Violent Russia”, p. 632. [54] Ryan, Lenin’s Terror, p. 3. [55] Kautsky, Terrorism and Communism, p. 105. [56] The censure of so prominent a figure as Kautsky was of acute concern to the Bolsheviks, as they perceived that revolution in Russia could not succeed unless it managed to inspire revolution elsewhere in Europe. [57] Leon Trotsky, Terrorism and Communism: A Reply to Karl Kautsky (Ann Arbor: The University of Michigan Press, 1961), p. 36. [58] Kautsky, Terrorism and Communism, p. 95. [59] Gimpel’son, Formirovanie, p. 126. [60] Carr, The Bolshevik Revolution, p. 153. [61] Arno J. Mayer, The Furies: Violence and Terror in the French and Russian Revolutions (Princeton: Princeton University Press, 2013), p. 4. [62] Scott B. Smith, Captives of Revolution: The Socialist Revolutionaries and the Bolshevik Dictatorship, 1918-1923 (Pittsburgh: University of Pittsburgh Press, 2011), p. 81. [63] Baberowski, Scorched Earth, p. 38. [64] Hoffmann, Cultivating the Masses, p. 260. [65] Ryan, “The Sacralization of Violence”, p. 809. [66] Hannah Arendt refers to instrumental violence in connection with the 1917 revolution in On Violence, p. 49; Mayer, The Furies, p. 50. [67] Sobranie uzakonenii i rasporiazhenii pravitel'stva za 1917-1918 gg., Upravlenie delami Sovnarkoma SSSR, M., 1942, no. 65, st. 710, p. 883. [68] Richard Pipes, Russia under the Bolshevik Regime (New York: A.A. Knopf, 1993), p. 500.
[69] Gimpel’son, Formirovanie, p. 130. [70] Mark D. Steinberg, The Russian Revolution 1905-1921 (Oxford: Oxford University Press, 2016), p. 322. [71] Trotsky, Terrorism and Communism, p. 58. [72] Brovkin, Behind the Front Lines, pp. 46-47. [73] Ibid., p. 21. [74] Addison, Violent Politics, p. 4. [75] Ascher, The Revolution of 1905, p. 319. [76] Sheila Fitzpatrick, The Russian Revolution (Oxford, 2008), p. 76. [77] Yanni Kotsonis, “Introduction: A Modern Paradox - Subject and Citizen in Nineteenth- and Twentieth-Century Russia”, in Russian Modernity: Politics, Knowledge, Practices, ed. by David Hoffmann (Basingstoke: Macmillan Press Ltd., 2000), pp. 6-8. [78] Hoffmann, Cultivating the Masses, pp. 255-256. [79] Ibid., p. 240. [80] Peter Holquist, “Revolutionary State Practices and Politics”, in Russian Modernity: Politics, Knowledge, Practices, ed. by David Hoffmann (Basingstoke: Macmillan Press Ltd., 2000), p. 91. [81] Roger Pethybridge, The Social Prelude to Stalinism (London: Macmillan, 1974), p. 123. [82] Baberowski, Scorched Earth, p. 3. [83] Rosa Luxemburg, Rosa Luxemburg Speaks, ed. by Mary-Alice Waters (New York: Pathfinder Press, 1970), pp. 394-95. [84] Steinberg, The Russian Revolution, p. 322. [85] V.I. Lenin, “The War and Russian Social Democracy”, in Collected Works (Moscow: Progress Publishers, 1964). [86] V.I. Lenin, Polnoe Sobranie Sochinenii, izd. 5, tom 37 (March 1919-July 1919), p. 89. [87] Cobb, People’s Armies, p. 2. [88] Ryan, “The Sacralization of Violence”, p. 815. [89] “Sixth Congress”, in 1903: Second Congress of the Russian Social-Democratic Labour Party, trans. by Brian Pearce (London: New Park Publications, 1978). [90] Fitzpatrick, The Russian Revolution, p. 12. [91] Hoffman, Inside Terrorism, p. 4. [92] Orlando Figes, Revolutionary Russia, 1891-1991 (London: Pelican, 2014), p. 167. [93] Robert C. Tucker, “Stalinism as Revolution from Above”, in Stalinism: Essays in Historical Interpretation, ed. by Robert C.
Tucker (New York: W.W. Norton & Co., 1998), p. 92. [94] Hoffmann, Cultivating the Masses, p. 264. [95] Mayer, The Furies, p. 14. [96] Arendt, On Violence, p. 55. [97] Fitzpatrick, The Russian Revolution, p. 12. [98] Norman M. Naimark, “Terrorism and the Fall of Imperial Russia”, Terrorism and Political Violence 2, no. 2 (1990), p. 189.
- How helpful is Foucault's history of sexuality for understanding homoeroticism in Classical Greece?
Foucault’s volumes on the history of sexuality have been immensely influential in modern understandings of sexuality in the ancient world. Greek ideas of homosexuality continue to be at the core of several modern legal and moral debates on the rights of sexual minorities, as evidenced by Romer v. Evans (1996).[1] Foucault’s argument, clearly influenced by French Existentialism, that sexuality did not exist as such until discourses on sexuality appeared in the eighteenth and nineteenth centuries challenges the idea that sexual identities are essential and natural across time. Instead of a Classical ‘homosexuality’, Foucault posits the idea of the desiring subject and a concept of homoerotic behaviour based on pederasty and penetration. However, Foucault’s work presents a model of Greek homoeroticism that is far more limited than our surviving sources allow for. His understanding of pederasty is too narrow and his focus on penetration too emphatic. Indeed, Foucault’s promotion of the so-called ‘penetration model’ fails to encompass other important elements, such as the legal, political, social, and cultural norms within which homoeroticism occurred. First, Foucault argues that a Classical Greek had no concept of sexuality. A man who had sex with men would not ‘feel homosexual’.[2] Foucault’s position was thus one of constructionism: same-sex desire in Classical Greece was so unlike that of the modern age that it is not part of the same historical continuum. Nonetheless, when faced with the need to explain this absence of an essentialist history of homosexuality, Foucault was confronted with an abundance of evidence of male homoeroticism from Classical Athens. He thus posits sexual identity in the ancient world as the expression of desire by an (always male) subject.
This desiring subject is exemplified for Foucault by the erastes, a Greek adult man who would engage with a younger male, the eromenos, in pederasty, a relationship that might have educational, romantic, sexual, and mutually beneficial elements. Foucault believed that the eromenos was perceived as the passive partner, which the Athenians problematised because they did not wish a passive partner who had been dominated to grow into an active participant in the city, as evidenced by Foucault’s interpretation of Aeschines 1. Foucault argued that there “was a reluctance to evoke directly and in so many words the role of the boy in sexual intercourse”.[3] Pederastic relationships thus prescribed behaviours for each participant: the desiring subject, and the boy, the object, who could not actively identify with his part and so was expected to refuse, resist, or flee (Plato, Symposium 184a).[4] Foucault’s interpretation of Classical Greek homoeroticism is therefore one of binaries: active/passive; subject/object. It is thus an inherently narrow view, and one too accepting of the objectification of living beings. A red-figure kylix of c.500 BCE by the artist ‘Peithinos’ (Persuasion/Seducer), found in Vulci, Etruria (now in Berlin), might show a more multifaceted view of Classical eroticism. It includes heterosexual courtship, which Foucault altogether ignores, and shows cadets engaged with male striplings in various stages of erotic pursuit and entrapment.[5] The cup itself contained several desiring subjects but may also have been used to invoke desire at a symposium. Foucault’s interest in, and emphasis on, power dynamics may also have blinded him to how those in ‘passive’ roles could themselves have been desiring subjects who enjoyed sexual pleasure in their parts. There appears to be more ancient evidence for an essential form of sexual identity than Foucault accepts.
In the speech of the comic Aristophanes in Plato’s Symposium (189c-193e), the speaker recounts a myth of human creation that explains human desire in terms that would define sexuality as natural and essentialist. Plato’s Aristophanes recounts how the whole human being was once “round in form, with its back and sides in a circle, with four arms, an equal number of legs, and two faces” (189e). These creatures were male, female, or androgyne. Zeus decided to split them in half, allowing them sexual desire so that they would not die out altogether (191c6-8). Those who were halves of male or female wholes seek partners of the same sex, while those of androgyne origin seek partners of the other sex (191d3-192a2). Although the myth seemingly supports the timeless existence of heterosexuality, bisexuality, and homosexuality, the story has rather been read as comic satire, appropriately voiced by the acclaimed comedian Aristophanes. Seemingly ignoring the passage’s apparent essentialism, Foucault focuses on the question of consent, claiming “Aristophanes gives an answer that is direct, simple, and entirely affirmative, and he thereby abolishes the game of dissymmetries that structured the complex relations between man and boy.”[6] Foucault, however, fails to see Aristophanes’ mythos as an elaborate joke and as part of Plato’s ordering of popular understandings of sex and gender.[7] And yet even if Plato is playing on this view and satirising it, does it not indicate that the view was at least comprehensible, and potentially prevalent, in contemporary Athenian society?
Moreover, Foucault asserts that central to Greek homoeroticism was a discourse of domination and penetration, which he labelled “quite disgusting.”[8] He argued that “sexual relations – always conceived in terms of the model act of penetration, assuming a polarity that opposed activity and passivity – were seen as being of the same type as the relationship between a superior and a subordinate, an individual who dominates and one who is dominated.”[9] However, Foucault’s support of the ‘penetration model’ has been challenged.[10] Foucault understood the Athenian law barring any man who had prostituted himself from holding a magistracy or from acting as a herald, prosecutor, or slanderer as reflecting an Athenian distaste for being ruled over by any man who had been anally penetrated, even if he was “the most eloquent orator in Athens” (Aesch. 1.19-20). Timarchus, alleged Aeschines in 346/5 BCE, was such a disgraceful man. However, it was not relevant to the citizens of Athens whether Timarchus had been penetrated by another man. What Aeschines is at pains to stress is that Timarchus is notorious, “a man not unknown to you” (Aesch. 1.43): a man of sexual excess, gluttony, and corruptibility, and so totally unsuited to being active in political life in any way. Ancient Greeks viewed character as immutable and consistent. They could easily have believed that one who could sell his body for money might also be tempted by bribery and avarice to sacrifice the interests of the state. The Athenian jurors may also have been influenced by political motivations in condemning Timarchus, such as knowledge of his opposition to the Peace of Philocrates. Nevertheless, there may have been some who found Timarchus’s sexual history highly distasteful and even offensive. And yet, that Timarchus was penetrated is not the charge Aeschines levies at him.
Indeed, Aeschines makes no distinction between sodomy and intercrural sex and gives no details of sexual acts,[11] problematising, then, Timarchus’s character rather than any given sex acts or ‘sexuality’. That the Classical Greeks problematised penetration less than Foucault suggests is further demonstrated by Lysias’ speech Against Simon, in which the speaker defends himself against a charge of premeditated wounding brought by Simon. The speaker notes that he and Simon “were both attracted, members of the Council, to Theodotus… I expected to win him over by treating him well, but Simon thought that by behaving arrogantly and lawlessly he would force him to do what he wanted.” (Lys. 3.5). Thus, what is at issue is not that Theodotus, who is described positively even though he is probably an enslaved person, as indicated by the mention that he would have to be tortured to testify (Lys. 3.33), would have been penetrated by either the speaker or by Simon. The problem is Simon’s alleged transgression of the legal and cultural norms of Athens in stealing Theodotus (the speaker’s property) to court him improperly. Neither Aeschines nor Lysias problematises penetration; this is the preoccupation of the Foucauldian, not the Classical Greek. There were indeed problematic ideas and conceptions relating to homoeroticism, but these should be viewed within the cultural confines of Classical Athens. For example, Aristophanes demonstrates how it is not being ‘passive’ in sex that is generally a problem for Athenians, but a fear that one is corruptible and might take money in exchange for sex. In referring to “yawning-arsed Ionians”, Aristophanes is noting and condemning how easily they are corrupted by Persian gold (Ar. Ach. 106-7).[12] The centrality of penetration in Greek homoeroticism must therefore be seriously questioned, especially when anal penetration is rarely mentioned in the literature and never depicted on Classical vases.
In addition, Foucault’s categories of ‘subject’ and ‘object’, and their associated cultural roles and meanings, are too regimented. Poster acknowledged that “Foucault assumes that a sexual relation in which one partner is required exclusively to play an active role and the other partner exclusively to play a passive role is possible, as if the fact of “activity” and “passivity” were not ambiguous from the start.”[13] Not all homoerotic relationships within Athens fit into a strict Foucauldian model of pederasty and penetration. For example, the notoriously beautiful but traitorous Alcibiades noisily gate-crashes Plato’s Symposium (212) and proceeds to claim that it was he who had tried, unsuccessfully, to seduce Socrates. This effectively inverts the roles of the erastes and the eromenos, and of who pursues whom. Socrates’ moderation and self-control are also demonstrated here (217a ff.). However, Plato sets his famous text precisely in 416 BCE, as the symposiasts are celebrating the first victory of Agathon at the Lenaia. Alcibiades is about 34 by this time, certainly a full-grown man. This instance, even if exaggerated to show Socrates’ self-restraint, suggests that the model of pederasty was not as rigid as Foucault would argue. Additionally, homoeroticism was no more monolithic outside Athens. Cartledge argues that Spartan pederasty was institutionalised, especially within the agoge system. Plutarch records that “erastai began to frequent the company of those of the reputable boys who had reached” twelve (Plu. Lyc. 17). In Sparta some elements of homoeroticism, such as the cult of the nude male body, seem to have been taken to further extremes, particularly in the gymnasium.
Thucydides (1.6.5) notes, for instance, that it was the Spartans who created the custom of exercising fully nude and rubbing down with oil.[14] By contrast, Link has argued that pederasty was publicly institutionalised in Crete, but certainly not in Spartan education.[15] In Sparta, it appears that to be cast in the passive role was not as problematised as Foucault would argue it was in Athens. Indeed, pederasty could have acted in Sparta as a means of recruiting the political elite,[16] which would not have had the same context in democratic Athens. For example, Xenophon tells us that when the Spartiate Sphodrias was arraigned on a capital charge, he was acquitted by King Agesilaos (Hell. 5.4.20-33). The sons of the accused and the king were involved in a pederastic relationship. Spartan pederasty could therefore also be politicised in a very different context than in Athens. Foucault’s model, it would seem, fails to allow for such variation. Thus, in his attempt to define sexuality and homosexuality as not universalisable, as a constructionist, existentialist phenomenon created out of discourse, Foucault ends up with a model of homoeroticism that is too narrow and monolithic. He fails to account for significant geographical and temporal disparities in treating Classical Greece as a single moment in history that expresses one behavioural model. His work does not give scholars the tools to understand the complexities and varieties of desire and human relationships in Classical Greece or the modern world. Jessica Hoar is currently in her 3rd year of a BA in Classical Archaeology and Ancient History at the University of Oxford (Lincoln College) Notes: [1] J. Davidson, ‘Dover, Foucault and Greek homosexuality’, Past and Present, Vol. 170 (2001), p. 5. [2] Ibid., p. 35. [3] M. Foucault, The History of Sexuality. Vol. 2: The Use of Pleasure (New York, 1984), pp. 223-4. [4] Ibid., p. 224. [5] J.
Davidson, The Greeks & Greek Love: A Radical Reappraisal of Homosexuality in Ancient Greece (London, 2007), pp. 428-436. [6] Foucault, Sexuality, p. 233. [7] J.S. Carnes, ‘This myth which is not one: construction of discourse in Plato’s Symposium’ in D.H. J. Larmour (eds.), Rethinking Sexuality: Foucault and Classical Antiquity (Princeton, 1998), pp. 106-7. [8] P. Rabinow, The Foucault Reader (Harmondsworth, 1986), p. 346. [9] Foucault, Sexuality, p. 215. [10] See, for example: Davidson, The Greeks & Greek Love. [11] Davidson, The Greeks & Greek Love, p. 19. [12] Ibid., p. 21. [13] Poster, Foucault, p. 213. [14] P. Cartledge, ‘The politics of Spartan pederasty’, Proceedings of the Cambridge Philological Society, No. 27 (1981), p. 27. [15] S. Link, ‘Education and pederasty in Spartan and Cretan society’ in S. Hodkinson (ed.), Sparta: Comparative Approaches (Swansea, 2009) p. 92. [16] Cartledge, ‘Spartan pederasty’, p. 28.
- To what extent can Stalin’s policy of industrialisation be considered a success?
The implementation of Stalinist industrialisation between 1928 and 1941 transformed the Soviet economy into a modern economic powerhouse, enabling victory over Nazi Germany[1] and contributing to the emergence of the Soviet Union as a superpower in the ensuing Cold War.[2] Nevertheless, this paper argues that any objective industrial successes are marred, and therefore limited, by the malevolence and inefficacy of Stalin’s agricultural policy. In essence, the extent to which Stalin’s policy of industrialisation can be considered a success is severely constrained by the failures of collectivisation and dekulakisation, and by the subsequent famine of 1932-33. Firstly, this paper will provide an economic assessment of Stalinist industrialisation,[3] vis-à-vis its contextual motivations, to demonstrate the objective success of industrialisation, namely the transformation of Soviet Russia from an agrarian to an industrial economy, through quantitative analysis of economic data. However, an evaluation of the human cost and inefficacy of Stalin’s agricultural policy will evidence the limits to the extent to which Stalinist industrialisation can be considered a success. In doing so, it will evince the argument that any successes of Soviet industrialisation are inhibited by the impotence and malice of collectivisation and the resultant famine.
In 1928, Stalin broke away from Lenin’s New Economic Policy (NEP) with the ‘Great Turn’, both to transform, modernise, and industrialise the Soviet economy, and to consolidate his power within the Politburo against that of Nikolai Bukharin and the ‘Right’ Bolsheviks.[4] Stalin viewed his industrialisation policy as a “decisive advance” in “leaving behind Russian [socioeconomic] backwardness”.[5] Pertinently, Stalin’s notion of ‘socialism in one country’ assumed the probability of war with the capitalist world to be extremely high, to the point of inevitability, and thus great emphasis was placed on the need to develop military industry in preparation for such a conflict.[6] In essence, Soviet Russia was ill-equipped to defend itself so long as it operated an agrarian, ‘backwards’ economy. The resolution of Stalin’s first five-year plan, approved by the 15th Congress of the Bolsheviks in 1927, best summarises this fundamental motivation of industrialisation: “In view of a possible military attack by capitalist states against the proletarian state, the Five-Year Plan should devote maximum attention to the fastest possible development of those sectors of the economy...which play the main role in securing the country’s defence and in providing economic stability in war time”.[7] A quantitative analysis of the Soviet economy between 1928 and 1941, vis-à-vis the economic objective of Stalinist industrialisation, is now requisite to demonstrate its overarching success. This paper holds the urbanisation of the Soviet population, the increase in non-agricultural employment, and the increase in industrial production and investment to be key indicators of the success of Stalinist industrialisation.
As argued by Wheatcroft et al., “the pace of Soviet industrialisation was strikingly reflected in the rate of urbanisation”.[8] The urban population of the Soviet Union increased from an estimated 26.3 million to 55.9 million between 1926 and 1939, according to demographer Frank Lorimer, a rise from 17.9 to 32.8 percent of the total population.[9] The effects of industrialisation and urbanisation can also be evidenced through Soviet employment rates between 1926 and 1939. In 1926, total non-agricultural employment was 11.6 million, 6.4 million of which was in industry, construction, and infrastructure.[10] By 1939, over 39.3 million were in non-agricultural employment, an increase of 239 percent in just thirteen years, with 23.7 million employed in industry, construction, and infrastructure.[11] Simultaneously, agricultural employment rapidly decreased. In 1926, total Soviet agricultural employment stood at 71.7 million; by 1939, this figure had declined to 47.7 million.[12] On the surface, this alone does not demonstrate the transformation of Soviet Russia from an agrarian to an industrial society – by 1939, agriculture remained the most employed sector by some 8.4 million. However, the very essence of such agricultural employment had been dramatically centralised and industrialised through collectivisation. Wheatcroft et al. posit that “the nature of agricultural employment was transformed with the replacement of 20 million individual peasant family households by primarily collective employment on 4,000 [sovkhozes] and over 200,000 [kolkhozes]”. Collective farming benefited substantially from new farm machinery, such as tractors,[13] a consequence of the increased industrial investment and output enjoyed during Stalin’s initial five-year plans.[14] The rapid expansion of the industrial labour force was assisted by an “astonishing expansion in industrial investment”,[15] and thus of output.
Gross investment increased from 8.4 percent of gross national product (GNP) in 1928 to 21.1 percent in 1937.[16] Such an increase in industrial investment, as a proportion of GNP, was greater than that of the United States and other industrialised nations.[17] The increase in investment translated into impressive growth of capital stock. Western estimates in 1941 identified that the “net fixed capital stock in [non-agricultural] sectors had reached 653 percent” of the 1928 level.[18] Growth in capital stock enabled fixed production capital to rise 411 percent between 1928 and 1935.[19] Increased industrial investment, output, and capital stock not only produced pronounced real GNP growth but also facilitated a significant absorption of resources for military proliferation in preparation for war.[20] For example, armaments employment in 1930 had doubled compared to 1913;[21] by 1932, this figure was fourfold the 1913 level.[22] Investment in the armaments industry by 1931 was 113 percent higher than in 1930.[23] Most tellingly, national economy allocations to armaments increased from 76 million rubles in 1928 to 803 million rubles in 1933.[24] Such a drastic increase in the funds available for armaments and defence preparation demonstrates the success of industrialisation in fulfilling the need to develop those sectors which “play the main role in securing the country’s defence”.[25] Quantitatively, the objective success of industrialisation in dramatically increasing industrial investment, output, and the urbanisation of the Soviet labour force, in order to transform away from an agrarian economy and militarise, has been evidenced. However, as an analysis of Stalin’s agricultural policy between 1928 and 1941 will evince, the notion that Stalinist industrialisation was “an enormous achievement”[26] is marred by waste, inefficacy, and malicious repression.
Whilst industrial developments provided successes for Stalin, agriculture “was dominated by crisis and disaster”.[27] During industrialisation, total agricultural production declined significantly, as did the standard of living of the Soviet peasantry. The effects of collectivisation[28] and dekulakisation best evidence the limits to the extent to which Stalinist industrialisation can be considered a success. Collectivisation commenced in 1929, following Stalin’s ‘Great Turn’ speech, and is argued to have been “overwhelmingly...erratic”.[29] For Stalin, collectivisation was deemed essential to confiscate the “agricultural surplus to subsidise industrialisation” and urbanise the labour force.[30] The Soviet elite forced the peasantry, through ‘price scissors’, to sell its agricultural output to the state at below-market prices, so that the state could sell the grain to industrial workers at higher prices and export grain to fund imports of industrial capital. However, such exacted state procurement of grain precipitated an “unmitigated economic disaster”,[31] having the unintended consequence of destroying the agricultural surplus and depressing agricultural output. Alec Nove argues that the policy of collectivisation ‘demoralised the peasantry and rendered collective farming inefficient’.[32] Such an assessment is corroborated by the key factors that negatively affected agricultural production, identified by Cheremukhin et al. Firstly, the “state extraction of grain” impeded agricultural production twofold:[33] it demoralised the peasantry, and, through a lack of grain fodder, it engendered a dramatic fall in livestock.
Moshe Lewin identifies that this demoralisation was, in part, attributable to the decreased living standards experienced by the peasants, who no longer possessed autonomy on their farms but rather inhabited zemlianki – makeshift huts dug into the ground.[34][i] Ultimately, the dekulakisation campaign of 1929-1933 saw over 5 million peasants exiled or executed[35] in Stalin’s malevolent attempt to demoralise the peasantry to the point of being incapable of resisting collectivisation. Demoralising the peasantry and requisitioning grain led to a significant decline in technical ability, livestock levels, and agricultural output, for the kulaks represented the most successful and productive of the peasantry. For example, in 1931, agricultural production was 27 percent lower than the peak of 1928, and 18 percent below the prerevolutionary average.[36] Furthermore, much livestock was slaughtered by peasants upon joining the kolkhoz, and decreased levels of grain due to requisitioning produced fodder shortages.[37] Thus, in 1933 there were 33 percent fewer sheep, 50 percent fewer horses, and 54 percent fewer cattle than in 1928.[38] As such, over a quarter of all Soviet agricultural capital was destroyed by ineffective agricultural policies.[39] This serves to evidence the inefficacy, futility, and ignorance of Stalinist agricultural policy. The fundamental limitation to the success of Stalinist industrialisation is the famine of 1932-33[40] – a direct consequence of collectivisation and dekulakisation. The famine killed an estimated seven million people.[41] Historians have argued that Stalin “was certainly more concerned with the fate of industrialisation than the lives of the peasantry”, to the point of believing the famine was self-inflicted by the peasants.[42] The inverse was true. Forced collectivisation and dekulakisation severely disrupted agricultural productivity, as aforementioned, laying the foundations for the famine to emerge.
However, the state’s subsequent grain requisition,[43] coupled with grain exports to fund industrialisation, intentionally exacerbated the famine. Ellman argues that the 1.8 million tonnes of grain exported in 1932-33 would have been enough to sustain over 5 million people for one year.[44] Moreover, Stalin’s dekulakisation waged war on the autonomous peasantry, whom he held to be either ‘class enemies’, ‘idlers’, or ‘thieves’.[45] In February 1933, Stalin, echoing Lenin, declared “he who does not work, neither shall he eat”.[46] In essence, those peasants not farming collectively were anti-Soviet and thus needed eradicating.[47] Ellman holds this notion of ‘starvation as policy’ to remove anti-Soviet elements, implicitly or otherwise, to be the official Soviet position during the famine. There is historiographical consensus that when Stalin conceptualised collectivisation, the policy did not include an intent to exact a starvation policy on the peasantry to remove anti-Soviet elements.[48] Intentional or not, Stalin’s exaction of a ‘starvation policy’ during the famine to eliminate anti-Soviet elements and continue the industrialisation drive evinces his malevolence, greatly limiting the aforementioned successes of industrialisation. Under dekulakisation, over 1.8 million peasants were deported between 1929 and 1933 to Kazakhstan and West Siberia.[49] The cost of such deportations was estimated at 1.4 billion rubles.[50] Thus, starvation became the most attractive alternative to deportation. Stalin’s rejection of foreign support that could have lessened the famine’s impact,[51] coupled with the exacerbatory acts of requisitioning and exporting grain, evinces his malevolent commitment to a ‘starvation policy’ to enforce dekulakisation.
As Ellman identified, the famine could have been mitigated, and its existence thus constitutes a major failure of Stalinist industrialisation.[52] In conclusion, Stalinist industrialisation has been demonstrated to be objectively successful in rapidly transitioning Soviet Russia into an industrial economy, enabling rearmament, as intended by the resolution of the 15th Bolshevik Congress in 1927. Through urbanisation and industrial investment, productivity and output rapidly increased throughout the Soviet Union, allowing for the reallocation of resources into the defence industry. However, the disastrous human impact of collectivisation and dekulakisation demonstrates the calculated malice of Stalin in repressing the peasantry, at the expense of agricultural productivity, which served to exacerbate the famine of 1932-33. Therefore, the extent to which Stalin’s policy of industrialisation can be considered a success is limited. Will Kingston-Cox is currently in his 3rd year of a BA in History and Politics at Warwick University. Notes: [1] Anton Cheremukhin, Mihail Golosov, Sergei Guriev, and Aleh Tsyvinski, ‘Was Stalin Necessary for Russia’s Economic Development?’, NBER Working Paper 19425, National Bureau of Economic Research (2013), p. 1 [2] S.G. Wheatcroft, R.W. Davies, and J.M. Cooper, ‘Soviet Industrialization Reconsidered: Some Preliminary Conclusions about Economic Development between 1926 and 1941’, The Economic History Review, 39(2) (May 1986), p. 264 [3] 1928-1941; Stalin’s initial five-year plans (1. 1928-1932; 2. 1932-37; 3. 1938-41) [4] Cheremukhin, Golosov, Guriev, and Tsyvinski, ‘Was Stalin Necessary’, p. 9 [5] Ibid., p. 26 [6] Michael Ellman, ‘Review: Soviet Industrialization: A Remarkable Success?’ [Review of: Farm to Factory: A Reinterpretation of the Soviet Industrial Revolution by Robert C. Allen], Slavic Review, 63(4) (2004), p. 841 [7] Pyatnadtsatyi s’’ezd VKP(b): Stenograficheskii otchet vol. 2 Moscow: Gos.
Izd-vo politicheskoi litry, (1962): 1442, in Michael Ellman, ‘Russia as a great power: From 1815 to the present day Part 1’, Journal of Institutional Economics (2022), p. 13 [8] S.G. Wheatcroft, R.W. Davies, and J.M. Cooper, ‘Soviet Industrialisation Reconsidered: Some Preliminary Conclusions about Economic Development between 1926 and 1941’, The Economic History Review, 39(2) (May 1986), p. 273 [9] Lorimer, Frank, The Population of the Soviet Union: History and Prospects (Geneva, 1946), p. 147 cited in Wheatcroft, Davies and Cooper, 'Soviet Industrialisation Reconsidered', p. 273 [10] Vsesoyuznaya perepis' naseleniya 1926 goda, vol. xxxiv (Moscow, 1930), pp. 120-42, and Itogi vsesoyuznoi perepisi naseleniya SSSR 1959g., svodnyi tom (Moscow, 1962), p. 110, cited in Wheatcroft, Davies and Cooper, 'Soviet Industrialisation Reconsidered', p. 273 [11] Vsesoyuznaya perepis' naseleniya 1926 goda, vol. xxxiv (Moscow, 1930), pp. 120- 42, and Itogi vsesoyuznoi perepisi naseleniya SSSR 1959g., svodnyi tom (Moscow, 1962), p. 110, in Ibid. p. 273 [12] Vsesoyuznaya perepis' naseleniya 1926 goda, vol. xxxiv (Moscow, 1930), pp. 120- 42, and Itogi vsesoyuznoi perepisi naseleniya SSSR 1959g., svodnyi tom (Moscow, 1962), p. 110, in Ibid. p. 273 [13] Alexander Vucinich, ’The Kolkhoz: Its Social Structure and Development’, The American Slavic and East European Review, 8(1), (1949), p. 11 [14] 1928-1941; Stalin’s initial five-year plans (1. 1928-1932; 2. 1932-37; 3. 1938-41) [15] Lewis Siegelbaum and Ronald Grigor Suny, ’Making the Command Economy: Western Historians on Soviet Industrialization’, International Labor and Working-Class History, 43 (1993), p. 68 [16] Richard Moorsteen and Raymond Powell, The Soviet Capital Stock, 1928-1962 (Homewood: Illinois, 1966), p. 364 cited in Wheatcroft, Davies, and Cooper, ‘Soviet Industrialisation Reconsidered', p. 274 [17] Moorsteen and Powell, The Soviet Capital Stock, p. 182 and pp. 
339-340, in Wheatcroft, Davies, and Cooper, ‘Soviet Industrialisation Reconsidered', p. 274 [18] Moorsteen and Powell, The Soviet Capital Stock, pp. 348-349, in Wheatcroft, Davies, and Cooper, ‘Soviet Industrialisation Reconsidered', p. 276 [19] Estimated from Vsesoyuznaya perepis' naseleniya 1926 goda, vol. xxxiv (Moscow, 1930), pp. 120- 42, and Itogi vsesoyuznoi perepisi naseleniya SSSR 1959g., svodnyi tom (Moscow, 1962), p. 110 cited in Wheatcroft, Davies, and Cooper, ‘Soviet Industrialisation Reconsidered', p. 273 [20] Cheremukhin, Golosov, Guriev and Tsyvinski, ‘Was Stalin Necessary’, p. 19 [21] R.W. Davies, ’Soviet Military Expenditure and the Armaments Industry, 1929-33: A Reconsideration’, Europe-Asia Studies, 45(4) (1993), p. 590 [22] Ibid. [23] Ibid., p. 584 [24] Ibid., p. 582 [25] See Footnote 10 [26] See dostizhenie in Lewis Siegelbaum and Ronald Grigor Suny, ’Making the Command Economy: Western Historians on Soviet Industrialisation’, International Labor and Working-Class History, 43, (1993), p. 65 [27] Wheatcroft, Davies, and Cooper, ‘Soviet Industrialisation Reconsidered', p. 280 [28] Here, the policy of ‘price scissors’ best demonstrates the futility of collectivisation [29] Cheremukhin, Golosov, Guriev and Tsyvinski, ‘Was Stalin Necessary’, p. 26 [30] Ibid., p. 9 [31] James R. Millar, ’Mass Collectivisation and the Contribution of Soviet Agriculture to the First Five-Year Plan: A Review Article’, Slavic Review, 33(4), (1974), p. 764 [32] Alec Nove, An Economic History of USSR 1917-1991, 3rd Ed. (Penguin: New York, 1992), p. 176 in Cheremukhin, Golosov, Guriev and Tsyvinski, ‘Was Stalin Necessary’, p. 27 [33] Cheremukhin, Golosov, Guriev and Tsyvinski, ‘Was Stalin Necessary’, p. 26 [34] Moshe Lewin, The Making of the Soviet System: Essays in the Social History of Interwar Russia (New York: New Press, 1985), p. 257 in Siegelbaum and Suny, ’Making the Command Economy', p.
68 [35] Nicolas Werth, ’Dekulakisation as mass violence’, Mass Violence and Resistance – Research Network, (2011) https://www.sciencespo.fr/mass-violence-war-massacre-resistance/en/document/dekulakisation-mass-violence.html (last accessed 25th March 2023) [36] Wheatcroft, Davies, and Cooper, ‘Soviet Industrialisation Reconsidered', p. 284 [37] Ibid. [38] Ibid. [39] Ibid. [40] Whilst there is intense scholarly debate as to whether the famine of 1932-33 (Holodomor) constitutes a genocide against the Ukrainian people, this paper does not seek to pass judgement on that question; for its purposes, only the human cost of the famine, consequential of Stalin’s policies, is assessed [41] Andrei Markevich, Natalya Naumenko, and Nancy Qian, ’The Causes of Ukrainian Famine Mortality, 1932-33', NBER Working Paper 29089, (2021), p. 1 [42] R.W. Davies, and Stephen G. Wheatcroft, ’Stalin and the Soviet Famine of 1932-33: A Reply to Ellman’, Europe-Asia Studies, 58(4), (2006), p. 628 cited in Michael Ellman, ’Stalin and the Soviet Famine of 1932-33 Revisited’, Europe-Asia Studies, 59, (2007), p. 664. [43] R.W. Davies, and Stephen G. Wheatcroft, The Years of Hunger: Soviet Agriculture 1931-1933, (Basingstoke: Palgrave Macmillan, 2010), p. 476. [44] Ellman, ’Stalin and the Soviet Famine', p. 679. [45] Hiroaki Kuromiya, ’The Soviet Famine of 1932-1933 Reconsidered’, Europe-Asia Studies, 60(4), (2008), p. 665. [46] Ellman, ’Stalin and the Soviet Famine', p. 665. [47] Ibid. [48] Ibid. [49] Werth, ’Dekulakisation as mass violence' [50] Ellman, ’Stalin and the Soviet Famine', p. 666 [51] Ibid., p. 673 [52] See footnote 44 [i] Although only a minority of the peasantry lived like this, the use of zemlianki highlights the decline in the peasantry's standard of living
- Words That Bent Space: János Bolyai And His Failed Epistolary Exchange With Carl Friedrich Gauss
“I cannot say more, only that from nothing I have created a new, different world.” János Bolyai, letter to Farkas Bolyai describing his discovery of non-Euclidean geometry, 3 November 1823[1]

Of all the enduring mysteries of life, the nature of space stands pre-eminent. To know whether our cosmos extends infinitely devoid of curvature, or whether it unfurls to the gentle arcature of a saddle, might appear an esoteric indulgence, but humanity has long demonstrated a profound thirst for resolving this mystery. Sadly, our pursuit of knowledge has often been hindered by failures of communication among scientists. This essay examines a particularly lamentable example of such a failure. Exploring the limited interactions between János Bolyai, a brilliant 19th-century Hungarian mathematician, and the renowned Carl Friedrich Gauss, this essay will demonstrate how a matter as simple as a letter that its recipient found offensive was enough to rob the world of thousands of pages of illustrious mathematical thinking, stalling progress in geometry and number theory for decades if not longer. This essay will also address the historical context of their interactions, the importance of clear communication in the sciences, and the broader consequences of their failed epistolary exchange, drawing parallels with the paradigm-shifting communications between Werner Heisenberg, Wolfgang Pauli, and Niels Bohr only a century later. In doing so, the essay will highlight how essential the sharing of ideas and findings is to the collective growth of scientific understanding. An Epistolary Failure of Great Consequence From Sumerians with their primitive astrolabes[2] to the sophisticated European Euclid space mission of 2023, humanity has spared little effort in its endeavor to understand the nature of space.
The discovery of non-Euclidean geometry, the study of curved surfaces, is widely regarded as one of humankind’s greatest mathematical feats,[3] not least because of the immense implications it has had for our understanding of space. During the first half of the 19th century, three mathematicians, Nikolai Ivanovich Lobachevsky, Carl Friedrich Gauss and János Bolyai, would independently develop mathematical frameworks for non-Euclidean geometry. While all three would be connected through mutual friendships and acquaintances,[4] fate conspired to prevent any direct communication about their findings. To make matters worse, the communications that did occur between them would serve to hinder the progress of science. János Bolyai would be the first to write down his framework in 1823.[5] He would publish it, with the help of his father, a decade later. Though connected to Bolyai through mutual acquaintances, Lobachevsky would never learn of Bolyai’s work before publishing his own framework several years later.[6] Gauss, the most eminent figure of the three,[7] would receive a copy of Bolyai’s findings, but he himself never published on the topic. Through his response to the copy he received, Gauss would, however, offend Bolyai[8] to such a degree that he would never publish a single page again. As a result, the communication failures of Bolyai, Gauss and Lobachevsky would not only stall progress in non-Euclidean geometry but also deprive the world of an entire corpus of deeply inspired mathematical writing, which Bolyai would decide to hide from the world. János Bolyai: A Demon Sprung Upon the Field of Mathematics and Geometry János Bolyai was born on 15th December 1802 in Hungary to Farkas Bolyai, mathematician and professor of philosophy at the College of Kolozsvár, and his wife Zsuzsanna Benkö. The younger Bolyai would prove to be a mathematical genius par excellence. 
In addition, he demonstrated remarkable talent in a wide range of pursuits, from playing the violin to the martial art of dueling, which he perfected during his lengthy military career.[9] His most enduring legacy was to be in mathematics, a discipline to which he ‘sprang like a demon’,[10] as his father attested. Bolyai’s independent discovery of non-Euclidean geometry at the tender age of 21[11] was a feat of extraordinary magnitude, not least because it resolved one of the longest-standing mathematical problems concerning Euclid’s axioms, which we will discuss below. Bolyai would present his findings in a 26-page appendix to his father’s mathematics textbook, Tentamen,[12] in 1832. From there, he would continue to produce more than 10,000 pages of mathematical manuscripts that spanned algebra, number theory,[13] and the elucidation of Fermat's theorem on primes,[14] to name a few. None of these following pages would be published.[15] In fact, little would be known of how far ahead of the rest of the scientific community he was without the work of Paul Stäckel at the turn of the 20th century[16] and the effort of modern mathematical historians who have explored Bolyai’s unpublished corpus over the past decades.[17] To give a sense of the magnitude of the loss Bolyai’s withdrawal caused, consider the fact that his appendix to Tentamen would be cited as the “most glorious 26 pages of Hungarian science.”[18] What drove Bolyai to reject the scientific community is all contained in a solitary letter to Bolyai senior from Gauss. Before we explore the exchange in more detail, it behooves us to examine the scientific milieu in which Gauss and Bolyai toiled. A Flat World Rendered Asunder From 300 BC to the Enlightenment, Euclid’s Elements[19] and the 13 books it comprises were the defining treatise of mathematics and plane geometry. 
In Elements, Euclid lays the foundations for the entire field of geometry through five axioms[20] from which every other feature, notion and geometrical outcome would be derived. Of these axioms, it is the fifth and final one, the parallel postulate,[21] that Bolyai would ultimately dismantle in a revolutionary stroke of genius.[22] Generations of mathematicians before Bolyai, such as Girolamo Saccheri and Johann Heinrich Lambert, had attempted to formally prove Euclid’s fifth axiom without success.[23] What Bolyai demonstrated was that the parallel postulate was not a strict necessity for creating a consistent mathematical framework of geometry and space.[24] Instead, a consistent geometry in which the parallel postulate no longer holds is possible, as long as we boldly dismiss Euclid’s fifth axiom entirely. In 1823, Bolyai stood out as the solitary mathematician who had recorded his attempts at disproving the parallel postulate. In doing so he created a mathematical framework for future generations of geometers, physicists and mathematicians to explore the very nature of space, gravity and much more. Bolyai himself seemed aware of the significance of his findings when in one autumn letter he wrote to his father that “I cannot say more, only that from nothing I have created a new different world”.[25] The Words That Would Bend Space Bolyai senior was a close acquaintance of Gauss.[26] Upon the publication of the 1832 version of Tentamen that included his son’s appendix on non-Euclidean geometry, Bolyai senior dispatched a copy to Gauss in a bid for his reaction. Unbeknownst to the Bolyais, Gauss had also begun to dream of a new and different world beyond Euclid’s fifth axiom. 
On 6 March 1832, Gauss responded to Bolyai senior with a letter that discussed János Bolyai’s work as follows: “If I start by saying ‘I cannot praise it’ then you will most likely be taken aback; but I cannot do otherwise; to praise it would be to praise myself; the entire contents of the work, the path that your son has taken and the results to which it leads, are almost perfectly in agreement with my own meditations, some going back 30–35 years. In truth I am astonished.”[27] The response thoroughly dejected János Bolyai,[28] who would never publish again. Communication in science presents two faces: one oriented towards the external world and its laymen, the other staring deep into the eyes of contemporary, past, and future peers. That scientists succeed in the latter is an existential concern to all of us. In fact, communication between scientists is in itself progress in science. The ways in which the acts of communication can happen are myriad, ranging from letters to colloquia, conferences and in-person cooperation. What matters most is that the communication happens. Ideally, it would happen in a timely, open and respectful manner so that scientists can effectively build a shared knowledge base to leverage in their work.[29] Consider for a moment the alternative: a world where scientists toil away on their lonesome, learning of each other’s work and discoveries only by accident, if at all. Such would be a world of persistent stagnation punctuated only by sudden saltations of scientific development whenever a once-in-a-generation genius was lucky enough to push our epistemological boundaries; a world which János Bolyai, Lobachevsky and Gauss inhabited when working on their frameworks for non-Euclidean geometry. An Example of Communications and Collaboration as a Driver of Scientific Progress A bevy of prior literature has examined the intersection of communications, science and history. 
From examining the birth of the scientific article in the 17th century[30] to exploring Newton’s exchange of letters with his peers,[31] the historical discourse shows that communication between scientists has often been a critical driving force of scientific progress. One of the greatest examples of the transformative power of open communication and collaboration between scientists is the exchanges between Werner Heisenberg, Wolfgang Pauli, and Niels Bohr,[32] who would come to build the entire field of modern quantum mechanics. Heisenberg, born in 1901, would mirror Bolyai’s precocious productiveness by independently creating the foundations for modern quantum mechanics by the age of 26.[33] Heisenberg's renowned uncertainty principle[34] emerged from his work to formulate a precise mathematical framework to represent the quantum states and energies of electrons. However, it would be misguided to represent the accomplishment as being Heisenberg’s alone. Shortly after having made his initial discovery in February 1927, Heisenberg wrote a letter to Wolfgang Pauli,[35] who was a distinguished professor of physics at the University of Hamburg at the time. In stark contrast to what occurred between Gauss and Bolyai, Pauli not only welcomed Heisenberg’s results but also worked closely with him[36] to present the world with Heisenberg’s work through an article that was published in March 1927.[37] Heisenberg also collaborated closely with Bohr, who was his senior by more than a decade and widely considered a leading figure in their field.[38] Where the letters between Heisenberg and Pauli were mostly technical in their orientation and at times brutal in their criticism,[39] the exchanges between Heisenberg and Bohr were markedly more philosophical and conceptual in nature.[40] The trio’s exchange of letters and ideas continued for decades, resulting in more than 800 pages of writing[41] between Pauli and Heisenberg alone. 
The collaboration between these three scientists would prove instrumental. Without the mentorship, guidance and constructive critique of his elder peers, it seems unlikely that Heisenberg’s discovery would have blossomed into the foundations of what is now known as the Copenhagen interpretation of quantum mechanics.[42] Truly, successful communication between scientists can change both the world and our understanding of it. The World That Could Have Been Let us now turn our attention back to Bolyai, whom we left despondent in 1832 after Gauss’s seeming dismissal of his greatest discovery. Where Heisenberg conversed with Pauli and Bohr directly, Bolyai never interacted with Gauss or Lobachevsky himself. Gauss would come to know of János Bolyai’s work only because of Bolyai senior’s persistence in sending copies of Tentamen and its appendix over several years.[43] Bolyai senior is also to thank for relaying Gauss’s response to János. In assessing the failures of this particular epistolary exchange, it is critical to note that while János Bolyai most certainly read Gauss’s response, he was not its intended recipient. Neither was the letter written in his native language of Hungarian. Knowing what we know of young and prideful egos, and acknowledging the subtleties of translating Gauss's letter from German to Hungarian, it becomes impossible to dismiss the notion that Bolyai may have simply misinterpreted Gauss’s response. True, Gauss stated that he could not praise Bolyai’s work. However, Gauss also exclaimed that “In truth I am astonished.” Gauss also drew direct parallels between his own work and that of the younger Bolyai – an act that could just as well be flattery as dismissal. A letter Gauss wrote to his friend Christian Ludwig Gerling on 14 February 1832 gives further credence to this alternative interpretation of Gauss’s initial response. 
In this letter, Gauss explains that “I regard this young geometer Bolyai as a genius of the first order”.[44] One can only imagine the scientific progress that could have taken place had János Bolyai been privy to the contents of Gauss’ letter to Gerling. Perhaps the world would still be reaping the fruits of an intellectual partnership between Gauss and Bolyai that would have transformed an unpublished corpus into staples of science. Perhaps we would be recounting the collaboration between Gauss, Bolyai and Lobachevsky instead of Heisenberg, Bohr and Pauli as the prime example of success in communications between scientists. But alas, that world would never come to pass, and the world that could have been will forever remain a letter’s width away from our reach. T. Alexander Puutio is currently undertaking an MSt in History at the University of Cambridge (Wolfson College). Notes: [1] János Bolyai, ‘Temesvár Letter from János to Farkas Bolyai’, Translated by Péter Körtesi, November 3, 1823. School of Mathematics and Statistics, University of St. Andrews, Scotland, MacTutor. https://mathshistory.st-andrews.ac.uk/Extras/Bolyai_letter/. [2] G. Çağırgan, ‘Three More Duplicates to Astrolabe B’, Belleten, Vol. 48, No. 191-192 (1984), pp. 399-416. [3] George Bruce Halsted, ‘Gauss and the Non-Euclidean Geometry’, The American Mathematical Monthly, Vol. 7, No. 11 (1900), p. 247. [4] J.J. O’Connor and E.F. Robertson, ‘Nikolai Ivanovich Lobachevsky’, MacTutor History of Mathematics Archive, https://mathshistory.st-andrews.ac.uk/Biographies/Lobachevsky/. [5] George Bruce Halsted, ‘Biography: John Bolyai’, The American Mathematical Monthly, Vol. 5, No. 2, 1898, pp. 35–38. [6] Valentin A. Bazhanov, ‘Nikolay Ivanovich Lobachevsky’, Encyclopedia Britannica (2023), https://www.britannica.com/biography/Nikolay-Ivanovich-Lobachevsky. [7] W.K. Bühler, ‘Gauss: A Biographical Study’, Springer-Verlag (1981). [8] ‘Bolyai János’, Slovak University of Technology in Bratislava (n.d.). 
[9] Halsted, ‘Biography: John Bolyai’. [10] Ibid. [11] Ibid. [12] János Bolyai, ‘Appendix Explaining the Absolutely True Science of Space’, in Farkas Bolyai (ed.), Tentamen (Transylvania, 1832). [13] See e.g. Róbert Oláh-Gál and Alexandru Horvath, ‘Deep Geometrical Thoughts from Some – Until Now Not Published – Manuscripts of János Bolyai’, Proceedings of the 3rd Conference on the History of Mathematics and Teaching of Mathematics, University of Miskolc (2004), pp. 65-75; and Elemér Kiss, Mathematical Gems from the Bolyai Chests: János Bolyai’s Discoveries in Number Theory and Algebra as Recently Deciphered from His Manuscripts (Budapest: Akadémiai Kiadó, 1999). [14] Elemér Kiss, ‘Fermat's Theorem in János Bolyai's Manuscripts’, Mathematica Pannonica, Vol. 6, No. 2 (1995), pp. 237-242. [15] Morris Kline, Mathematical Thought from Ancient to Modern Times (Oxford: Oxford University Press, 1972). [16] Paul Stäckel and Friedrich Engel, Die Theorie der Parallellinien von Euklid bis auf Gauss; eine Urkundensammlung zur Vorgeschichte der nichteuklidischen Geometrie (Leipzig: B.G. Teubner, 1895). [17] ‘Bolyai János’, Slovak University of Technology in Bratislava. [18] Kiss, Mathematical Gems. [19] Euclid, ‘Elements’, Translated by Hypsicles of Alexandria (Venice: Erhard Ratdolt, 1482). [20] D.M.Y. Sommerville, The Elements of Non-Euclidean Geometry (2005). [21] R. Ravindran, ‘Euclid’s Fifth Postulate’, Resonance, Vol. 12 (2007), pp. 26-36. [22] A. Prékopa, ‘The Revolution of János Bolyai’, in A. Prékopa and E. Molnár (eds.), Non-Euclidean Geometries: János Bolyai Memorial Volume: 581 (New York: Springer, 2006). [23] Jeremy Gray, Worlds Out of Nothing: A Course in the History of Geometry in the 19th Century (New York: Springer, 2010). [24] János Bolyai, ‘Appendix Explaining the Absolutely True Science of Space’, in Bolyai, Tentamen. [25] Bolyai, ‘Temesvár Letter’. [26] J.J. O’Connor and E.F. 
Robertson, ‘Farkas Bolyai’, MacTutor History of Mathematics Archive, https://mathshistory.st-andrews.ac.uk/Biographies/Bolyai_Farkas/. [27] Halsted, ‘Gauss and the Non-Euclidean Geometry’, p. 247. [28] Oláh-Gál and Horvath, ‘Deep Geometrical Thoughts’. [29] National Academies of Sciences, Engineering, and Medicine, Communicating Science Effectively: A Research Agenda, Division of Behavioral and Social Sciences and Education, Committee on the Science of Science Communication, ‘Building the Knowledge Base for Effective Science Communication’, National Academies Press (2017). [30] Alan G. Gross, Joseph E. Harmon, and Michael Reidy, Communicating Science: The Scientific Article from the 17th Century to the Present (Oxford: Oxford University Press, 2002). [31] H.W. Turnbull (ed.), The Correspondence of Isaac Newton: 1661-1675 (Cambridge: Cambridge University Press, 1959). [32] See: Wolfgang Pauli, Wissenschaftlicher Briefwechsel mit Bohr, Einstein, Heisenberg u.a., Band I-IV, 1919-1954 (Berlin: Springer, 1979-1999). [33] David Cassidy, Uncertainty: The Life and Science of Werner Heisenberg (New York: Palgrave, 1992). [34] Jan Hilgevoord and Jos Uffink, ‘The Uncertainty Principle’, The Stanford Encyclopedia of Philosophy (2023), https://plato.stanford.edu/entries/qt-uncertainty/. [35] ‘February 1927: Heisenberg's Uncertainty Principle’, APS News, Vol. 17, No. 2 (2008), https://www.aps.org/publications/apsnews/200802/physicshistory.cfm. [36] Thayer Watkins, ‘The Drama in the Development of Quantum Mechanics in 1926-27’, San José State University, https://www.sjsu.edu/faculty/watkins/quantumdrama.htm. [37] W. Heisenberg, ‘Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik’, Zeitschrift für Physik, Vol. 43 (1927), pp. 172-198. 
[38] Arne Schirrmacher, ‘Bohr's Atomic Model’, in Daniel Greenberger, Klaus Hentschel, and Friedel Weinert (eds.), Compendium of Quantum Physics (New York: Springer, 2009). [39] Cecilia Jarlskog, ‘Correspondence With Heisenberg Through Pauli’, in Cecilia Jarlskog (ed.), Portrait of Gunnar Källén: A Physics Shooting Star and Poet of Early Quantum Field Theory (New York: Springer, 2014). [40] Jagdish Mehra, ‘Niels Bohr's Discussions with Albert Einstein, Werner Heisenberg, and Erwin Schrödinger: The Origins of the Principles of Uncertainty and Complementarity’, Foundations of Physics, Vol. 17 (1987), pp. 461-506. [41] Cassidy, Uncertainty. [42] Carlo Rovelli, Helgoland: Making Sense of the Quantum Revolution (London: Penguin, 2021). [43] ‘Life of János Bolyai’, Hungarian Academy of Sciences (n.d.). [44] J.J. O’Connor and E.F. Robertson, ‘János Bolyai’, MacTutor History of Mathematics Archive, https://mathshistory.st-andrews.ac.uk/Biographies/Bolyai/.
- Cultivating Insanity: The Role of Culture in Understanding Mental Illness in Nigeria, Kenya and China during the 19th and 20th Centuries
To what extent are mental illnesses the product of human culture? Western practices have ensured that psychiatrists are now able to recognise disorders consistently worldwide. Nevertheless, this "culture-blind" approach, which denies the importance of ethnicity and culture in psychiatry, often in favour of biological uniformity, is problematic. There is no doubt that culture dictates behavioural normality and deviance. It influences the self-perception of both the doctor and the patient, shaping their understanding of shared issues and defining how these issues are communicated. Dr Mario Hernandez and others have argued that culture directly influences what is defined as a problem and how it is understood. [1] Similarly, Michel Foucault critiqued psychiatry by portraying it as a form of social control. In his eyes, mental illnesses were constructed by society as labels that enabled the 'norm' to marginalise the 'deviant.' [2] This essay examines two case studies of how culture shapes and influences perspectives on mental disorders, as well as one criticism of this approach. Firstly, the essay will investigate how notions of Western superiority shaped inaccurate diagnoses of depression in British colonial Nigeria and Kenya. Additionally, attention will be directed towards unravelling the roots of Chinese madness, intricately tied to China's belief systems and social structure, as elucidated by Western psychiatry during the nineteenth and twentieth centuries. In essence, this essay explores the impact of culture on mental illnesses, highlighting its pivotal role in their production and conceptualization. In discussing cultural influences on mental illness, it is crucial to recognize the broad scope of the term 'culture,' encompassing beliefs, norms, and values that give rise to shared meanings among diverse groups. [3] Their perceptions of reality and behaviours are profoundly shaped by these attributes. 
Furthermore, it is important to note that the cases examined in this essay and the conclusions drawn from them do not universally represent all of humanity. Variations may arise based on the specific location and period under consideration. The emergence of psychiatry as a medical speciality in nineteenth-century Europe played a pivotal role in shaping its perspectives and establishing recognized norms. Psychiatry developed within a philosophical context that underscored the separation of mind and body, known as substance dualism, first introduced in the seventeenth century by Rene Descartes, and the importance of the scientific method. [4] While most cultures blended the “supernatural” and the “natural,” the Western approach diminished the validity of spirituality in medicine. Concurrently, racist ideologies became intertwined with psychiatry. Charles Darwin's theory of the continuous evolution of species, outlined in his 1859 work On the Origin of Species, and his concept of natural selection, fostered the belief in the supposed superiority of white races, particularly those of European descent.[5] Darwinism lent racial hierarchies a veneer of scientific justification, in keeping with Western society's preferences at the time. [6] The expansion of European powers, asserting colonial dominance over Asia and Africa, further solidified and reinforced these ideologies. Thus, through the Darwinian idea of evolution, the myth of white superiority became synonymous with the perceived superiority of the Western psyche, that is, the Western "mind". At the start of the twentieth century, Sigmund Freud introduced a new perspective on the human mind, conceptualising it as composed of three components within the human psyche. Freud distinguished the id, the ego, and the superego. 
The id oversaw basic and immediate desires; the ego was more strategic, securing satisfaction long-term; and lastly, the superego, the only one learned from society, was a repressive part that strived for perfection, internalising feelings and desires, becoming the source of shame or guilt. [7] Though these theories were not originally intended as a foundation for racial discourse, it is pertinent to revisit them for this essay. Depression in 19th/20th-century colonial Africa In 1835, James Prichard asserted that insanity in "savage" states, such as African and Native American tribes, was extremely rare, if not unknown. [8] In his view, the phenomenon of mental illness originated with civilization, a quality that, in his belief, these tribes did not possess. [9] Similarly, John C. Carothers, in his Study of Mental Derangements in Africans […] (1947), offered cultural explanations for the apparent absence of depression in Africa. In Kenya, Carothers observed patients at the Mathari Mental Hospital and attributed the apparent rarity of depression to a relative lack of socio-economic pressures experienced by Kenyans and Africans compared to Europeans. [10] According to Carothers, “African culture” did not necessitate self-reliance, personal responsibility, or initiative. [11] Africans were said to be predisposed to attribute blame to external factors and place responsibility externally rather than internally. [12] Carothers' assertions influenced his assessment of depression, which he considered non-existent or "genetically absent." Many patients, despite exhibiting symptoms similar to typical depression, were diagnosed with manic-depressive psychosis instead. [13] Carothers dismissed the concept of unnoticed depression and instead attributed the “unusual” behaviours to a lack of personal responsibility, and with it an absence of guilt, regret, and foresight. 
[14] In alignment with Freudian id, ego, and superego theory, the assertion arose that if Africans lacked experiences of guilt or shame and did not internalize their emotions, their superego, responsible for regulating emotions, was purportedly underdeveloped. Consequently, this reasoning suggested an inferiority in the psyches of Africans compared to Westerners. Concurrently, depression came to be perceived as a "civilized" illness, affecting only populations adhering to European principles. Carothers' "absence" of depression in African communities has been discredited by the argument that he simply did not encounter depressed Africans. It is crucial to underscore that his studies were conducted on individuals who were admitted to Mathari Mental Hospital and certified as insane. Gaining admission to this institution was not easily attainable for all people exhibiting mental health issues. For a patient to be classified as "insane," the procedure involved a 14-day detainment period for comprehensive observation under the purview of a magistrate. After this period, medical officers determined the individual's mental state, discerning whether it fell within the spectrum of sanity or insanity. [15] Those who were placed in hospitals and asylums, facilities intended for the confinement of individuals considered dangerous or violent, were the ones who demonstrated overtly aggressive and disruptive behavioural patterns. Conversely, individuals experiencing depression would likely have undergone traditional healing practices within their homes and communities, a dimension overlooked by colonial psychiatry. [16] Moreover, the Western psychiatry of the twentieth century necessitated the internalization of aggression and self-deprecation for the diagnosis of depression. [17] By denying that Africans experienced those feelings in the first place, colonial psychiatrists found justification to question the existence of depression in Africa. 
Contrary to colonial perceptions, depression was significantly more prevalent than reported. The twentieth-century Nigerian psychiatrist Thomas Adeoye Lambo attributed these misleading reports to widespread misclassifications and a failure to understand typical symptoms in African patients. This was primarily due to disparities between the reality of the situation and the criteria sought and accepted by colonial psychiatrists. In Lambo's research, in numerous instances, Nigerians outwardly appeared more agitated and anxious, and their external demeanour did not align with their true emotional states, which they chose not to disclose. [18] Therefore, Carothers' error lay in approaching Nigeria solely from a Western perspective. These conclusions align with the findings reported by researchers from the Cornell-Aro Mental Health Research Project in 1963. While investigating psychiatric disorders and their socio-cultural context among the Yoruba people of West Africa, they observed that many symptoms of depression, such as fatigue or sadness, were indeed present but largely went unreported. [19] They noted that depression, as commonly understood by Western societies, was an unfamiliar concept to the Yoruba people, and linguistic challenges arose when attempting to articulate the disorder in native terms. This underscores the importance of considering cultural context in discussions about mental illness. Various cultures may manifest unfamiliar and previously unrecognized symptoms and encounter challenges in articulating them due to linguistic and cultural disparities with foreign medical terminology. The diagnosis of depression in Nigeria and Kenya during the nineteenth and twentieth centuries serves as a compelling example of how adopting a cultural perspective can result in misconceptions about mental disorders when applied within a different tradition. 
This case study illustrates how colonial psychiatrists applied racist doctrines to their examinations of patients and how their perspectives on what constituted normality and abnormality, as well as superiority and primitivity, clashed with the indigenous worldview. This analysis shows that mental health and culture are intricately connected, with the latter shaping the former. A case of madness in 20th-Century China To understand mental health perspectives in China, particularly before and after the 1930s, it is crucial to highlight key aspects of Confucianism, a deeply influential religious philosophy in Chinese history. This philosophy, emphasizing balance, plays a significant role in shaping approaches to mental illness, incorporating both professional and popular perspectives. The Chinese believed that preserving a healthy balance between yin and yang, positive and negative forces, was essential for maintaining a healthy body. [20] Thus, mental disorders stemmed from disruptions in this delicate harmony rather than any defects in the brain. The brain seldom played a significant role, as thought was perceived to originate in the heart. [21] Instead, various other causes were proposed, starting with a weakening of qi (the vital energy), an excess of emotion, or even demonic possession. Importantly, as Charlotte Ikels noted, Confucianism underscores the profound internalization of emotions. Followers believed that suppressing problems to the point of ignoring them was required to foster self-control and self-resolution. [22] Consequently, individuals deemed 'morally disturbed' often faced social rejection, leading to confinement within their homes, away from public view. [23] The deep-seated shame and fear of exclusion had a profound impact, particularly on East Asian cultures and their family dynamics. 
Traditional beliefs held that mental illness was a punishment for ancestral sins, and the resulting shame extended to the entire family, not just the individual who suffered. [24] Even in modern times, Asian American patients exhibit selective reporting of symptoms, showing a preference for physical symptoms over emotional ones and expressing themselves in culturally acceptable ways. [25] Until the second half of the nineteenth century, legal responsibility for the care of the mentally ill primarily rested within the family and kin. [26] Madness posed a potential threat to public safety, and it became the family's responsibility to manage the mentally ill as a private liability. Domestic confinement was a commonly employed practice. In 1853, Dr John Kerr, an American medical missionary representing the American Presbyterian Board of Foreign Missions, arrived in Canton. Having received his medical education in America, he held perspectives that diverged from Chinese customs. Unlike the prevalent notion of families being competent to take care of the "insane," he viewed them as potential sources of insanity. Dr Kerr aimed to establish a "proper" psychiatric practice. In 1891, he inaugurated the first Chinese asylum, named the John G. Kerr Refuge for the Insane. This marked the introduction of a significantly unfamiliar medical and cultural practice. [27] Kerr also highlighted the correlation between mental illnesses and Chinese cultural stresses, focusing particularly on women within the Chinese marriage and concubinage systems. In "A Daughter of Han [...]," set in 1870s P’engali, the narrator recounts a poignant tale in which her sister was routinely mistreated by her husband and his family. As per Chinese tradition, married women frequently stood in gateways during the evenings to observe the streets. Nevertheless, on one fateful day, distressed following a quarrel, the sister is said to have left her residence and started wandering through the city. 
The narrator notes that many people gathered to watch the sister rather than help her. The sister's behaviour deviated from the established traditions of her society, eventually earning her the moniker of ‘the crazy woman.’[28] Her actions, openly contradicting societal expectations, led to her being perceived as disruptive and labelled insane by the public, not because she visibly suffered from any mental illness. Conversely, no connection was drawn between the sister's mistreatment, or the potential stress of conforming to traditional expectations, and her mental state. To observers, the "outlandish" behaviour was synonymous with descending into madness. This underscores why psychiatrists like John Kerr, beyond advocating for changes in practices, recognized the imperative need for social reform that would contribute to the enhancement of society's mental health. While Chinese society perceived deviation from the norm as madness, Western psychiatrists attributed that madness to the traditional Chinese family model and patriarchal oppression. Nevertheless, it cannot be denied that Chinese culture, its philosophies, and beliefs in the late nineteenth and early twentieth centuries not only influenced what was perceived as a mental illness but also played a role in its conceptualization and creation. The modern universalist approach to mental illness The theory that cultures determine mental illnesses has faced scrutiny. Following Nigeria's independence in the 1960s, Nigerian psychiatrists advocated for a universalist approach to mental illnesses. By demonstrating that Nigerians experience depression on a scale similar to Europeans, Thomas A. Lambo argued that these similarities transcend race and culture. He asserted that all humans are psychologically equal worldwide. [29] It is evident that cultural contexts still exerted a strong influence on those claims. 
Such conclusions held huge significance for the global decolonization movement and Nigeria's integration into the "modern" medical scene. It was this principle of equality that resulted in the de-pathologization of individuals previously deemed inferior based on their ethnicity. [30] Universality was as much a political statement as a scientific one. In the present era, universalists stress the role of biology in unifying mental illnesses globally. Eric R. Kandel suggests that while differences in manifestations may arise cross-culturally, the underlying biological issue of mental illness remains consistent. [31] Mark Winston and Vikram Patel explained that, from a medical standpoint, mental illnesses, akin to infectious diseases, are part of a "universal human experience." If corresponding manifestations are found across cultures, then the reason must lie in genetic factors. [32] Nevertheless, this argument is not entirely persuasive and rests on conclusions that have not yet been universally agreed upon. An overemphasis on biology neglects environmental and cultural factors and implies that the human mind is purely physical. Accepting this approach would necessitate reducing other mental experiences, such as memories and wishes, to nothing more than physical processes in the brain.

Conclusion

The "culture-blind" belief that mental illnesses affect everyone identically, regardless of ethnicity or background, does not hold. Preconceptions about mental disorders did not arise in isolation; thus, psychiatry needs to explore the impact of cultural influences on mental health. Society and politics consistently shape the individual by defining the concept of "normality" and identifying deviations from it. Examples from Nigeria, Kenya, and China in the nineteenth and twentieth centuries align with Foucauldian theories, viewing madness as a product of societal constructs.
Western psychiatry became inextricably influenced by the racist doctrines prevalent at the time of its emergence, incorporating them into the treatment of minds in colonial contexts. The traditional Chinese expectations of behaviour, coupled with Confucian suppression, had an impact on the mental well-being of members of Chinese society. Recognizing the intricate tapestry of mental health requires acknowledging the nuanced interplay between biological and socio-cultural dimensions. Each thread contributes to the rich fabric of human experience, and oversimplifying by neglecting either aspect undermines our ability to embrace the complexity of mental well-being. As we unravel these intricacies, we pave the way for a more holistic and compassionate approach to mental health. Sandra Liwanowska is currently undertaking an MPhil in the History and Philosophy of Science and Medicine at the University of Cambridge. Notes: [1] Mario Hernandez and others, ‘Cultural Competence: A Literature Review and Conceptual Model for Mental Health Services’, Psychiatric Services (Washington, D.C.), Vol. 60, No. 8 (2009), p. 1047. [2] Michel Foucault, Madness and Civilization: A History of Insanity in the Age of Reason (London; Sydney: Tavistock Publications, 1967), pp. 44-78, particularly: pp. 48-49, p. 55, p. 73. [3] 'Culture', in Oxford Dictionary of English, 2nd ed. (Oxford: Oxford University Press, 2005). [4] Rene Descartes and Michael Moriarty (trans.), Meditations on First Philosophy with Selections from the Objections and Replies (Oxford: Oxford University Press, 2008), pp. 1-62. [5] Charles Darwin, On the Origin of Species By Means of Natural Selection Or, the Preservation of Favoured Races in the Struggle for Life (Project Gutenberg, 1998). [6] Steven Rose, ‘Darwin, Race and Gender’, EMBO Reports, Vol. 10, No. 4 (2009), p. 297. [7] Sigmund Freud, ‘The Ego and the Id’, ‘The Ego and the Super-Ego (Ego Ideal)’, in James Strachey (ed.) and Joan Riviere (trans.), The Ego and the Id.
The Standard Edition of the Complete Psychological Works of Sigmund Freud, Vol. 24 (W. W. Norton & Company: London, 1960), pp. 11-21, 22-36. [8] The term “African” here and forward categorises a population based on common continental origin. It is a broad term of little analytical value, as Africa is a continent populated by an enormous number of diverse cultures. However, it will be employed to illustrate the overgeneralization of colonial beliefs. [9] “Among nations existing in a savage state, in which the human mind is uncultivated, and its higher faculties remain undeveloped, it appears that mental diseases are comparatively rare phenomena.” - John C. Prichard, A treatise on insanity and other disorders affecting the mind (1835), p. 198. [10] John C. Carothers, ‘A Study of Mental Derangement in Africans, and an Attempt to Explain Its Peculiarities, More Especially in Relation to the African Attitude to Life’, Journal of Mental Science, Vol. 93, No. 392 (1947), p. 587. [11] Ibid., p. 592. [12] Ibid., p. 581. [13] Ibid., pp. 575, 590-591. [14] Ibid., pp. 556, 570. [15] Ibid., p. 555. [16] Matthew M. Heaton, Black Skin, White Coats: Nigerian Psychiatrists, Decolonization, and the Globalization of Psychiatry (Athens, OH: Ohio University Press, 2013), p. 102. [17] Ibid., p. 102. [18] Adeoye T. Lambo, ‘Further Neuropsychiatric Observations in Nigeria’, British Medical Journal, Vol. 2, No. 5214 (1960), pp. 1698–1699. [19] The Yoruba people are a sub-Saharan ethnic group prevalent in West Africa, including Nigeria; Alexander H. Leighton, T. Adeoye Lambo, Charles C. Hughes, Dorothea C. Leighton, Jane M. Murphy and David B. Macklin, Psychiatric Disorder among the Yoruba: A Report from the Cornell-Aro Mental Health Research Project in the Western Region, Nigeria (Ithaca, NY: Cornell University Press, 1963), p. 112.
[20] Emily Baum, ‘Choosing Cures for Mental Ills: Psychiatry and Chinese Medicine in Early Twentieth-Century China’, The Asian Review of World Histories, Vol. 6, No. 1 (2018), p. 16. [21] Ning Yu, The Chinese HEART in a Cognitive Perspective (Berlin/Boston: Mouton de Gruyter, 2009), pp. 1-3. [22] Charlotte Ikels, ‘The Experience of Dementia in China’, Culture, Medicine and Psychiatry, Vol. 22, No. 3 (1998), p. 275. [23] Lawrence H. Yang, ‘Application of mental illness stigma theory to Chinese societies: Synthesis and new directions’, Singapore Medical Journal, Vol. 48, No. 11 (2007), p. 980. [24] Veronica Pearson, ‘Families in China: An Undervalued Resource for Mental Health?’, Journal of Family Therapy, Vol. 15, No. 2 (1993), p. 166. [25] Keh-Ming Lin and Freda Cheung, ‘Mental Health Issues for Asian Americans’, Psychiatric Services (Washington, D.C.), Vol. 50, No. 6 (1999), pp. 774–80. [26] Zhiying Ma, ‘An Iron Cage of Civilization? Missionary Psychiatry, The Chinese Family and A Colonial Dialectic of Enlightenment’, in Howard Chiang (ed.), Psychiatry and Chinese History (London; New York: Routledge, 2014), p. 99. [27] Howard Chiang, ‘Introduction: Historicizing Chinese Psychiatry’, in Chiang, Psychiatry, p. 6. [28] Ida I. Pruitt and Ning Lao T'ai-T'ai, ‘Book One: On the Family’, in A Daughter of Han: The Autobiography of a Chinese Working Woman (Potomac, Maryland: Pickle Partners Publishing, 2015), pp. 31-32. [29] Heaton, ‘Introduction’, pp. 13, 20. [30] Ibid., p. 21. [31] Glorisa Canino and Margarita Alegría, ‘Psychiatric Diagnosis - Is It Universal or Relative to Culture?’, Journal of Child Psychology and Psychiatry, Vol. 49, No. 3 (2008), p. 238. [32] Vikram Patel and Mark Winston, ‘“Universality of Mental Illness” Revisited: Assumptions, Artefacts and New Directions’, British Journal of Psychiatry, Vol. 165, No. 4 (1994), p. 437.
- A Rare Case: Black Women and Hysteria
This essay argues that mainstream psychiatrists and physicians between 1860 and 1900 conceptualised Black women (both those inside and outside psychiatric asylums) as too culturally and physiologically deficient to become hysterical.[1] It examines the racialisation of hysteria, and interrogates how psychiatric theory shaped unequal and segregated care in psychiatric institutions. Hysteria was a term used predominantly by psychiatrists (but also gynecologists, general physicians, and journalists) to describe a form of psychoneurosis defined by symptoms of emotional excitability, irrationality and an “excessive display” of emotions.[2] Over the course of the nineteenth century the usage of hysteria sharply increased, coinciding with the medicalisation and institutionalisation of nervous and hysterical disorders, and the crisis of middle- and upper-class women’s role in the home.[3] As a diagnostic category hysteria did not have strict regulations and contours; George Beard wrote a seventy-five-page entry listing symptoms and displays of hysteria, only to describe it as incomplete.[4] Despite the expansiveness of its associated symptoms, hysteria’s exclusivity resided in the race and gender of the bodies who were seen as “capable” of experiencing the neurosis.[5] Indeed, physicians in psychiatric institutions rarely diagnosed Black women with hysteria. Diana Louis’s work on the Georgia Lunatic Asylum observes that its most frequent diagnoses of Black women were “lunatic,” “idiot” or “epileptic,” not hysterical.[6] Significantly, the few ledger entries that did record Black women as hysterical were always accompanied with the modifier ‘violent.’[7] Whilst psychiatrists increasingly broadened the concept of insanity in the 1860s, proposing new definitions that included diseases of the emotions,[8] these developments exclusively served White patients.
Indeed, the psychiatric language of diagnosis around Black patients’ neuroses became a reiteration of the same causal assessment: that freedom was making Black people insane.[9] This narrative had a double function. Firstly, it justified the increasing psychiatric incarceration of Black patients as a paternalistic instinct.[10] Secondly, it provided a justification of segregation, the inference being that different mental diseases required varying psychiatric solutions. Superintendents in southern asylums profited off the racialisation of hysteria, presenting unequal distributions of labour and care as a matter of medical strategy.[11] The psychiatric narrative that associated Whiteness with hysteria converged with the claim that African Americans had become increasingly susceptible to insanity after slavery.[12] Psychiatrists oriented their discussion of African American insanity around its etiology, rather than the particularities of its symptoms. Indeed, Kristi Simon understands this as a residual after-effect of the antebellum period where ‘physicians rarely diagnosed slaves with a specific mental disease.’[13] Samuel Cartwright’s 1851 diagnosis of African Americans with two new mental disorders, “Drapetomania”[14] and “Dysaethesia Ethiopica,” remained popular until the turn of the century.[15] Drapetomania was understood as a “disease of the mind”[16] that encouraged enslaved people to run away. Cartwright described “Dysaethesia Ethiopica” as harder to cure, attributing the disease to free Black people who, he asserted, were unable to function without enslavers.[17] Arguing that the disease was caused by the absence of White enslavers, he suggested a return to hard labour as a remedy.
Whilst hysteria was presented as a complex, multi-causal neurosis, insanity amongst African Americans was understood as embedded in the subject of their freedom.[18] Current historiographical framing has not systematically accounted for, or attempted to understand, the moments when Black women were diagnosed as hysterical. For example, Laura Briggs’s essay ‘The Race of Hysteria’ (2000) predominantly focuses on the way in which the concept of overcivilisation constructed Black women as ontologically separate from hysteria.[19] Whilst Briggs’s account persuasively connects the physiological and psychological understanding of Black women as more robust than White women, she does not attend to the ways in which Black women were diagnosed in psychiatric settings.[20] This essay attempts to extend Briggs’s argument by considering how psychiatric theory shaped the implementation of psychiatric care. This methodological decision attempts to reconcile two things: that psychiatric theory justified and influenced the racialisation of care, and that the archives become more fragmented as we attempt to focus on, and understand, the implementation of care. Indeed, African Americans’ presence in psychiatric institutions and private practices is difficult to trace. Peter McCandless argues that in the case of the South Carolina Lunatic Asylum ‘as the population became more numerous, poorer, and blacker […] case records generally became far more perfunctory.’[21] The archive provides us with glimpses into how racialised psychiatric diagnoses mapped onto differences in psychiatric care, and consequently African Americans’ experience of their own neuroses.[22]

Whiteness as a prerequisite for Hysteria

This section argues that psychiatric disciplines reproduced, and borrowed from, anthropological narratives of overcivilisation in order to conflate hysteria and Whiteness.
In Manliness and Civilization (1995) Gail Bederman offers a useful description of the racial claims that were at stake in the nineteenth-century concept of civilisation. She succinctly argues that: Civilization denoted a precise stage in human racial evolution—the one following the more primitive stages of “savagery” and “barbarism.” Human races were assumed to evolve from simple savagery, through violent barbarism, to advanced and valuable civilization. But only white races had, as yet evolved to the civilized stage. In fact, people sometimes spoke of civilization as if it were itself a racial trait.[23] Although civilisation was an exclusive social category, it also presented a crisis. Physicians frequently cited the conditions of civilisation as making civilised women susceptible to hysterical outbreaks.[24] Indeed, the gynecologist Henry W. Streeter neatly distilled this belief when he wrote ‘from the cradle to the grave, every habit of the civilised woman as a class tends to debility.’[25] By contrast, these physicians typically represented Black women as physically strong, aggressive, and too under-civilised to experience hysteria.[26] Relevant here is Hortense Spillers’s seminal essay ‘Mama's Baby, Papa's Maybe: An American Grammar Book’ (1987), which argues that under transatlantic chattel slavery Black women were ‘Essentially ejected from “The Female Body in Western Culture”.’[27] This ejection persisted in the concept of overcivilisation and the highly gendered and racialised psychiatric category of hysteria. Interestingly, the line between culture and female anatomy was not a particularly neat one, with physicians arguing that evidence of the former could be observed in the latter. For example, physician Robert T. Morris’ ‘Is Evolution Trying to do away with the Clitoris?’ (1892) insisted that the trajectories of biological evolution were compatible with the discourse on cultural evolution.
He argued that ‘This condition [the apparent loss of the clitoris] very evidently represents a degenerative process that goes with higher civilisation.’[28] His work was in dialogue with narratives of biological racial difference supplemented by anatomical measurements.[29] In explaining the loss of White women’s sexual impulses as a product of overcivilisation and as a symptom of their evolving genitalia, he mapped the coordinates of cultural evolution onto the anatomical logic of White women’s bodies.[30] Morris then argued that the “savage” women’s capacity for physical labour differentiated them, both physically and psychologically, from their White counterparts. Introducing the example of a typical native Irish woman’s workday, he argued the work would have ‘sent a fragile girl into a madhouse.’[31] Through rendering the category of un-civilised women as better able to withstand physical and mental pressure, he presented them as the ideal labouring class. Morris’ concerns about biological evolution and overcivilisation were thoroughly embedded in the major psychiatric debates of the last quarter of the nineteenth century. Theophilus Powell, who became superintendent of the Georgia Lunatic Asylum in 1879, published prolifically, repeatedly framing African American emancipation as a potential health epidemic.[32] In ‘The Increase of Insanity and Tuberculosis in the Southern Negro since 1860, and its alliance, and some supposed causes’ (1896), Powell claimed that in 1860 there were only forty-four insane African Americans in the entirety of Georgia.[33] Using anecdotal evidence of conversations with doctors, Powell attempted to create the impression that his opinion was part of a broader consensus.
Additionally, he drew on ethnology to argue that although rates of insanity were increasing amongst Black people, their place in the human racial evolutionary category prevented them from more complex conditions of ‘brain tension or mental anxiety.’[34] Segregation was framed by Powell as a matter of both mental hygiene and White paternalism. Indeed, Powell’s conclusion asserted that “civilisation” and its associated illnesses were dangerous to African Americans. Archival evidence from medical journals and newspapers in the period demonstrates that there was more debate about civilisation and hysteria than Morris and Powell were willing to engage with or accommodate. In 1893, two almost identical critiques of the connection between nervous disease and civilisation appeared in The Daily Picayune (January) and The Phrenological Journal and Science of Health (April). They are so similar that we can assume they were written by the same author. Both argue that there was actually no evidence that “savages” were not hysterical, and indeed that there were ‘reliable travellers who say that violent and even epidemic nervous disorders are very common among uncivilised people.’[35] These physicians who included Black people in the category of hysteria were not performing an ideologically liberal move to affirm the complexity of Black patients’ interior lives. Rather, they were attempting to preserve the structural and moral integrity of civilisation through citing “savage” displays of hysteria. Even so, psychiatrists appear to have required much more evidence, and more extreme cases, to justify diagnosing a Black woman with hysteria.
Crucially, when psychiatrists diagnosed Black patients with hysteria, they almost always framed their diagnoses in comparative terms and described them as more violent, religious, ritualistic, and less feminine.[36] In 1883, an article on ‘Negro Shouting’ in The Detroit Free Press described the scene of a Black woman’s ‘genuine hysteria.’[37] The extremity of the scene here confirms the authenticity of her hysteria. The journalist wrote that ‘by a series of convulsions, leaps, [and] raising [herself] high upward and [then] pulling herself down with a movement so swift she actually seemed a shadow in the air.’[38] They believed that her ‘awful energy’[39] showed greater physical strength than any man, and was generated by a ‘real morbid ecstasy.’[40] The frailty and femininity associated with White women’s symptoms of hysteria were replaced with physical strength akin to masculinity, and with religious perversion. Even when “inside” the category of hysteria, Black women were outside the category of femininity and civilisation.[41] As will be argued in the second part of this essay, this contributed to the arguments for, and distribution of, a racialised division of care within psychiatric institutions.

Segregating Diagnoses and Care

This section argues that psychiatrists’ attitudes towards both the definition and treatment of hysteria were always mediated by race.
Indeed, the administration of treatment and moral therapy in Southern asylums and private practices was structured around race as much as, and sometimes more than, actual psychological diagnoses.[42] This section investigates the resultant dialectic of segregation—that improving the quality of care for White patients was often contingent on worsening conditions for Black patients.[43] In the case of the South Carolina Lunatic Asylum, it is highly evident that the standard of care Black women could expect to receive—and the length of time they would be psychiatrically incarcerated for—was vastly different from that of their White counterparts. Dr. James Babcock, the superintendent from 1891 to 1914, retrospectively declared that: I honestly admit that I have paid more attention to the white women here than to any other department, but at the same time I do not mean to apologize for it . . . I think they were entitled to the best we had.[44] The belief that mentally ill White women were inherently worthy of attention, care, and resources shaped the distribution of resources in southern asylums.[45] Babcock’s sentiment here hinges on the word ‘entitled’: Whiteness itself ensured an approved standard of care and empathy. Psychiatrists, both inside and outside asylums, were consistently resistant to diagnosing Black women with hysteria, even when symptoms presented a persuasive case for it. In his clinical lecture ‘The True and False Palsies of Hysteria’ (1880), S. Weir Mitchell wrestled with an irrefutable and debilitating case of hysteria which troubled the neurosis’s prerequisite of Whiteness. Mitchell’s lecture detailed three cases of hysteria in his private practice: Mrs B (a twenty-year-old “dark skinned rosy looking girl without the least turn to tears or undue emotions”[46]), and Mrs L and Mrs C (who were both White). Mitchell presented Mrs B’s severe physical symptoms as disproportionate to her emotional regulation.
Mitchell described Mrs B as being physically disabled by her hysteria: she was both unable to walk (Mitchell had to teach her to “creep”) and mute for twelve months. Yet Mitchell presented the psychological manifestations of her hysteria as manageable, making her case exceptional by clearly demarcating the somatic and the psychological. He wrote ‘I should only have said that her manner was quick and excitable. She certainly had none of the usual furtive look and small defectiveness of a hysterical girl.’[47] His framing of ‘I should only have said’ sets up her psychological condition as so normal that it is hardly worth mentioning. Although Mrs B was diagnosed with hysteria, Mitchell made every effort to present it as a rare case that was not psychologically complex. He did not offer Mrs B the same treatment he offered Mrs L and Mrs C: a detailed examination of their history and trauma.[48] Rather than refusing to diagnose her with hysteria at all, Mitchell diagnosed her, but continually modified his description in order to stress that Mrs B was physically, rather than mentally, inhibited by the neurosis. Whilst specialised care was being developed for White women in asylums, Black women were seen as belonging to another psychiatric category, and were relegated to alternative spaces. The racialisation of care in Southern asylums can be clearly discerned in the photographs from the ‘Annual Report of the Georgia Lunatic Asylum’ for 1895-1896. Whilst superintendent Powell claimed that treatment for White and Black patients was identical, these photographs provide an immediate rebuttal.[49] Indeed, Figure Three captures a private alcove that was designed for the exclusive use of White women. The image shows a partially filled space that was decorated with paintings and curtains, and centred around a table with flowers.
This is particularly striking because, as Diana Louis argues, Black women at the GLA were forced to inhabit overcrowded spaces that replicated antebellum quarters for the enslaved.[50] Furthermore, the White women photographed were not dressed in patient uniform or work clothes. The space they inhabited, and the clothes they wore, allowed the asylum to frame them as only temporarily outside of respectable society. By contrast, Black women at the GLA did not even have consistent access to sanitary products or feminine wear. From the outset, the space of the asylum was rendered an exclusively custodial institution for African Americans. As Louis puts it: The conditions of the asylum, including familial separation, excessive labor, white hostility, poor provisions, health risks, and racial hostility, simultaneously echoed the horrors of enslavement and exacerbated post-emancipation challenges to Black health and citizenship.[51] Black female patients had the highest mortality rates in asylums at the turn of the century[52]—and this was a direct result of inadequate facilities, poor hygienic care, and, likely, exhaustion from excessive work. Treatment and moral therapy were entirely structured by race: White women were encouraged to conduct gentle tasks such as gardening, sewing, and socialising. They had access to segregated spaces that attempted to replicate the ambiance and familiarity of their own homes. However, the asylum was not a refuge for the African American women who were sent there; it was a custodial institution, structured in a way that made the possibility of Black women resting impossible. Black patients’ labour was productive for the asylums, and their increasing presence and illness became ideologically useful to physicians asserting the perpetual dependency of Black people. Scarlett Croft has recently completed an MA in African American Studies at Columbia University.
Notes: [1] Laura Briggs, “The Race of Hysteria: ‘Overcivilization’ and the ‘Savage’ Woman in Late Nineteenth-Century Obstetrics and Gynecology”, American Quarterly, Vol. 52, No. 2 (2000), pp. 246–73; George Beard, A Practical Treatise on Nervous Exhaustion (New York: William Wood and Company, 1880). [2] Carol S. North, 'The Classification of Hysteria and Related Disorders: Historical and Phenomenological Considerations', Behavioural Sciences, Vol. 5, No. 4 (2015), pp. 496-517. [3] Lois P. Rudnick and Alison M. Heru, ‘The “Secret” Source of “Female Hysteria”: The Role That Syphilis Played in the Construction of Female Sexuality and Psychoanalysis in the Late Nineteenth and Early Twentieth Centuries’, History of Psychiatry, Vol. 28, No. 2 (2017), p. 17. Books often described it in terms of endemic spreading between women, and suggested that people likely to suffer from the psychoneurosis should be separated. [4] Beard, A Practical Treatise, pp. 11-85. [5] As will be discussed at the end of the first section, in exceptional cases Black women were described as hysterical. More robust and extreme symptoms were usually needed to gesture towards such a diagnosis. [6] Diana Martha Louis, 'Black Women’s Psychiatric Incarceration at Georgia Lunatic Asylum in the Nineteenth Century', Journal of Women's History, Vol. 34, No. 1 (2022), p. 28. [7] Louis, 'Black Women's', p. 34. [8] Andreas De Block and Pieter R. Adriaens, 'Pathologizing Sexual Deviance: A History', Journal of Sex Research, Vol. 50, No. 3-4 (2013), pp. 276-298. Indeed, hysteria was frequently described as a ‘disease of the mind’ and thought of in terms of endemic spreading between groups of women. [9] Theophilus Powell, 'The Increase of Insanity and Tuberculosis in the Southern Negro since 1860, and its alliance, and some supposed causes', JAMA, Vol. XXVII, No. 23 (1896), p. 1185; Kristi M.
Simon, 'The Controversy Surrounding Slave Insanity: The Diagnosis, Treatment and Lived Experience of Mentally Ill Slaves in the Antebellum South', Master of Arts thesis (The Florida State University, 2018); Wendy Gonaver, The Peculiar Institution and the Making of Modern Psychiatry, 1840-1880 (Chapel Hill: University of North Carolina Press, 2018), p. 181. [10] Peter McCandless, 'A Female Malady? Women at the South Carolina Lunatic Asylum, 1828–1915', Journal of the History of Medicine and Allied Sciences, Vol. 54, No. 4 (1999), p. 553. [11] Gonaver, The Peculiar Institution, p. 112. [12] See footnote 6. [13] Simon, 'The Controversy Surrounding Slave Insanity'. [14] Cartwright derived the word from the Greek: “drapetes,” meaning a runaway (slave), and “mania,” meaning madness. [15] Samuel Cartwright, 'On the Diseases and Peculiarities of the Negro Race,' DeBow's Review of the Southern and Western States (1851), pp. 331-333. Cartwright became a tenured “Professor of Negro Diseases” at the University of Louisiana. [16] Ibid., p. 333. [17] Christopher D. E. Willobough, 'Running Away from Drapetomania: Samuel A. Cartwright, Medicine, and Race in the Antebellum South', Journal of Southern History, Vol. 84, No. 3 (2018), pp. 579-614; Benjamin Rush, Medical Inquiries and Observations Upon the Diseases of the Mind, Vol. 1 (Philadelphia: Thomas Dobson, 1794), p. 277. [18] Robert Myers, ‘“Drapetomania”: Rebellion, Defiance and Free Black Insanity in the Antebellum United States’ (UCLA, 2014). [19] George Stocking, Victorian Anthropology (New York: Free Press, 1987), p. 312. Stocking offers a discussion of the range of ways cultural evolution was understood in the Victorian period. Cultural evolutionists expanded on classical evolutionists in order to explain the existence of “primitive culture” as fitting into a rational framework of cultural progress.
Stocking argues that ‘between 1837-1871 discussions of savages become institutionalised first in ethnology and then anthropology,’ (xxii) coinciding with discourse on cultural evolution, which understood that ‘individual atoms of modern society had not yet differentiated out of larger familiar or tribal entities.’ (311). [20] Briggs, 'The Race of Hysteria', p. 246. [21] McCandless, 'A Female Malady?', p. 553. [22] Ian Hacking, “Making Up People,” in Thomas C. Heller, Morton Sosna and David E. Wellbery (eds.), Reconstructing Individualism: Autonomy, Individuality, and the Self in Western Thought (Stanford: Stanford University Press, 1986), pp. 222-236. Relevant here is Hacking’s idea of “dynamic nominalism,” in which the discursive categories used to describe someone actually determine the type of experience they have. [23] Gail Bederman, Manliness and Civilization: A Cultural History of Gender and Race in the United States, 1880-1917 (Chicago: University of Chicago Press, 1995), p. 25. Between 1870 and 1890 physicians reported much higher rates of hysteria and nervous diseases. [24] See for example: Edward B. Tylor, Early History of Mankind and the Development of Civilization (1865), Primitive Culture (1871), and The Origin of Civilization and the Primitive Condition of Man (1870). [25] Henry W. Streeter, 'Some Deductions from Gynaecological Experience', Medical Press of Western New York (January 1886), pp. 104-17. [26] Bederman, Manliness and Civilization, p. 25. [27] Hortense J. Spillers, 'Mama’s Baby, Papa’s Maybe: An American Grammar Book', Diacritics, Vol. 17, No. 2 (1987), p. 72. [28] Robert T. Morris, 'Is Evolution trying to do away with the clitoris?', The American Journal of Obstetrics and Diseases of Women and Children, Vol. 26, No. 180 (1892). [29] Henry William Flower, 'Account of the Dissection of a Bushwoman', Journal of Anatomy and Physiology, Volume 1 (Cambridge University Press, 1867), pp.
189-208; Havelock Ellis, 'Sexual inversion in women', Alienist and Neurologist, Vol. 16, No. 2 (1895); Stephen Jay Gould, The Mismeasure of Man (New York: Norton, 1981); Joseph William Howe, Excessive venery, masturbation, and continence: The etiology, pathology and treatment of the diseases resulting from venereal excesses, masturbation, and continence (New York: C.H Kerr), 108, Archives of Sexuality and Gender. In this article Howe argues that Black women are inherently more libidinous than their white counterparts. [30] Also significant here is the fact that he observes that civilised people’s eyesight was generally less affective and responsive. Eyesight often became a way of trying to discriminate whether a patient was malingering symptoms of hysteria or being authentic. [31] Morris, 'Is Evolution trying to do away with the clitoris?', p. 847. Morris argues that signs of degeneration were evident in both sexes, but insists they were more pronounced and easily discernible in the case of women. [32] D'Orsay Hecht, 'Tabes in the Negro', The American Journal of the Medical Sciences, Vol. 126, No. 4 (1903), pp. 705-720. [33] Theophilus Powell, 'The Increase of Insanity and Tuberculosis in the Southern Negro since 1860, and its alliance, and some supposed causes', JAMA, Vol. XXVII, No. 23 (1896), pp. 1185-1188. [34] Powell, 'The Increase in Insanity', p. 1187; Hecht, 'Tabes in the Negro', pp. 705-720. This work parallels D’Orsay Hecht’s argument that the “induction of civilised vices into uncivilised communities anew” made uncivilised people insane. [35] 'Hysteria among Savages', Daily Picayune (11th January 1893), p. 4; Lucien Warner, A Popular Treatise on the Functions and Diseases of Women (New York: Manhattan Publishing, 1874), p. 88. This argument is almost identical in structure and phrasing to Lucien Warner’s definition of hysteria. [36] 'Social Hysteria', Portland Oregonian (13th May 1894), p. 4. Nineteenth Century U.S. Newspapers. [37] Ibid.
[38] 'Negro Shouting', Detroit Free Press (18th February 1883). [39] Ibid. [40] Ibid. [41] Warner, A Popular Treatise, p. 88. Although Warner begins his entry on hysteria by acknowledging the role of conjecture in his account due to 'the absence of any statistics' on Black women with hysteria, he asserts with finality that there can be no doubt hysteria is more common than it was in the earlier history of our civilisation. [42] McCandless, 'A Female Malady', p. 549. [43] Ibid. [44] Ibid., p. 556. [45] Ibid., p. 543. For example, the wealth, whiteness, and social prestige of Mary Allston, who was committed to the South Carolina Lunatic Asylum in 1848, meant she had access to a private nurse, a special diet, and her own apartment. [46] S. Weir Mitchell, 'The True and False Palsies of Hysteria,' Medical News and Abstract, Clinical Lectures (Mar 1880), p. 38, 3. [47] Ibid., p. 129. [48] Ibid. Mitchell writes that he told Mrs C ‘it is absurd for a woman of intellect to let one organ disorder the whole body.’ [49] Powell, 'The Increase of Insanity', p. 1186. [50] Louis, 'Black Women's', p. 41. [51] Ibid. [52] McCandless, 'A Female Malady', p. 554. Around the turn of the century, the average mortality rate for Black patients (about 20 percent) was more than double the rate for white patients (around 9 percent).
- To What Extent is Ethnicity Important When Considering Englishness in the Twentieth Century?
During the twentieth century, England saw a dramatic increase in rates of immigration from parts of the Commonwealth, primarily in the aftermath of the Second World War and under the 1948 Nationality Act, which allowed Commonwealth migrants to work without restriction in the UK.[1] More than seventy years on, important questions remain unanswered surrounding the identity of these migrants and their children, who are sometimes not viewed as English although often born in England. Some scholars, such as Schöpflin, attribute these questions to inherent social class hierarchies within English identity and the migrants’ lack of status within them. However, most historiography roots the question of the migrants’ Englishness in the inherent xenophobia of the country. This xenophobia is largely the result of remnants of the Empire and the narratives created against migrants primarily to serve political motivations. It is worth noting here that ethnicity refers to the cultural identity (such as the language, history, and ancestry) of a group, whereas race refers to taxonomic groupings related to physical traits; both terms are relevant in this context, but the key focus here will remain on ethnicity. Another term that will be used is ‘ethnically English’, which refers to people whose ancestry is rooted within England. Similarly, in the context of ethnic comparison from an imperial standpoint, the terms ‘Englishness’ and ‘Britishness’ may be used interchangeably to describe populations and characters often associated with both England and Britain. While some importance can be given to other factors such as inherent class structures and age within English identity, ultimately ethnicity has proven to be an internally uniting factor among the ethnically English against ‘alien’ others, both racially and ethnically different from themselves. It is first worth noting how social class hierarchies may hold some importance within English identity.
In Nations, Identity, Power, Schöpflin argues that ethnicity is not as important a factor as class when examining Englishness and the ‘sentimentalised symbols of identity’ that come with it.[2] Instead, as Kennard presents, ‘he sees ethnicity as represented by class, or class as the locus for cultural reproduction.’[3] He argues that class exists as a powerful inherent component of English society and shapes how people view their identity within it: ‘Class in England has survived because regardless of what people say, it has a role and function in cultural reproduction that is hidden from view by the power relations encoded in it. Despite appearances to the contrary, class allows people a very high degree of security regarding their identity.’[4] Interestingly, the use of the word ‘survived’ to describe class in England indicates some level of inherent resistance that it must endure. Schöpflin argues that even though we say we want a classless society, we do not mean it. The inherent nature of class within English society makes it ‘hidden from view’ and therefore difficult to analyse and compare to the more explicit ethnic divisions within England. Writing in 2000, Schöpflin uses the previous century’s events surrounding ethnicity in South Africa (for example, Apartheid) and the United States (for example, the Civil Rights Movement) to make the comparison that England is unlike other countries in its preference for class over ethnicity. While he labels ethnicity and class as ‘functional equivalents’, he argues that countries such as South Africa and the United States operate from a politically divisive standpoint, placing ethnicity at the forefront as a methodology of government, whereas England uses class: ‘England is rare, subordinating ethnicity to class, and this has helped to make the country relatively open to migrants, exiles and other foreigners.
They do not fit into the class system, at any rate not immediately, and they do not threaten it either […] The fact that blacks perceive the class system as racist is irrelevant in this context.’[5] Although Schöpflin makes some curious insights into the reasons behind England’s seemingly ‘open’ immigration policy, he overlooks and essentialises the difficulties facing ‘migrants, exiles and other foreigners’ upon arrival in England, many of which stem from their foreign ethnicities. This is further cemented by his dismissal of ‘blacks’ and their perception of a racist class system as ‘irrelevant’. Additionally, Schöpflin highlights the role of the European Union as the antithesis to English hierarchical class structures, commenting on how ‘new forms of knowledge, new concentrations of social capital, new definitions of status can all be derived from EU membership.’[6] Schöpflin’s notion has since been somewhat disproven by the 2016 Brexit Referendum. While Schöpflin argues that class would unite the population against the threat of the E.U., the statistics from the Brexit Referendum demonstrate that the population was largely divided by age, with over 70% of 18–24-year-olds voting to remain, despite the threat that the E.U. supposedly presented to the English class system.[7] Although the Vote Leave campaign highlighted many problems surrounding the economy and sovereignty that the UK supposedly faced while in the E.U., Gietel-Basten emphasises that primarily, ‘it was about gut-wrenching issues like borders, culture, and the homeland.’[8] The eventual exit of the United Kingdom from the European Union was the result of Britons wanting to ‘take back control’, with immigration being the ‘single strongest issue driving people to vote Leave.’[9] Combined with the referendum’s age divide, this suggests that age may also play a part in one’s aversion to other ethnicities and immigration.
Schöpflin’s argument ignores the influence of ethnicity as well as class within English consciousness and trivialises the multifaceted intersections of class and ethnicity. As Aughey suitably states: ‘The distinction made by Schöpflin between high-status ‘class’ and low-status ‘ethnicity’—explaining the growth of ‘feeling English’ as the displacement of the former by the latter—is far too simplistic. It ignores the complex intermingling of ideas that constitute a national identity.’[10] Aughey furthers his critique and asserts that ‘for now it is the class consciousness in Englishness which acts as a brake on nationalism for England.’[11] On this view, Schöpflin not only disregards ethnicity as a factor but also casts Englishness as its own obstacle, through the inherently divisive qualities of the class system amongst ethnically English people. Instead, common ethnicity offers a uniting influence that constructs large aspects of English nationalism and identity. One reason for this is the enduring memory of the British Empire and the notions of racial superiority bolstered during its time, such as those presented in ‘The White Man’s Burden’ by Rudyard Kipling.[12] As Kumar states, ‘English nationalism, past and present, is the nationalism of an imperial state—one that carries the stamp of its imperial past even when the empire has gone.’[13] Many scholars examine the idea of migrants from the Commonwealth being tools for England instead of citizens of equal standing, particularly after the First and Second World Wars. Kumar continues, ‘There was a persistence of Empire until the second half of the century.
There were two world wars in which all nations of the United Kingdom, the Empire and the Commonwealth fought side by side, and in which any insistence on English nationalism would have been as dangerous as it would have been distasteful.’[14] During the global wars of the twentieth century, much of the British Army was formed of soldiers from territories within the Empire. One prominent example of the ethnic diversity of the British Army is the Fourteenth Army, ‘which consisted of Indians from every corner of the Raj, Gurkhas from Nepal, Kenyans, Nigerians, Rhodesians and Somalis, as well as men from Kent and Cumberland.’[15] Likewise, after the war, hundreds of workers from the then-British Caribbean were invited to migrate to Britain to fill labour shortages. However, the eventual deportation of some of these migrants, owing to poor document management by the British government, demonstrates a disposable mentality towards migrants in post-war Britain.[16] Collins articulates that ‘they were sojourners, not citizens, and from this perspective they occupied a clear position within the racial framework of the Empire-Commonwealth; they belonged to the colonial world, not the metropolitan.’[17] From this notion, one could argue that English nationalism was particularly bolstered in the post-Second World War era by a combination of patriotic pride from the war itself and the unification of Britons against outsiders challenging Britain as they knew it, whether Axis countries or British subjects from overseas. As Colley summarises, the British ‘defined themselves, in short, not just through an internal and domestic dialogue but in conscious opposition to the Other beyond their shores.’[18] From this, one could argue that English national consciousness grew as a result of the simultaneously growing ‘Other’.
The 1960s onwards saw a sharp increase of Commonwealth immigrants in England, principally East African Asians fleeing persecution in Kenya and Uganda from the late 1960s to the mid-1970s due to exclusionary nationalist policies in the newly independent nations.[19] Despite being born abroad, many of these migrants possessed British passports, making Britain the logical refuge. Of course, this influx of migrants resulted in a strong backlash from the British population and government figures. A noteworthy example is Enoch Powell’s infamous ‘Rivers of Blood’ speech delivered in 1968, which purportedly sought to report the shocking state of the country for the ‘quite ordinary working man’ due to the rise of immigration.[20] Although it received a colossal response from both opposers and supporters, the speech was only a fraction of racialised politics in England.[21] Policies to control immigration were common in the twentieth century, such as the 1971 Immigration Act, passed under the Conservative Party, which ‘sought to effectively end primary immigration from the Commonwealth.’[22] Stocker also notes how this act ‘also introduced the notion of ‘partiality’, which discriminated in favour of immigrants from Australia, Canada and New Zealand over the rest of the (largely non-white) Commonwealth.’[23] Stocker raises the point that anti-immigration policies such as the 1971 Immigration Act may be centred more around race than merely ethnicity; in other words, they reveal an aversion to a difference in physical traits rather than cultural identities. It is worth noting the debate within historiography surrounding notions of race and ethnicity, particularly their origins and purpose. While convention holds these concepts to be natural taxonomic groupings with which to categorise the population, many question their origins and argue that they are principally concepts used in tyrannical political narratives.
Tabili labels race ‘a historical artifact’ and argues that ‘definitions of racial difference, like masculinity and femininity, have been sensitive to economic and political change, mediated by class and gender, and manipulated by elites in the pursuit of power.’[24] Similarly, Smith notes how ethnicity is ‘used “instrumentally” to further individual or collective interests, particularly of competing élites who need to mobilize large followings to support their goals in the struggle for power. In this struggle ethnicity becomes a useful tool.’[25] This idea is visible in a 1978 newspaper advert, supposedly promising equality for black people in Britain under the Conservative government.[26] Yet, in the same way that people from colonial territories were used as tools to fight and rebuild after the wars, their ethnicity and existence in England proved to be the greatest political tool for uniting the majority of the nation against them, often under the Conservative Party. In a 1978 interview for Granada TV with Gordon Burns, Margaret Thatcher remarked that ‘if you want good race relations, you have got to allay people’s fears on numbers.’[27] These ‘fears on numbers’ were also the primary catalyst in the 2016 Brexit Vote Leave campaign’s victory, demonstrating how politics uses and incites a great deal of enduring xenophobia in the UK. Another remnant from the age of the British Empire that encourages xenophobia is the belief in the existence of an ‘English character’ which is assumed to contain a set of characteristics surrounding democracy, imperialism, and defence.
In the same interview for Granada TV, Thatcher expressed how ‘the British character has done so much for democracy, for law and done so much throughout the world that if there is any fear that it might be swamped, people are going to react and be rather hostile to those coming in.’[28] This formulation of a British character, acting upon the rest of the world and endowed to the colonies, encourages an exclusivity of that character for ethnically English people. This, in turn, extends to England itself, with its land being only for those whose ethnicity reflects the ‘British character’. Similarly, Collins notes reflections of this exclusivity in the specific cultural identifier of cricket, which ‘was used to re-articulate Englishness as culturally distinct and unobtainable to the immigrants and the formerly colonised subject.’[29] This establishes the exclusive nature of Englishness, its qualities ‘gatekept’ from the formerly colonised, whether in the form of smaller cultural identifiers such as cricket or larger ones such as the land of the country itself. Except when migrant labour has been needed, immigration to England has largely been treated with contempt by the public and government, often reduced to a collection of statistics which those in power seek to lower. The influence of the Empire is often overlooked when considering the stories of migrants and, as Stocker claims, ‘the crimes committed in its Empire are conveniently forgotten and the xenophobic attitudes which have run through British society for centuries are generally ignored.’[30] Britain’s involvement in the histories of many nations is often omitted when addressing immigration, leading to public confusion about the migrants’ presence in England, which in turn breeds xenophobia and ignorance towards them.
Kellas makes an interesting comparison of the United Kingdom to the United States, stating how ‘Britain is also less receptive to multi-culturalism, seeing itself not as a ‘melting-pot’ of immigrants, but as an old-established ‘nation-state’ essentially English in character.’[31] The desire to hold on to this English character forms the basis for an idea of Englishness from which those of different ethnicities are intentionally left out. It is worth mentioning, however, that former Prime Minister David Cameron made some attempt to address this in a 2014 article highlighting ‘British values’. In this list, he names ‘a belief in freedom, tolerance of others, accepting personal and social responsibility,’ and ‘respecting and upholding the rule of law’.[32] Although the article was largely written in response to reports of Islamic extremism in schools, it also mentioned teaching Britain’s history in schools ‘warts and all’, suggesting a break from the convenient forgetting of the history of the Empire. Nine years on, however, Britain’s imperial history remains non-compulsory in schools, meaning that ignorance of migrants’ stories continues, along with the spread of xenophobia and the exclusionary nature of Englishness. In conclusion, ethnicity is very important when considering Englishness, as it has formed part of the cultural identity of Britain, creating a national consciousness in opposition to others. While class plays a part in perpetuating divisive norms within British society, ethnicity, and the racism and xenophobia that can come with it, plays a bigger role in uniting the country against ‘alien’ peoples. This is largely due to the overlooking of the support from migrant workers and soldiers in the twentieth century, along with the absence of widespread education on the extent of injustices in Britain’s imperial history.
This same imperial history brought about the notion of a ‘British character’ encompassing democracy and superiority on the global stage, which has been referenced by political leaders as a further uniting tool. While Thatcher deployed the term as a uniting tool against immigration, Cameron used it to unite all people in Britain against extremism, suggesting a level of inclusion of migrants in the UK in the unification against a new enemy. Japneet Hayer has recently completed an MA in History at the University of Nottingham. Notes: [1] A. M. Messina, ‘The Impacts of Post-WWII Migration to Britain: Policy Constraints, Political Opportunism and the Alteration of Representational Politics,’ The Review of Politics, 63/2 (2001), p. 263. [2] G. Schöpflin, Nations, Identity, Power (London, 2000), p. 320. [3] A. Kennard, ‘Review: Nations, Identity, Power,’ Scottish Affairs, No. 38 (2002), p. 137. [4] Schöpflin, Nations, Identity, Power, pp. 311-2. [5] Ibid., p. 317. [6] Ibid., p. 319. [7] D. Walker, ‘How young and old would vote on Brexit now,’ BBC (2018), accessed 12/05/2023. [8] S. Gietel-Basten, ‘Why Brexit? The Toxic Mix of Immigration and Austerity,’ Population and Development Review, Vol. 42, No. 4 (2016), p. 678. [9] A. Garrett, ‘The Refugee Crisis, Brexit, and the Reframing of Immigration in Britain,’ Europe Now (2019), https://www.europenowjournal.org/2019/09/09/the-refugee-crisis-brexit-and-the-reframing-of-immigration-in-britain/, accessed 28/04/2023. [10] A. Aughey, ‘Englishness as class: A re-examination’, Ethnicities, Vol. 12, No. 4 (2012), p. 402. [11] Ibid., p. 405. [12] Rudyard Kipling, ‘The White Man’s Burden’, The Times (London, 1899). [13] K. Kumar, ‘Nation and Empire: English and British National Identity in Comparative Perspective’, Theory and Society, Vol. 29, No. 5 (2000), p. 577. [14] Ibid., p. 592. [15] A. Jackson, The British Empire and the Second World War (London, 2006), p. 2. [16] Akala, ‘The Great British Contradiction’, RSA Journal, Vol.
164, No. 2 (2018), p. 19. [17] M. Collins, ‘Cricket, Englishness and Racial Thinking,’ The Political Quarterly, Vol. 93, No. 1 (2022), p. 98. [18] L. Colley, ‘Britishness and Otherness: An Argument’, Journal of British Studies, Vol. 31, No. 4 (1992), p. 316. [19] J. Portes, S. Burgess, J. Anders, ‘The long-term outcomes of refugees: tracking the progress of the East African Asians,’ Journal of Refugee Studies, Vol. 34, No. 2 (2020), p. 3. [20] E. Powell, ‘Rivers of Blood’ speech (delivered in Birmingham, 1968), accessed 28/04/2023. [21] R. Shepherd, Enoch Powell: A Biography (London, 1996), p. 353. [22] P. Stocker, English Uprising (London, 2017), p. 54. [23] Ibid., p. 54. [24] L. Tabili, ‘The Construction of Racial Difference in Twentieth-Century Britain: The Special Restriction (Coloured Alien Seamen) Order, 1925,’ Journal of British Studies, Vol. 33, No. 1 (1994), p. 59. [25] A. D. Smith, National Identity (London, 1991), p. 20. [26] ‘Labour Says He’s Black. Tories Say He’s British’ Advert, Saatchi & Saatchi (1978). [27] TV Interview for Granada World in Action (1978), accessed 28/04/2023. [28] Ibid. [29] Collins, ‘Cricket, Englishness and Racial Thinking,’ p. 96. [30] Stocker, English Uprising, p. 18. [31] J. G. Kellas, The Politics of Nationalism and Ethnicity (London, 1991), p. 105. [32] D. Cameron, ‘British Values’, Mail on Sunday (2014), accessed 30/04/2023.
- Why Historians Should Study the Explosion of Vernacular Literature in the Late Middle Ages
Studying the explosion of vernacular literature is fundamentally important to historians of the Middle Ages because it informs them about contemporary political, social, and cultural developments. Vernacular literature developed as a form of expression belonging to the emerging merchant class and the newly literate, meaning that literature was no longer the preserve of the Latin-reading rich and the monastically trained, leading to an increasingly diverse and secular audience. Through the ‘popular lens’ of the vernacular, historians can examine contemporary explorations and critiques of society, allowing them to see perspectives beyond those of people rich enough to learn Latin. It must be noted, however, that this was not a literary ‘revolution’, as books remained expensive regardless of the language, meaning that the explosion of the vernacular increased authorship and readership, but not financial accessibility. Vernacular literature gave more women a voice, allowing historians to integrate them into the narrative. It was also immensely political, often becoming a vehicle for criticism and even rebellion, whereas Latin remained the language of authority. Vernacular writing allowed religious and philosophical discussion beyond the nobility and clergy, enabling new ideas to spread and challenge orthodox beliefs. It is, therefore, essential for medieval historians to study the explosion of vernacular literature, as it offers an insight into the minds of contemporaries and tells us so much about the Middle Ages. The explosion of vernacular writing suggests that an increasing number of people were literate, implying that books and their ideas were no longer the preserve of elites or monastics.
In the 14th century, it is estimated that up to fifty per cent of the male population could read, suggesting sizable literary circulation.[1] However, this is not fully reflective of the literate population, as people engaged with books in different ways; for example, now that texts were in the vernacular, those who had books read aloud to them could understand a growing repertoire. Vernacular education manuals survive, such as one preserved from the 15th century, suggesting growing literacy and, therefore, increasing social mobility in the Middle Ages.[2] Further, Books of Hours began to be published in vernaculars, and it has been argued that their popularity points to the expanding literacy of the laity and the growth of a vibrant reading culture.[3] It is true that these works were extremely expensive, often heavily decorated and considered status items, meaning that they remained unaffordable for many. However, they still show the literary expansion, not revolution, that took place during the Middle Ages, meaning that they are essential sources for understanding the period. The vernacular allows for more informal writing, including critiques of society, which are valuable to historians. Some have suggested that in The Canterbury Tales, the parson, knight, and ploughman represent the Three Estates of society, giving historians insight into the social hierarchy as perceived by contemporaries.[4] Further, the Miller’s satirical tale counters the aristocratic romance of the knight’s, creating a conflict between their classes that reveals broader resentment at social stratification.[5] During his prologue, the eponymous Miller refuses to let the monk tell his story first and is in “no mood for manners or to doff”.[6] This can be read as a critique and satire of the social hierarchy, and shows historians how contemporaries interpreted the order they were stratified within. 
Thus, vernacular literature is used to represent and satirise highly stratified medieval society, and is useful to historians who want to understand contemporary society and an increased range of people’s mindsets. The vernacular also gave a new voice to women, telling us about contemporary patriarchal social relations and how some women countered them. Chaucer’s The Wife of Bath satirises the Wife’s five marriages, and she is identified only by her relations to men, perpetuating misogynistic stereotypes and allowing historians to examine contemporary attitudes towards women.[7] On the other hand, the writings of Christine de Pizan give historians a rare insight into the life and mind of a woman in the Middle Ages without the lens of a male writer. Some have argued that her choice of French was influenced by a desire to create courtly literature specific to France, allowing historians to examine contemporary attitudes at the French Court.[8] However, some could argue that writing for the Court restricted her audience to the elites (who were Latin-educated anyway) and was an aesthetic choice rather than a desire to expand her readership. Despite this, the vernacular increased her audience, allowing non-Latin-educated people beyond the Court to read her works, and countered prevailing misogynistic stereotypes to all audiences.
In The Book of the City of Ladies, she tackles misogyny by offering an alternative view of history in which women’s contributions are fully recognised and writes that “God has never criticised the female sex more than the male sex”, equating men and women within a universally understood Christian context that sparked debate, showing that vernacular literature gave medieval women a voice and allowed them to counter prevailing sexism.[9] However, some historians argue that Pizan’s writings are somewhat conservative, as they almost exclusively discuss aristocratic women.[10] Despite this, her other works discuss a greater social range; The Treasure of the City of Ladies has advice for everyone, from “princesses” to “prostitutes”, so that “everyone may benefit” from its advice; Pizan, the first professional female writer, writes for all women, even if they cannot read her texts.[11] Therefore, the vernacular literature that exploded in the Middle Ages gives women a voice and allows historians to see what life was truly like for them, re-integrating them into the historical narrative. The explosion of vernacular literature allows historians to explore medieval politics through the lens of someone ‘ordinary’ who participated in them. It tells us that medieval people were politically engaged; indeed, the ‘explosion’ even implies an increase in political engagement. Latin is often seen as the language of authority, and vernacular as the language of the people; thus, vernacular writing is a record without the authorities’ agenda embedded into it.
Indeed, using the fact that indictments were submitted in Latin but evidence was in English, Helen Wicker argues that the shift from Latin to English creates a political “tension” in the sources that shows the contrast between popular politics and authority.[12] Further, she posits that vernacular development was policed to avoid treasonous language, showing historians the power of language in the Middle Ages.[13] The choice of vernacular may be because, as Dante himself writes, it is perceived to be “the more noble” because “the whole world employs it”.[14] Therefore, we could argue that Dante and his contemporaries saw the vernacular as the “more noble” language because it allowed universal, not limited, political expression.[15] Indeed, The Divine Comedy is inherently political, making it a crucial source for understanding Florentine politics. Some historians argue that it is a “constant comment on the sinful greed of the mercantile class”, showing how Dante uses the vernacular to criticise the growing merchant class.[16] Indeed, in Paradiso, Dante writes that a group of wealthy people are “destroyed by their own pride!”, a thinly veiled political critique, perhaps aimed at Florence’s powerful mercantile families.[17] This gives historians a window into 14th-century Florentine politics, allowing them to examine socio-political perspectives in a way that they could not have in an official Latin text. The vernacular also allowed an expression of national identity, which was an increasingly contested political debate in medieval Europe.
Benedict Anderson argues that the decline of Latin and the growth of the vernacular gave rise to ‘nationalist’ sentiments across the world, emphasising the power of print media in shaping the social psyche.[18] Anderson’s work focuses on a slightly later time period, but his analysis can still inform our understanding of the Middle Ages; indeed, other historians have argued that a “nationalist discourse” ran through some medieval literature.[19] Whilst it may be anachronistic to apply ‘nationalist’ to the Middle Ages, ‘national consciousness’ is perhaps more fitting, and is evident in Scottish literature. In The Bruce, John Barbour writes that the Scots were “in bondage” to the English, and that Robert the Bruce’s “bravery” gave other people courage to fight them.[20] Thus, he uses vernacular literature to stir up national pride, patriotism, and hatred of the English. This is useful to historians as it provides a valuable source for Robert’s reign, and shows how people at the time felt about Edward II’s attempted invasion. The explosion of religious and philosophical vernacular texts also allowed people to understand and debate ideas themselves, challenging the hegemony of the Latin Church and showing historians that religious innovation could be fuelled by the vernacular. The rediscovery of Aristotle’s writings (through translations from Arabic) detracted from Biblical studies, showing how translation could introduce new ideas to new audiences and challenge conventional beliefs.[21] Further, when John Wycliffe translated the Bible into English, it became the most widely disseminated medieval English text, allowing more people to read Scripture and form their own direct relationships with God.[22] This was fiercely opposed by the Catholic Church, which perceived it as a threat to the orthodox teaching of priests as intercessors between the worshipper and God.
It is true that the translation was scholarly, and arguably not intended for the masses, but the furore it created in the Catholic Church, and the Lollard movement it inspired, speak to its effectiveness at sparking religious debate.[23] This shows historians how vernacular religious works could take on lives beyond their original purposes to create arguments and change society, underlining the importance of studying the explosion of vernacular literature. Some scholars argue that Dante explores new theological ideas, with the “graded descent” of Inferno differing from traditional conceptualisations of Hell as “unbridled disorder”.[24] His stratification of Hell is more nuanced than contemporary notions of evil, implying that there are different levels of sin and sinners, and that not every bad person is inherently totally evil.[25] This use of the vernacular allows authors to express religious ideas to wider audiences, and allows historians to track the development of theological innovation. Boccaccio also uses the vernacular to discuss religion, in his case criticising the “guzzling hypocrisy” of the Church.[26] The protagonist of one tale, a wealthy man, tells an Inquisitor that “for every one” bowl of broth the poor drink, “you shall receive a hundredfold”, implying that the Friar would drown in the broth due to his (and by extension, the Church’s) greed.[27] This critique of the Church was extraordinarily bold, and the vernacular allowed more people to read it, sparking debate over the Church’s corruption. Thus, the vernacular becomes a vehicle for religious satire and criticism, and a historian studying the Middle Ages can now see evidence of popular religious commentary and criticism. Ultimately, the study of the explosion of vernacular literature in the Middle Ages is of vital importance to any historian studying the intellectual, cultural, political and social history of the time.
This is because it reveals so much about medieval society: how people moved within its structure, how they criticised it, and how an increasingly literate, engaged, and secular readership emerged. It also gives a voice to women, who could increasingly express themselves in writing, letting historians appreciate their contributions to literature and society. The development of the vernacular was inherently political, as it contributed to growing national identities and exposed the tension between the Latin of authority and the languages of the masses, revealing contemporary popular political opinions. Religious and philosophical debate was expanded with the vernacular, as more people could now engage with fundamental questions; this extended these subjects beyond the Latin-learned clergy to anyone with access to a book. This is not to say that the vernacular caused a literary ‘revolution’, as books remained extremely expensive and elite objects. Rather, it started the development of mass literacy and literary engagement, and opened up previously elite subjects to new audiences. Therefore, historians should absolutely study the explosion of vernacular writing in the Middle Ages because it tells us so much about the political, social, and cultural changes in the medieval world.

Callum Tilley has just completed his first year of a BA in History at Durham University (University College).

Notes:
[1] Laurel Amtower and Jacqueline Vanhoutte (eds.), A Companion to Chaucer and his Contemporaries (Toronto, 2009), p. 304
[2] Ibid., pp. 329-331
[3] Kathleen Kennedy, ‘Reintroducing the English Books of Hours, or “English Primers”’, Speculum, Vol. 89, No. 3 (July 2014), p. 695; Ibid., p. 719
[4] Amtower and Vanhoutte, Companion to Chaucer, p. 70
[5] James Simpson (ed.), The Norton Anthology of English Literature, tenth edition: The Middle Ages (New York, 2018), p. 282
[6] Geoffrey Chaucer, The Canterbury Tales (London, Ed. 2003), p. 70
[7] Amtower and Vanhoutte, Companion to Chaucer, p. 81
[8] Jane Hall McCash, ‘The Role of Women in the Rise of the Vernacular’, Comparative Literature, Vol. 60, No. 1 (Winter 2008), p. 47
[9] Christine de Pizan, The Book of the City of Ladies, trans. Rosalind Brown-Grant (London, Ed. 1999), p. 87
[10] Rosalind Brown-Grant, Christine de Pizan and the Moral Defence of Women: Reading Beyond Gender (Cambridge, 1999), p. 129
[11] Christine de Pizan, The Treasure of the City of Ladies, trans. Sarah Lawson (London, Ed. 2003), p. 5; Ibid., p. 158; Ibid., p. 154
[12] Helen Wicker, ‘The Politics of Vernacular Speech: Cases of Treasonable Language, c. 1440-1453’, in Helen Wicker and Elizabeth Salter (eds.), Vernacularity in England and Wales, c. 1300-1550 (Utrecht, 2011), pp. 173-174
[13] Ibid., p. 173
[14] Dante Alighieri, De vulgari eloquentia, ed. and trans. Steven Botterill (Cambridge, Ed. 2009), p. 3
[15] Steven Botterill has argued this; see ‘Introduction’ in De vulgari eloquentia, p. xxiii
[16] Stanley Chandler and Julius Molinaro, The Culture of Italy: Medieval to Modern (Toronto, 1979), p. 54
[17] Dante Alighieri, The Divine Comedy, trans. Robin Kirkpatrick (London, Ed. 2012), p. 397
[18] See Benedict Anderson, Imagined Communities (London, Ed. 2006)
[19] Roderick Lyall, ‘The Literature of Lowland Scotland, 1350-1700’, in Paul Scott (ed.), Scotland: A Concise Cultural History (Edinburgh, 1993), p. 77
[20] John Barbour, The Bruce, trans. George Eyre-Todd (Glasgow, Ed. 1907), p. 7; Ibid., p. 124
[21] George Gordon Coulton, Medieval Panorama: The English Scene from Conquest to Reformation (Cambridge, 1939), p. 412
[22] Elizabeth Solopova (ed.), The Wycliffite Bible: Origin, History and Interpretation (Leiden, 2016), p. 1
[23] Ibid., p. 2
[24] Robin Kirkpatrick, ‘Introduction’, in Dante, The Divine Comedy, p. xviii
[25] See Dante Alighieri, The Divine Comedy, trans. Robin Kirkpatrick (London, Ed. 2012), p. 1 (Inferno)
[26] Giovanni Boccaccio, The Decameron, trans. Wayne Rebhorn (New York, 2013), p. 56
[27] Ibid., p. 56