- Was the revolt in Zanzibar of 1964 determined by race or ideology?
The argument that the revolt in Zanzibar of 1964—the Zanzibar Revolution—was determined by a sole determinant, in a binary choice between race and ideology, is a specious one. Whilst it is easy to interpret the events that ignited the Zanzibar Revolution as the product of racial animus between the dominant Arab and South Asian minority and the Afro-Shirazi majority, or through a Cold War ideological lens, as a Marxist uprising, these notions are reductive and do not represent the revolution’s genesis in its entirety. To understand the causes of the Zanzibar Revolution fully, the politicisation of ethnicity, cognisant of the prevailing socioeconomic conditions, and the paradox of blurring race within ideology—racial nationalism—must also be considered. This paper asserts that it was the inherent socioeconomic disparity, transcending both race and ideology, that provided the impetus for the 1964 revolution.

Racial odium was an incontrovertible determinant in the inducement of Zanzibari revolutionary sentiment. Zanzibar, during Omani rule in the nineteenth century, developed into a formidable mercantile state underpinned by a slave-plantation economy[1]: “a vile center for slavery”.[2]
Arabs and South Asians emerged as the ruling classes, as landowners and merchants respectively, and the indigenous Shirazi and African mainlanders as the peasantry and enslaved.[3] The alienation of indigenous populations and the forced transportation of slaves from the African mainland enforced the “Arabization” of Zanzibar, entrenching racial divisions in an ethnic hierarchy—a “racial state”.[4] Glassman asserts that British colonial rule demarcated sociopolitical identities in “fixed, mutually exclusive terms that fetishized notions of racial and ethnic ‘purity’”.[5] To illustrate, despite the abolition of slavery in 1897,[6] vituperative Arab attitudes towards Africans remained anchored in the squatter economy that succeeded the plantation system, reinforced by a colonial “racial paradigm...that tended to label population by race, and race [by] function”.[7] In this view, the Arabs were the landowning elites, the South Asians the merchant class, and the Africans “the downtrodden”.[8] The racial inequality in cosmopolitan Zanzibar naturally manifested itself, through democratisation, as the politicisation of ethnicity—“regenerating complex” racial and ethnic nationalisms[9]—where the distinction between race and ideology is blurred. The formation of the political parties of the Zama za Siasa[10][i]—the predominantly Arab Zanzibar Nationalist Party (ZNP) and the Afro-Shirazi Party (ASP)—to espouse their respective racial interests is fundamental to understanding the incitement of the Zanzibar Revolution. 
For instance, the ASP united the ethnic groups of the African mainlanders and indigenous Shirazi, under the ideology of African nationalism, to crusade against the sociopolitical dominance of an Arab hegemony which had used cosmopolitanism as a “deceptive façade for cultural chauvinism and racial injustice”.[11] Racial nationalism also served to unite the sectional ethnicities within each party: Arab sects including the Bohora and the Ismaili populations[12] united under the ZNP, while the various African ethnicities—the “Swahili, Hadiumu, and Shirazi”[13]—allied with the ASP. It must be noted, however, that the ZNP’s racial nationalism was furtive, disguised by a ‘multiracialist’ interpretation of nationalism so as to win the allegiance of other racial communities, whilst “practicing racialism” to preserve Arab interests.[14] Fouéré asserts that it is this racial nationalism, “re-appropriated” by the ZNP and ASP in the Zama za Siasa, which culminated in the revolution.[15] Prerevolutionary racial nationalism conforms to Glassman’s notion of an ‘exclusionary national categorical order’, in which society was apportioned into “mutually exclusive identities” along ethnic parameters, creating the preconditions for racial dehumanization—’us against them’.[16] By politicizing ethnicity, racial nationalism and dehumanization—fueled by ubiquitous, virulent anxieties—culminated in the overtly ‘genocidal’[17] violence against the Arab minority by African revolutionaries seeking retribution. It can thus be argued that indelible racial animus, predating colonial rule in origin and intensified by its presence, underpinned the revolutionary sentiment that determined the Zanzibar Revolution. Before examining the significance of ideology in the Zanzibar Revolution, the paradox of amalgamating race within ideology, given the binary argument that it was either race or ideology that determined its impetus, must be contemplated. 
Racial nationalism is an ideology in the sense that it espouses the view that national identity is defined by race, yet it is inherently inseparable from race itself. This paradox is important to note given that other ideologies—namely Marxism—within the Cold War context of the revolution, affected its instigation. In this view, racial nationalism is a racial determinant of the Zanzibar Revolution, not an ideological one. The importance of ideology as a determinant in the Zanzibar Revolution varies significantly, as the ideologies at play were heterogeneous: Marxism, Zanzibari nationalism, authoritarianism, and Pan-Africanism. The ASP espoused racialised African nationalism,[18] with Pan-Africanist sentiment, evident in its support for Tanganyikan President Julius Nyerere;[19] the ZNP advocated a Zanzibari nationalism that “subsumed all divisions of race”,[20] centering loyalty on an Islamic interpretation of the Swahili tradition of “ustaarabu”—’civilization’—and advocating “siasa si ngoma”, the notion that politics does not equate to ethnicity;[21] and the Umma Party staunchly endorsed Marxist socialism and, as Burgess identifies, held ‘Mao and Stalin to possess the answers’ to correct Zanzibar’s history of “underdevelopment and inequality”.[22] Radical socialists and Marxists had been crucial in establishing the ZNP in 1957, such as A.M. 
Babu, who sought to position the party within Cold War ‘binary confrontations of socialism and capitalism’.[23] This radical sect seceded in 1963 to establish the Umma Party, which “espoused socialism as its official creed”.[24] Fouéré notes how intellectual Zanzibari socialists, shaped by the nation’s historic economic disparity and by emergent Pan-Africanist and nationalist sentiments, came to assume a Marxist identity amid the collapse of anti-colonial ideologies and in reaction to increasing ZNP “authoritarianism”.[25] Whilst the Umma Party did not ‘engineer’ the revolution, with Babu himself identifying it as originating as merely “a lumpen uprising by angry...frustrated urban youths who had simply aimed to burn down the city of Zanzibar”[26], Hwang argues that it did aid its transformation into a “socialist revolution”.[27] This view is corroborated by Speller, who identified the replacement of a “conservative Arab-dominated regime with one that espoused the principles of African nationalism and radical socialism”.[28] The entrenched sociopolitical and economic disparity between the Arab and Afro-Shirazi populations conveniently fitted Marxist rhetoric, coupling the “African nationalist discourse of racial grievance” with socialist thinking, which, Burgess posits, justified the “assault on the wealth and exclusivity” of the Arab and South Asian communities in the Zanzibar Revolution.[29] This demonstrates that it was not an exclusive ideology, but a combination of two—radical socialism and Pan-Africanism—that shaped revolutionary thinking in determining the Zanzibar Revolution against the “petit-bourgeoisie”[30] of the ZNP intelligentsia. As has been considered, historians generally attribute the cause of the revolution to either race or ideology, which oversimplifies the socioeconomic, political, and ‘racial-class’[31] complexities inherent in 1960s Zanzibar. 
Assessing both the rhetoric and discourse of racial nationalists and radical socialists illuminates the fact that they sought a mutual objective: the overthrow of Arab domination or, more precisely, an overhaul of the class system that underpinned ethnic and racial divisions and disparities. There is academic agreement on the view that the disparity and polarization between classes, along ethnic lines, transcended both race and ideology in determining the Zanzibar Revolution. Speller highlights the concentration of “land, wealth, and political power”[32] in the Arab community and the monopoly on business enjoyed by the South Asians as causing prominent ethnic socioeconomic inequalities—a ‘mixed racial-class division’.[33] Burgess draws on the thinking of Babu and his radical socialist following, which identified that Zanzibar was divided not between Arabs and Africans, but between “capital and labor”.[34] In this view, the subjugation of the Afro-Shirazi majority by the dominant Arab-South Asian elite was the fundamental justification for the Zanzibar Revolution, with race and ideology merely differing vehicles through which to understand the existent socioeconomic disparity. Racial or ideological determinants of the Zanzibar Revolution—be it a socialist revolution or a Pan-African uprising—are subsumed by the overarching, universal revolutionary motivation to correct the immanent domination of Arabs over the Afro-Shirazi. Racial and ideological accounts of the causes of the Zanzibar Revolution are thus superficially plausible but ultimately incomplete. It is undeniable that race played a pivotal role in inciting a violent repudiation of Arab hegemonic dominance over Zanzibari society. This is true, also, of the indubitable influence ideology—both Marxist and Pan-Africanist—had on mobilizing and justifying the instigation of the revolution. Neither, however, explains the origins of the racial odium or ideological antagonism present in January 1964. 
The essence of the revolution was embedded in the generational subjugation of Africans, both indigenous and mainlander, by the dominant Arab elite, nascent in the period of Omani rule and intensified by colonial interference. This feudalism established a virulent socioeconomic class division that transcended racial or ideological enmities. The fundamental determinant of the Zanzibar Revolution was the drive to reverse this structural class disparity, which, through the emergence of racial nationalism and Marxist theory, was translated into either racial or ideological accounts, rather than understood as the product of an idiosyncratic Zanzibari situation.

Will Kingston-Cox is currently in his 3rd year of a BA in History and Politics at the University of Warwick.

Notes:
[1] Jonathon Glassman, 'Sorting out the Tribes: The Creation of Racial Identities in Colonial Zanzibar’s Newspaper Wars', Journal of African History, 41 (2001), p. 402
[2] Don Petterson, Revolution in Zanzibar: An American’s Cold War Tale (Boulder, Colorado: Westview Press, 2002), p. 4
[3] Abdul Sheriff, ‘Race and Class in the Politics of Zanzibar’, Africa Spectrum, 36(3) (2001), p. 301
[4] Jonathon Glassman, 'Sorting out the Tribes: The Creation of Racial Identities in Colonial Zanzibar’s Newspaper Wars', Journal of African History, 41 (2001), p. 402
[5] Ibid., pp. 396-397
[6] Don Petterson, Revolution in Zanzibar: An American’s Cold War Tale (Boulder, Colorado: Westview Press, 2002), p. 12
[7] Abdul Sheriff, ‘Race and Class in the Politics of Zanzibar’, Africa Spectrum, 36(3) (2001), p. 301
[8] Ibid., p. 301
[9] Kyu-Deug Hwang, ‘Revisiting the Politics of Zanzibar: In Search of the Root Causes of the 1964 Revolution’, International Area Review, 12(2) (2001), p. 30
[10] Abdul Sheriff, ‘Race and Class in the Politics of Zanzibar’, Africa Spectrum, 36(3) (2001), p. 310
[11] G. Thomas Burgess, Race, Revolution, and the Struggle for Human Rights in Zanzibar: The Memoirs of Ali Sultan Issa and Seif Sharif Hamad (Athens: Ohio University Press, 2009), pp. 19-20
[12] Ibid., p. 17
[13] Ibid., p. 17
[14] Kyu-Deug Hwang, ‘Revisiting the Politics of Zanzibar: In Search of the Root Causes of the 1964 Revolution’, International Area Review, 12(2) (2001), p. 28; see Johannes Mosare, ‘Background to the Revolution in Zanzibar’, A History of Tanzania (1969), pp. 230-231
[15] Marie-Aude Fouéré, ‘Reinterpreting Revolutionary Zanzibar in the Media Today: The Case of Dira Newspaper’, Journal of Eastern African Studies, 6 (2012), pp. 679-680
[16] Jonathon Glassman, 'Sorting out the Tribes: The Creation of Racial Identities in Colonial Zanzibar’s Newspaper Wars', Journal of African History, 41 (2001), pp. 397-398
[17] Don Petterson, Revolution in Zanzibar: An American’s Cold War Tale (Boulder, Colorado: Westview Press, 2002), p. 94
[18] Marie-Aude Fouéré, ‘Reinterpreting Revolutionary Zanzibar in the Media Today: The Case of Dira Newspaper’, Journal of Eastern African Studies, 6 (2012), pp. 679-680
[19] Abdul Sheriff, ‘Race and Class in the Politics of Zanzibar’, Africa Spectrum, 36(3) (2001), p. 310
[20] Jonathon Glassman, 'Sorting out the Tribes: The Creation of Racial Identities in Colonial Zanzibar’s Newspaper Wars', Journal of African History, 41 (2001), p. 406
[21] Ibid., p. 406
[22] G. Thomas Burgess, Race, Revolution, and the Struggle for Human Rights in Zanzibar: The Memoirs of Ali Sultan Issa and Seif Sharif Hamad (Athens: Ohio University Press, 2009), p. 21
[23] G. Thomas Burgess, ‘Mao in Zanzibar: Nationalism, Discipline and the (De)construction of Afro-Asian Solidarities’, in Christopher J. Lee (ed.), Making a World After Empire: The Bandung Moment and its Political Afterlives (Athens: Ohio University Press, 2010), p. 212
[24] G. Thomas Burgess, Race, Revolution, and the Struggle for Human Rights in Zanzibar: The Memoirs of Ali Sultan Issa and Seif Sharif Hamad (Athens: Ohio University Press, 2009), p. 21
[25] Marie-Aude Fouéré, ‘Reinterpreting Revolutionary Zanzibar in the Media Today: The Case of Dira Newspaper’, Journal of Eastern African Studies, 6 (2012), p. 679
[26] Kyu-Deug Hwang, ‘Revisiting the Politics of Zanzibar: In Search of the Root Causes of the 1964 Revolution’, International Area Review, 12(2) (2001), p. 27
[27] Ibid., p. 27
[28] Ian Speller, ‘An African Cuba? Britain and the Zanzibar Revolution, 1964’, Journal of Imperial and Commonwealth History, 35(2) (2007), pp. 283-284
[29] G. Thomas Burgess, Race, Revolution, and the Struggle for Human Rights in Zanzibar: The Memoirs of Ali Sultan Issa and Seif Sharif Hamad (Athens: Ohio University Press, 2009), p. 2
[30] Ibid., p. 21
[31] Ian Speller, ‘An African Cuba? Britain and the Zanzibar Revolution, 1964’, Journal of Imperial and Commonwealth History, 35(2) (2007), p. 285
[32] Ibid., p. 285
[33] Ibid., p. 285
[34] G. Thomas Burgess, ‘Mao in Zanzibar: Nationalism, Discipline and the (De)construction of Afro-Asian Solidarities’, in Christopher J. Lee (ed.), Making a World After Empire: The Bandung Moment and its Political Afterlives (Athens: Ohio University Press, 2010), p. 213
[i] Zama za Siasa: ‘age of politics’, from 1957 to 1964; the democratisation process
- How far did More’s Utopia subvert the central principles of Renaissance humanism?
Thomas More’s 1515 text Utopia (fully titled On the Best State of a Commonwealth and on the New Island of Utopia) has become one of the foremost works of the Renaissance, celebrated both as a radical work of political philosophy and as an impressive literary feat on its own merit. A dialogue in two books, Utopia sees the well-travelled Raphael Hythloday describe in minute detail the fictional island of Utopia – its culture and governmental institutions, and its happy citizens, who are “so well governed with so few laws.”[1] With all property held in common supposedly yielding harmony and satisfaction, Hythloday holds up the Utopian system as the ideal society – such that the name ‘Utopia’ has come to denote societal perfection. However, despite its current status and influence, Utopia was considered problematic to the humanist movement upon publication. Responses from More’s fellow humanists were lukewarm: Guillaume Budé, for example, was unsure if it should be read literally or allegorically.[2] Even More’s close friend Desiderius Erasmus was not forthcoming – his commendation of the work came uncharacteristically late (only published in 1518), and offered the wistful reflection that More would have been better pursuing scholarship exclusively, rather than a disruptive legal career.[3] Perhaps because of such responses, Utopia was only ever read by a minority of European humanists, leading many to the conclusion that Utopia was not meant to elucidate but rather to critique humanist principles by analogy, showing their potential to corrupt. Before any meaningful statement can be made in reference to this problem, it is necessary to survey the many historiographical contributions to this debate, upon which much of this analysis shall rest. Central to the scholarship on More is the issue of how seriously one should take Utopia as a model for society. 
Was it merely a constructed literary conceit, or, in the style of Plato’s Republic, is it a serious suggestion for the transformation of Europe? In other words, is Utopia an idyll or an ideal? Foundational contributions to this debate came from J. H. Hexter, who saw Utopia as More’s vision of the perfect Christian commonwealth, a thesis developed most influentially by Quentin Skinner, who argued that this conception of a Christian ideal aimed to take northern humanist principles to their logical conclusion, creating an image of what humanism in action looked like.[4] A convincing alternative to this position came from Dermot Fenlon, who saw Utopia not as a radical defence of humanist ideals, but rather as a warning that humanism may be misguided.[5] G. R. Elton similarly considered Utopia a scathing satire on Christian humanism.[6] More recent work has attempted to situate More within the longer-term history of political thought, interpreting his attack on property rights and his vision of communal ownership as a forerunner to Marx and the Christian socialist tradition.[7] However, in order to explore the subversive nature of Utopia in any meaningful way, the text must necessarily be read within its immediate context – that of Christian Europe and the Northern Renaissance, rather than that of 19th- or 20th-century political theory.[8] Therefore, this essay shall proceed by taking account of precisely what humanism was and how it manifested in More’s text, before examining how More dealt with several core humanist issues relative to other humanist writers across the continent. This analysis will allow for a degree of development of the positions of Skinner and Hexter, concluding that the aim of Utopia was not to subvert but to adapt humanism into a more pragmatic model; thus, it was a different kind of humanism, not a subversive critique. What were the principles of ‘Renaissance humanism’, and where were they realised in Utopia? 
It should be noted that More’s status as a humanist is rarely questioned: his interests and commitments align with those of other humanist thinkers across the continent.[9] Many of the most famous humanists were concerned with reforming individuals and society, challenging corrupt or hypocritical institutions, and promoting a certain kind of Christian spirituality focused on virtuous deeds rather than ritualistic practices. Parallels on all of these fronts can be drawn. For example, where Roger Ascham’s Schoolmaster and Erasmus’ Education of a Christian Prince advocate principled classical education as a means to ensure societal harmony, More portrays the Utopians as a people who “delight in education,” access to which is “provided to every child.”[10] Similarly, Utopia depicts an idealised system of ecclesiastical power in which a strict moral code governs those at the top of the hierarchy, suggesting More’s shared concern with Erasmus’ satire on church hypocrisy in the Praise of Folly and (more directly) in John Colet’s famous sermon on the moral virtues of priests.[11] The simplicity of the philosophia Christi is replicated in Utopian religion, which adheres to Christian morality, but not to the sterile ritual observances.[12] Even in terms of form and style, there is much to identify Utopia as a humanist work. More uses another’s voice (that of Hythloday) to discuss the merits of the Utopian system, as was common in many contemporary humanist works.[13] Indeed, the cryptic pun within the island’s name itself contains echoes of Erasmian wit: “Utopia” can be translated as “nowhere”. The Renaissance humanist thinkers were thus primarily concerned with education and Christian virtue on the individual level, and their relationship to major societal reform; ostensibly at least, Utopia shares in these values. Even before he begins constructing Utopian society, More provides hints in his writing about humanists themselves. 
Though he does not reference any thinkers by name, by mentioning those philosophers who had the ear of princes and kings, it is safe to assume More had in mind the humanists employed in courts around Europe. The dialogue reconstructs a central debate on Plato’s notion of ‘philosopher-kings’, with Hythloday arguing that philosophers as advisors would be futile, since “unless kings become philosophical themselves” they would be too “infected with false values from boyhood” to even understand or accept the philosophers’ advice.[14] The fictional ‘More’ retorts that “you must not abandon the commonwealth” simply because it is difficult to “pluck up bad ideas by the root” through persuasion.[15] This debate – between Hythloday’s apathy and More’s constructive pessimism – can help to frame our interpretation of the text, for it contains both a rejection of mainstream humanism and a recommitment to humanist ideals. While he recognised the obvious limitations of top-down influence (limitations which Hythloday considers insurmountable), ‘More’ remains convinced that humanists had a duty to try, as some mild reforms would undoubtedly still be better than none at all. Skinner related this to More’s professional life: whilst composing the text in 1515, he was in the process of accepting a post in Henry VIII’s court, where he could wield at least some degree of influence over political affairs. This defence of an active, “Ciceronian humanism,” as Skinner calls it, is a means by which More justifies (perhaps both to himself and others) his move into the world of court politics.[16] Furthermore, the discussion in Book I on the humanist philosopher’s role crucially reveals More’s approach to be that of a realist, who sees the limited scope for radical change in the real world and accepts necessary compromise. More’s realism must inform how we analyse the text going forward. How, then, did Utopia deal with Christianity? 
Although Utopians are not formally Christians, they are deeply religious, with some apparently more pagan (worshipping, among other things, the sun and the moon), but “the vast majority, and those by far the wiser ones” pray to something akin to a Christian God and follow moral codes similar to Biblical teachings.[17] That the Utopians are so open to Hythloday’s proposal of conversion demonstrates the compatibility of Utopian rationality with Christian morality. Crucially, however, in the absence of a priest, the Utopians have no access to many of the sacraments.[18] This resonates with the humanist critique of “the sterility and formalism of the contemporary church” so central to the work of Erasmus: while the Utopians do not follow the rigorous performative demonstrations of faith, they are still good Christians because of their virtuous deeds.[19] However, we should be careful of reading an unconditional vindication of the philosophia Christi. As many scholars have since pointed out, this seemingly perfect religious culture contains many contradictions which More may have intended as a rebuke rather than an endorsement. For example, while apparently a free society, Utopia still permits enslavement and forced labour; despite its apparent social liberalism and tolerance, premarital sex, seduction and adultery are “punished with the strictest form of slavery”; and, perhaps most glaringly, the Utopians’ claims of pacifism are undermined by their use of mercenaries to fight on their behalf.[20] Therefore, the notion that Christian morality had been the foundation for the ideal humanist society is presented as an illusion. More could thus be seen as subverting the core principles of humanism to expose its fundamental problems. However, cogent as this analysis may be, to simply dismiss Utopia’s religious concerns as a subversive critique of humanism is perhaps to oversimplify the author’s more multifaceted purpose. 
It is not inconceivable that the realist More was simply illustrating how humanist ideals of virtue over ritual would play out when implemented in practice. This by necessity means demonstrating the areas in which, in More’s view, the principles of humanism would fail to live fully up to those of Christianity. If thus interpreted, More intended Utopia to be the purest expression of humanist principles as they would function in real life, not a subversion of them for the purposes of academic disputation. Perhaps the most fundamental idea – both to humanism and to More’s Utopia – is the central concern for reform. Here, an even more glaring gap appears between More and his fellow humanists. The foundational assumption of many other humanist writings is that individual reform was prior to societal reform, not the other way round. This notion frames works like Erasmus’ Christian Prince, which takes as given the trickle-down effect of a good ruler to their populace; such an assumption, Hankins has argued, permeates a great deal of humanist thought during this period.[21] Utopia, however, suggests otherwise – that individuals can only be reformed once the societal institutions which shape so much of their lives are transformed. In one heavily-analysed passage of Book I, Hythloday denounces the barbaric treatment of the working poor, which forced them to break the law to survive: in a populist rhetorical flourish, he asks, “what else is this, I ask, but first making them thieves and then punishing them for it?”[22] More’s preference for a systemic approach to understanding individual moral failings meant he advocated different and more radical solutions. 
Where other humanists simply thought better leaders would change their societies for the better, More shows a society securing humanist principles through communist ownership of resources, suggesting economic transformation was preferable.[23] While Budé defended the system of private property and hereditary nobility on the grounds that hierarchy and “pre-eminence” were the foundation of stable societies, More attacked this entrenched inequality of wealth and asymmetry of power.[24] But is this necessarily a subversion? To suggest so is again to ignore the purpose of Utopia in More’s own mind, which was to show how humanist reforms would manifest in the real world. With other humanists, he shared the same end-goal of a society freed from the rigid demands of medieval institutions; where he diverged was in the means toward these ends. For the very reasons laid out in Book I, enlightened rulers well-versed in classical learning would not be enough. In this way, Utopia did not subvert humanist principles: it was intended to illustrate the lengths necessary to see them realised. However, an honest account of Utopia’s relationship to humanism must apply certain caveats to its conceptual framework. Firstly, one should be cautious of assuming Utopia to be Thomas More’s ideal society. There is a great deal on the island which More would have abhorred: the permissibility of euthanasia, for example, would not have aligned with his devout Catholicism.[25] Furthermore, the way in which we define “humanist principles” must be carefully nuanced, as (quite clearly) not all humanists agreed. 
As Hankins suggested, it may be more useful to think of humanism not as “a system of thought, but a climate of thought,” within which debates on all issues were encouraged.[26] A climate of thought is, however, harder to precisely articulate than a system: without core tenets based on widespread agreement, it is harder to know exactly what counts as “subversion”, complicating our understanding of Utopia’s link with humanism. In place of something more concrete, this analysis of Utopia may be most clearly expressed in the terms of John Guy: “Erasmus was the most scintillating classical scholar in Europe, but his humanism is less complex than More’s… The difference between Erasmus and More is that Erasmus was a scholar and More wanted to put his humanism into practice.”[27] This notion of multiple ‘humanisms’ rather than a singular coherent doctrine provides a solid conceptual grounding on which to evaluate Utopia. Within the qualifications already outlined, it can be plausibly argued that More’s text was not intended as a subversion of the principles of Renaissance humanism; it has nonetheless been perceived as such because of humanism’s plurality as an intellectual movement. The question of whether Utopia bolstered or undermined humanism’s core tenets is therefore misguided: the text represented one of many variants of the humanist tendency of thought. In conclusion, Thomas More’s Utopia is a text which defies neat categorisation; any serious attempt to understand it requires a careful assessment of the intellectual climate which produced it and the individual idiosyncrasies of the author’s own ideas compared to those of his fellow humanists. That the text has been so variously interpreted is a consequence both of the uncertainty over More’s own sincerity, as well as of the incoherence of humanism as an intellectual movement, which could not be characterised as a rigorous school of thought. 
What can be definitively said is that Utopia is a text profoundly concerned with certain philosophical themes characteristic of Renaissance humanism, and intended to spell out how these concerns may be met with genuine solutions for the benefit of the commonwealth. Understood thus, Utopia can be seen as an attempt not to subvert or undermine, but to put into practical terms humanist principles, so that they may be realised in the societies of contemporary Europe, rather than simply in closed academic discussions. Mark Connolly has just completed his 3rd year of an MA in Medieval and Modern History at the University of St. Andrews. Notes: [1] Thomas More, Utopia, ed. and trans. George M. Logan and Robert M. Adams (Cambridge, 2002), p. 37. [2] Guillaume Budé, letter to Thomas Lipset, in Thomas More, Utopia, ed. and trans. George M. Logan, Robert M. Adams and Clarence H. Miller (Cambridge, 1995), pp. 7-19. [3] Erasmus, in John Guy, Thomas More (London, 2000), pp. 91-92. [4] J. H. Hexter, ‘Introduction’, The Complete Works of Thomas More: Volume 4 (New Haven, 1963), pp. xv-cxxiv and Quentin Skinner, ‘Sir Thomas More’s Utopia and the Language of Renaissance Humanism’, Anthony Pagden (ed.), The Languages of Political Theory in Early-Modern Europe (Cambridge, 1987), pp. 123-157. [5] D. B. Fenlon, ‘England and Europe: Utopia and its Aftermath’, Transactions of the Royal Historical Society, 25 (1975), pp. 115-135. [6] G. R. Elton, Reform and Reformation: England: 1509-1558 (London, 1997) [7] Frank E. Manuel, Utopian Thought in the Western World (Cambridge, 1979), pp. 697-716. [8] Brendan Bradshaw, ‘More on Utopia’, The Historical Journal 24:1 (1981), pp. 1-27. [9] James Hankins, ‘Humanism and the origins of modern political thought’, Jill Kraye (ed.), The Cambridge Companion to Renaissance Humanism (Cambridge, 1996), p. 137. [10] Roger Ascham, The Schoolmaster, ed. Lawrence V. Ryan (Ithaca, 1967); Desiderius Erasmus, The Education of a Christian Prince, ed. and trans. 
Lester K. Born (New York, 1965); John Guy, Thomas More (London, 2000), p. 87. [11] Erasmus, Praise of Folly (London, 1993); John Colet, A sermon of conforming and reforming: made to the Church in London (Cambridge, 1661). [12] More, Utopia, p. 93. [13] Clare Carroll, ‘Humanism and English literature in the fifteenth and sixteenth centuries’, Jill Kraye (ed.), The Cambridge Companion to Renaissance Humanism (Cambridge, 1996), pp. 250-251. [14] More, Utopia, p. 28. [15] Ibid., p. 35. [16] Skinner, ‘Utopia and Renaissance Humanism’, pp. 134-135. [17] More, Utopia, p. 93. [18] Ibid., p. 94. [19] Bradshaw, ‘More on Utopia’, p. 3. [20] More, Utopia, pp. 77-80, 88-89. [21] Hankins, ‘Humanism and political thought’, p. 119. [22] More, Utopia, p. 20. [23] Ibid., pp. 103-106. [24] Guillaume Budé, Education of a Christian Prince in Quentin Skinner, The Foundations of Modern Political Thought (Cambridge, 1978), p. 240. [25] More, Utopia, pp. 78-79. [26] Hankins, ‘Humanism and political thought’, p. 118. [27] Guy, Thomas More, p. 213.
- A heap of masks buried in Syrian sands: The faces of Queen Zenobia and the woman behind them
In the turbulent days of the late third century CE, in the year 268, a frontier region of the Roman empire, Palmyra, crucial in the titanic Romano-Sassanid conflicts but otherwise mostly overlooked by the imperial centre, rose to prominence under the rule of a local noblewoman, Bat-Zabbai, better known as Septimia Zenobia. In a matter of years, the female ruler of Palmyra led her modest kingdom, hitherto a mere client-state of Rome, in a lightning conquest of the Roman East. Syria, Egypt, half of Anatolia, and parts of Arabia all fell under her sway whilst the rest of the empire was in disarray following the deaths of Gallienus and Claudius II, divided between rival pretender-generals, such as Tetricus in Gaul and Aurelian in Pannonia. However, a mere four years later, Zenobia, who had declared herself augusta, was decisively defeated by the latter, and her erstwhile empire returned to imperial control. Her rise and fall, so unprecedented, so spectacular and so controversial, have ensured that, though she ruled for a very short period, she was a much-discussed figure and symbol, not only during her rule but for many centuries afterwards. Her intentions in entering the chaotic fray of the ongoing Roman civil wars, her ambitions, form the cornerstone of the interpretations of Zenobia the queen by both her contemporaries and later observers and historians. Even within the Historia Augusta, for example, our main, though, as we will see, rather problematic, source on the Queen, there exist several Zenobias: a brave and manly ruler meant to highlight the faults of the disgraced Gallienus, and, at the same time, a cowardly, conniving, seductive, oriental but otherwise able woman tyrant, meant to emphasise both the ascendant Aurelian’s achievement of defeating her, and the absolute necessity for doing so. Thus, while the Historia Augusta (c. 
395) is a useful document for understanding imperial attitudes towards Zenobia (and others) and how these attitudes manifested themselves in court propaganda, the distance between ourselves and the real Zenobia is further increased. Other histories, such as those of Zosimus (c. 500), Syncellus (c. 800) and Zonaras (12th century), also mention Zenobia but are fraught with their own issues, being several centuries removed from the events and the woman in question.[1] Therefore, who the real Zenobia was, and the reasons behind her defiance of imperial authority, lie wreathed in the shadows of antiquity, further clouded by the passage of time, the fading of historical memory, the political machinations of her contemporary rivals, and the biases of historians. There exist several Zenobias, like a heap of eroded, bronze masks, half-covered by the desert sands of Syria, and in this article, I will attempt to visit them one by one, and, through alternative interpretations of the sources at hand, recover the real Zenobia and the ambitions which underpinned her rise and led to her fall. In the end, if one is to disregard the mythical, legendary and political dimensions of Zenobia, one key question emerges: was Zenobia yet another ambitious pretender at the fringes of the Empire, making a bid for the imperial throne, for herself and for her son, Wahballath, as so many had done before and would do afterwards? Or were there deeper reasons behind her defiance of the imperial centre? Part I: Zenobia in the Historia Augusta At first sight, events and appearances do not favour an alternative explanation, and the prevailing one is certainly that sponsored by the royal court of Aurelian, leading up to and after the defeat and capture of the Queen, passed down to us through the Historia Augusta. 
Zenobia was wife to Odainath (Odaenathus), who, as king of Palmyra and client of the Romans, had achieved many victories against the encroaching Persians to the East, even marching as far as their capital of Ctesiphon[2] – a loyal and dependable asset of the empire in the East. As the author of the Historia Augusta assures us, Odainath, cum uxore Zenobia, would have restored non solum orientem, but indeed the world entire[3]; great things awaited the couple. However, in typical fashion, reminiscent of Philip II’s demise seemingly on the eve of his own long-awaited campaign against Persia, tragedy struck and Odainath was assassinated by his cousin Maeonius, who was himself killed shortly after by Odainath’s enraged soldiers, before being able to testify to the conspiracy behind the murder.[4] The Historia itself first ascribes his motivations to simple envy, before moving on to claim that, ‘it is said’, Zenobia had conspired with him in order to assassinate her husband and his firstborn heir from a previous marriage, a certain Herodes, so that her own son(s), Herennianus (and Timolaus), would inherit the throne instead[5], once again reminiscent of Queen Olympias of Macedon and her desire to see Alexander III on the throne... Successful in her deceit, Zenobia seized imperial power, ‘ruling longer than could be endured by one of the female sex’, and proceeded to present her sons in imperial paraphernalia, the purple robes of Roman emperors, including them in public gatherings.[6] Meanwhile, she crushed a Roman army meant to be marching against Persia[7], and then expanded her realm to include almost the entirety of the Roman East, stopping just short of Asia Minor, at the westernmost point of her expansion[8]. 
When Aurelian, finally having consolidated the situation in the Roman West, prepared to march against her, she replied to his demand for her surrender, arrogantly and defiantly, claiming, in her supposed hubris, that she wielded the entire power of the East, expecting reinforcements from Persia and Armenia alongside her own ‘brigands of Syria’ and Arab nomads.[9] This was an oriental queen, another Cleopatra, purportedly by her own admission[10], a figure familiar to Roman audiences: a deceitful, power-mad temptress from the East. As a woman, she had risen too far above her perceived station in society, and needed to be soundly defeated by a Roman emperor who would remedy the outrageous imbalance of power, much like Augustus himself had, a few centuries earlier, with Cleopatra. Aurelian was the one to fill those shoes and, after the tide of war shifted and Zenobia was on the retreat, he made his feelings clear, according to the Historia Augusta: ‘she is fearful like a woman’.[11] Worse still, this oriental pretender was not simply a coward, but a treacherous coward. Zenobia, after having been defeated on the field, attempted to flee to none other than the Sassanid Persians, seemingly disregarding that her late husband, and to an extent she herself, had been sworn enemies of Persia up until this point, if we are to believe the author of the Historia Augusta.[12] She was, however, captured, marched in the streets of Rome in golden chains, and allowed to live ‘in the manner of a Roman matron’ close to Rome, with her children, until the end of her days.[13] This version of events, however, is not one that even the Historia Augusta itself ultimately agrees with. For this is mostly the Zenobia who appears in the section of the text devoted to Aurelian, presented thus expressly for the glorification of the emperor himself, the man who restored the integrity of the empire after the death of Claudius II. 
She was meant to be a vile, power-hungry, tyrannical easterner whose defeat was imperative for the survival of the empire, as she sought not only to carve out a realm of her own in the East, but to usurp the imperial throne itself (!) aided by hordes of oriental barbarians and brigands as well as the Roman nemesis, Persia. Aurelian could not have faced a more suitable foe if he was looking for one. Even here, interestingly, the author of the Historia Augusta, still unknown to us, reins himself in so as not to present Zenobia as too easy an enemy and thus diminish Aurelian’s triumph. Instead, he has Aurelian clarify, in his purported letter to his praetorian notarius, Mucapor, that he is not ‘merely waging a war with a woman, as if only Zenobia and with her own forces were fighting against me and not just as many enemies as if I was making war against a man’.[14] He then goes on to explain that he is essentially outnumbered and underequipped, the implication being that in this manner, even a woman could prove a worthy adversary.[15] In the Odainath chapter, Zenobia briefly appears as not only a loyal but also a supportive wife who accompanied her husband on his campaigns often enough that she came to be accustomed to the hardships of the march, the harsh desert conditions and the unrelenting sun, and ‘in the opinion of many was held to be more brave than her husband’.[16] If this is not enough of a deviation from the accusations the author of the Historia Augusta would level against Zenobia in the later Aurelian chapters, he goes on to write that she was considered ‘indeed, the noblest of all the women of the East, and […] the most beautiful’.[17] This more favourable characterisation of Zenobia arguably seems to be a by-product of the fascination and boundless admiration the historian felt towards her husband, Odainath, and the truth yet eludes us. 
The sympathetic descriptions of Zenobia continue and are elaborated upon in the chapter devoted to her, the one devoted to Emperor Gallienus and, finally, that devoted to Claudius II. Immediately at the outset of the Zenobia chapter of the Thirty Pretenders, the Historia Augusta spells out one of the reasons, or even the main reason, behind this more positive approach to the Queen, and that is that ‘while Gallienus conducted himself in the most evil fashion, even women ruled most excellently’.[18] Indeed, the author of the Historia reiterates this idea on several occasions, such as in the Gallieni chapter, where he decries the weakness and debauchery of Gallienus ‘so that even women ruled better than he’,[19] and nowhere more clearly than in Divus Claudius, where he laments that ‘things had come to such a point that, for the sake of comparison with Gallienus, I was forced to write even the lives of women’.[20] When compared to a ruler as vile and as impotent as Gallienus, whom the writer of the Historia Augusta particularly despises, even a woman, like Zenobia, could emerge as a positive ruler… In order to underline Gallienus’ incapacity as emperor, the Historia Augusta brings itself to recognise the positive elements of Zenobia as woman and as queen, before discarding them when it is time for Aurelian to shine as the better ruler. Regardless, here many lines are drawn between Zenobia and other great women of antiquity, such as Cleopatra and Dido[21], more female ‘Others’ of the Roman imagination, foreign queens with whom Rome – and its men – had become embroiled. She is described as proud and as an able stateswoman who managed to keep the East together in the tumultuous reigns of Gallienus and Claudius.[22] Aurelian echoes this sentiment in a supposed letter of his own to the senate, after having defeated and captured her, when he tries to justify granting her the ‘honour’ of being led in a triumph, as if she were one of the great barbarian warlords of old, like Vercingetorix. 
In this letter, were we to accept its authenticity, he describes her as ‘wise in counsels’, ‘steadfast in plans’, ‘firm toward the soldiers’, ‘generous when necessity calls’ and ‘stern when discipline demands’.[23] Furthermore, he, too, recognises the service Odainath and Zenobia, and then the latter alone, performed for the empire in defending it from the ever-present dangers of Persia and nomadic Arab – Saracen – invasions and raids, particularly when Claudius himself was fighting against the Goths.[24] Whether this was a begrudging recognition of a worthy foe by a plainspoken military man or another attempt to build Zenobia up so that her defeat would be all the more glorious, we cannot know for certain. Her appearance is commented upon, with great emphasis on her beauty, but this is beyond the scope of this article – as well as the knowledge of the author of the Historia Augusta – and it is not relevant to her personality or her aspirations. Like Cleopatra and other larger-than-life women in Roman history, Zenobia, by frequent implication, is often reduced to her external beauty. The Palmyrene Queen, however, was much more than her looks, by the Historia Augusta’s own admission. She was stern but also forgiving, depending on the occasion, and was careful with how she managed her financial affairs.[25] She was chaste, faithful to her husband, and also very educated, well-versed in Greek and Egyptian, and adequate with Latin, as well as knowledgeable in Greek and Roman history.[26] In private audiences, she was purported to present herself in demonstrably eastern, Persian(-ising) ways, receiving worship and deference, but in public assemblies would present herself in imperial, Roman terms.[27] More interesting than these details, though, are the ways in which the author of the Historia Augusta attempts to ‘de-feminise’ Zenobia, ascribing her good traits as ruler to her more manly characteristics. 
She possessed a vox clara et virilis, a clear and virile voice, he writes, and while she used a pilentum, a luxurious coach usually reserved for noblewomen, she also often used a horse or even walked on foot alongside her soldiers for several miles.[28] She participated in hunts, ‘with the eagerness of a Spaniard’, and drank with her generals as well as with Persian and Armenian emissaries, ‘only for the purpose of getting the better of them’.[29] Was this last practice simple military camaraderie, or a calculated move to demonstrate her authority to her subordinates by outdoing them in traditionally male (symposiac) activities? In the Gallieni chapter of the Historia Augusta, it is unequivocally stated that Zenobia did not rule ‘in a feminine fashion’ but rather ‘with the firmness of a man’.[30] Once again, one could argue that it is written thus with a dual purpose: to show that Zenobia’s achievements were not those of any woman but rather of a very manly woman, and to demonstrate Gallienus’ own lack of virility in allowing himself, in his decadence, to be bested by a woman in the art of rulership. In success and victory, Zenobia was treated more like a man, whereas in defeat, at the hands of Aurelian, she was safely returned to her traditional role as a woman: deceitful, weak and cowardly. Part II: The ‘Real’ Zenobia As stated above, the Historia Augusta, beyond the issues with its historical accuracy, is demonstrably self-contradictory on several points regarding Zenobia. In it, we get a very polarised and confused version of the Queen: brave, moderate and wise, but also deceitful, fearful, and power-hungry beyond measure. In this second part of the article, a more critical approach will be taken, and an attempt will be made to bring the real Zenobia into closer focus, as far as the historical record allows. All of us are products of the times we inhabit, and Zenobia was no different. 
She lived in a time (and a place) plagued by constant war, civil strife, disease, and political and social instability. The Sassanid Persians kept launching large-scale invasions of Syria, every time seemingly thwarted by a miracle, especially as emperors were unable to respond effectively, facing other threats in the west and north, as well as constant civil wars. Syria in particular was rocked by numerous uprisings of imperial pretenders such as Iotapianus and Mariades, as was the rest of the empire, in a period lasting from 235 to 285 CE.[31] All Odainath and Zenobia had known was an empire fraught with invasion, civil strife, economic stagnation and political upheaval, in the infamous ‘Crisis of the Third Century’. It would come to inform the policies of both as rulers. In order to restore some semblance of stability to his own realm, Odainath followed a policy of nigh unwavering loyalty to the imperial centre, moving to fill a power vacuum left behind by the impoverished civic elites of the weakened empire, who would normally assume the governorships of the provinces. By the early 250s, Odainath and his son, Herodian, had been granted the status of senators, either by Gordian III (238-244) or by Philip ‘the Arab’ (244-249), and the former ruled supreme in Palmyra as client king. In 261, Odainath helped defeat the remnants of Macrianus’ uprising in Syria, securing it for emperor Gallienus, who effectively handed him control of the entire province, now as its governor.[32] Cooperation with the imperial centre had benefited Palmyra well, and Zenobia would never forget this, especially as its ruler. By the late 260s this relationship was flourishing, but like a candleflame burning brightest before dissipating. Emperor Gallienus, or a powerful faction in his court, had long regarded Odainath’s growing power with suspicion, but was unable to move against him with new threats in western Europe and the Balkans. 
In 268, they decided to act, possibly collaborating with a faction of Palmyrene elites who resented the ever-increasing power of Odainath[33], and the king, along with his son and heir, was assassinated. Members of this anti-Odainathian faction included not only the current emperor, Gallienus, but also his cavalry commander Claudius, his praetorian prefect Heraclianus and another officer, Aurelian.[34] Soon enough, these three murdered the emperor, with Claudius II usurping the imperial throne and appointing Aurelian as his own cavalry commander.[35] The bloody wheel of third-century Roman politics kept turning. Though the Historia Augusta, fiercely devoted to repeating Claudius and Aurelian’s court propaganda, implies that it was during Gallienus’ reign that an army was sent against Persia, to be intercepted by Palmyra, in late 268 under Heraclianus, both it and Zosimus mention the latter as being in Milan at the same time, assassinating emperor Gallienus.[36] Thus, the likeliest scenario is that the assassins Claudius and Aurelian, confident that they had crippled Palmyrene power with Odainath’s death, sent an army under their co-conspirator, Heraclianus, to Syria in order to reclaim it from the distracted Palmyrenes, under the pretence of having dispatched it to march against Sassanid Persia.[37] As the Historia Augusta itself mentions, this army was thoroughly defeated by an ascendant Zenobia[38], who had seen through the ploy. 
Therefore, it is very likely that at this point imperial propaganda was mobilised to recast Odainath from a power-hungry would-be eastern dynast into a loyal son of Rome who was assassinated by a deceitful and ambitious wife in order to place two of her children on the throne, disregarding the fact that Herennianus and Timolaus likely never existed.[39] Marching against her now would be an act of righteous imperial vengeance against an oriental usurper, and the murder of Odainath could never be ascribed to Gallienus and his successors. Zenobia, however, did have children. One, we know for certain, and this was Wahballath, in whose authority she would assume the reins of power in Palmyra, but there were potentially others. She had seen how ruthless the imperial court could be, murdering her husband and his son, and indeed she would have known that it was likely only because she and her son had not been considered a threat that assassins had not been dispatched against her as well. Now, after taking on the mantle of Palmyrene leadership, she would have had no doubts that the same fate would await her and her children if she failed to take control of the situation. Zenobia was a mother, and it was only through becoming queen of Palmyra that she could even hope to protect her children. As Odainath’s wife and the mother of his remaining children, Zenobia was swiftly recognised as the new authority in Palmyra, and she would likely have wasted no time at all, internally, in eliminating the Palmyrene conspirators who had murdered her husband.[40] Abroad, however, besides ceasing the minting of coins with Gallienus’ face in Antioch, she retained all semblances of loyalty to the imperial centre, and months before the emperor himself was murdered by his officers, she sent an embassy, charging a member of his household with Odainath’s murder, without any real effect.[41] Once Claudius was emperor, no more embassies seem to have been sent. 
Even so, between 268 and 270, Zenobia did not break away from Roman authority, content to govern a smaller territory than her husband had, likely so as not to antagonise the imperial centre by reclaiming it, attempting to stabilise the borders with Persia and the nomadic Arabs, and granting her son the same titles his father had possessed: ‘King of Kings’ – a common title in the eastern part of the empire – and epanorthotes (Latin: restitutor).[42] Coins began being minted in Antioch once more, now with the likeness of Claudius II, not Zenobia nor Wahballath, whom she endeavoured to present as yet another Roman governor, successor to his father.[43] This, arguably, seems a far cry from the actions of a power-mad usurper, and more an attempt by a careful and politically astute woman not to provoke powers greater than hers, in order to protect her realm and her children. Despite her efforts and her displays of loyalty, the imperial court of Claudius II seemed implacable: it would not recognise Wahballath as the inheritor of his father’s powers and authority, and would continue to undermine Palmyra until such time as a more active, military intervention was possible, given Claudius’ own problems in the West.[44] Predictably, Zenobia could not afford to wait, especially after the near miss of Heraclianus’ expedition against her, sometime in 269 or 270. In spring of 270 CE, she launched a wide-ranging offensive with the goal of seizing the wealth of the rest of the Roman East as well as strategic positions against a potential invasion. 
However, even so, she had not yet given up on the possibility of a reconciliation with Rome, for the sake of her people and her children: she minted coins with Wahballath, but in a context demonstrably inferior to the emperor, wearing a laurel wreath as opposed to the emperor’s radiate crown, and avoided assuming the titles of augustus and augusta for her son and herself.[45] Indeed, she was trying to force Claudius II into negotiations with her, by creating another front for an emperor already facing trouble elsewhere. In her attempts to do so, she never sought to undermine the Roman state itself, and, after seizing Egypt in 271, she seems to have continued dispatching the regular grain shipments to Rome[46], when a very Machiavellian manoeuvre would have been to starve the capital to create further pressure on the emperor. Regardless, her incursions seem to have brought about the opposite result, with both Claudius and his successor, Aurelian, becoming more entrenched in their opposition to Palmyra. Still, Zenobia persisted for the better part of her reign, and even when she was at the height of her power, with a realm encompassing the majority of the Roman East, she maintained that she was acting on behalf of the emperor and serving the Roman state.[47] She refused to assume any titles for herself or her son that would imply an intention of usurping the imperial throne, as so many before her had done, and continued hoping for a diplomatic solution that would return things to the Odainathian status quo. Hardly an ambitious pretender, as the Historia Augusta would have it, but rather one for whom an expanded and richer realm was not the end but the means to protect herself and her family. By 272, however, Zenobia had likely admitted the failure of this approach, as it is then that we begin to see the titles of augusta and augustus being used by her and her son.[48] The only apparent way to secure her position was direct confrontation with the imperial centre, now in the hands of Aurelian. 
Presenting herself as Augusta/Sebaste, Zenobia began minting coins depicting herself as an austere, venerable Roman woman, in the likeness of other imperial women of the time, and her son as Caesar/Kaisar and Augustus/Sebastos, a young emperor, all accompanied by Roman gods such as Jupiter, Venus and Victory on the reverse.[49] Once again, this could not be further from the image of the mystifying Oriental temptress posited by the Historia Augusta; Zenobia was presenting herself as a Roman empress of piety and virtue. For all her initial successes, she was defeated quite rapidly, in two pitched battles. This is perhaps an indication of another tragic aspect of her attempts to protect her people and her children: even with most of the Roman East under her sway, and despite Aurelian’s supposed claims to the contrary, Zenobia was not able to mobilise an army powerful enough to face the concentrated might of the rest of the empire, and, despite knowing this, she attempted it regardless. After being captured, as Andrade very astutely points out, she fed into Roman biases of female frailty and weakness of spirit, and led Aurelian to believe that she had herself been manipulated by certain elements of her court.[50] By giving them up, she convinced the emperor, and managed to save herself and her children, certainly Wahballath. This is likely where the Historia Augusta’s claims of selfishness, cunning, and cowardice are rooted, but, in an era where entire families were put to the sword at the smallest indication that they could prove a threat to the imperial throne, Zenobia made the most pragmatic choice available and secured an outcome much better than that of most contemporaries in her position. Part III: Conclusion Thus, Zenobia, far from a ruthless and ambitious pretender and a deceitful oriental temptress, was a very complicated woman whose true character and entire breadth of ambition we can never know. 
At the centre of all her actions, however, was a deep desire to protect her people and her children, and in 270, she came to the conclusion that territorial expansion beyond the traditional confines of Palmyra was the best way to do this. A faction of murderous usurpers in Rome and an increasingly unstable imperial state were not able to offer her protection against the Persian threat to the East, and themselves posed an increasing danger to her. Her vulnerable position at the edges of the empire, surrounded by enemies, meant that her choice was potentially no choice at all, especially after the army sent to reclaim her late husband’s territories. In a wider context, Zenobia’s stance could perhaps be interpreted as an early indication of the later troubles of the empire, especially in the increasingly unstable West. Local elites, to whom the administration and protection of the provinces had been entrusted, gradually saw support from the imperial centre dwindle, and came to see the emperorship, and the man occupying it, less as a guarantor of stability and more as a source of the opposite, as the object of constant civil wars and the draining of resources to protect borders elsewhere. The desire for independence, still within a Roman world, was arguably not so much ambition-driven – though it very well could have been, for many – as reality-driven: the centre was no longer able or willing to protect the periphery, and the periphery, risking invasion, impoverishment and the likelihood of ending up a battleground in the endless civil wars of the later empire, chose a different path, either aligning itself with the invading barbarians or opting for de facto independence with a nominal recognition of the centre’s authority, as centuries-long interdependence had created bonds that were nigh impossible to break. 
Returning to Zenobia, it is preposterous, I would argue, to assume that she ever coveted the imperial throne itself; rather, she desired a return to this older understanding between imperial core and periphery, whereby both emperor and elites could prosper and benefit mutually, as her husband had done with Gallienus and his predecessors. She did not desire a complete secession from the empire, nor did she envisage Palmyrene rule over the Roman East. Until very late, she recognised where imperial power lay, even as its representatives desired her destruction, and presented herself in distinctly Roman contexts, even as she assumed the title of empress. She was, however, like other powerful women in Roman history, a very easy target for the propaganda machine of her enemies, which sought to portray her as a foreign, oriental, power-mad despot, in league with the hated Persians, who had murdered her husband in order to make a bid for the imperial throne. Who was Zenobia in reality? Zenobia was a loyal wife who did not shy away from her role as consort, sharing in the same hardships and facing the same challenges as her husband. As a very intelligent and politically adept woman, she assumed the mantle of leadership after his assassination and took control of the situation immediately. Pragmatic to the very end, she did not succumb to ambition, even after consecutive victories and the exponential growth of her empire; she was diplomatic and careful, seeking reconciliation instead of more bloodshed. She was a pious, educated, wise, responsible ruler who seemed to genuinely care for the survival and prosperity of her people. She was also a very brave woman, leading her own armies, even as she was likely cognisant of the difficulty of her endeavour against the legitimate emperor Aurelian. 
Finally, and perhaps above all, she was a mother who sought to protect her children and ensure their survival in a world which seemed to be collapsing before her very eyes, and her actions to the very end were to that effect. We will likely never know who the real Zenobia was, relying as we do upon unreliable histories, the accounts of her enemies, and the few shreds of material evidence left to us, such as coinage and inscriptions, but hopefully this article has been able to provide a much clearer picture of what could have lain beneath this heap of bronze masks, buried in Syrian sands. As with most pieces of writing on ancient Palmyra and Zenobia after 2011, I would like to join in bringing the reader’s attention to the horrific cultural crime which took place when, in the midst of the Syrian civil war, ISIS terrorists destroyed large parts of the ancient city of Palmyra and looted its museum. I would also like to bring the reader’s attention to the sacrifice, for ancient history and culture, of Khaled al-Asaad, the archaeologist who looked after Palmyra for decades and who was tortured and murdered by ISIS terrorists after refusing to reveal where precious antiquities were hidden. Xenofon Kalogeropoulos is currently pursuing a DPhil in Ancient History at the University of Oxford (St. Anne's College) Notes: [1] Nathanael J. Andrade, Zenobia: Shooting Star of Palmyra (Oxford: Oxford University Press, 2018), p. 2. [2] David Magie (trans.), David Rohrbacher (revised by), Historia Augusta, Volume III, Loeb Classical Library 263 (Cambridge, MA: Harvard University Press, 2022), p. 105, Odenatus, section 15. [3] Ibid., Odenatus, p. 106. [4] Ibid., Odenatus, p. 109, section 17. [5] Ibid. [6] Ibid., Herennianus, p. 133. [7] Ibid., Gallieni Duo, p. 43, section 13. [8] Andrade, Zenobia, p. 1. [9] Historia Augusta, Divus Aurelianus, p. 245, section 27. [10] Ibid., Zenobia, p. 137. [11] Historia Augusta, Divus Aurelianus, p. 243, section 26. [12] Ibid., p. 
247, section 28. [13] Ibid., Zenobia, p. 143. [14] Ibid., Divus Aurelianus, p. 243, section 26. [15] Ibid. [16] Ibid., Odenatus, pp. 105, 107. [17] Ibid., p. 107. [18] Historia Augusta, Zenobia, p. 137. [19] Ibid., Gallieni Duo, p. 51. [20] Ibid., Divus Claudius, p. 155. [21] Ibid., Zenobia, p. 137. [22] Ibid. [23] Ibid., pp. 137, 139. [24] Ibid. [25] Ibid., p. 141. [26] Historia Augusta, Zenobia, pp. 139, 141. [27] Ibid., p. 141. [28] Ibid. [29] Ibid. [30] Ibid., Gallieni Duo, p. 43. [31] Andrade, Zenobia, p. 112. [32] Andrade, Zenobia, pp. 129-133. [33] Ibid., pp. 140-142, 145. [34] Ibid., p. 149. [35] Ibid. [36] Pat Southern, Empress Zenobia: Palmyra’s Rebel Queen, (London: Continuum, 2008), p. 89. [37] Maurice Sartre, ‘The Arabs and the desert peoples’, in A. K. Bowman, P. Garnsey and A. Cameron (eds.), The Crisis of Empire, AD 193–337, Cambridge Ancient History Vol. 12, 2nd edn., (Cambridge: Cambridge University Press, 2005), p. 514. [38] Historia Augusta, Gallieni Duo, p. 43, section 13. [39] Andrade, Zenobia, p. 119. [40] Ibid., p. 165. [41] Ibid., p. 166. [42] Ibid., p. 172. [43] Hélène Huvelin, “L’atelier d’Antioche sous Claude II,” Numismatica e antichità classiche 19, (Milan: Quaderni Ticinesi, 1990). Roger Bland, “The Coinage of Vabalathus and Zenobia from Antioch and Alexandria,” The Numismatic Chronicle 171, (London: Royal Numismatic Society, 2011), pp. 138-139. [44] Andrade, Zenobia, p. 173. [45] Southern, Rebel Queen, pp. 78, 87. [46] Ibid., p. 115. [47] Annie Sartre, Maurice Sartre, Zénobie: de Palmyre à Rome, (Paris: Perrin, 2014), pp. 91-92. Andrade, Zenobia, p. 178. [48] ILS 8924=Bauzou and IGR 3.1065=CIG 4503b=OGIS 647, in Andrade, Zenobia, p. 191, also Appendix 3, 4f, 4d. [49] Andrade, Zenobia, pp. 195-196. [50] Ibid., p. 207.
- Native Americans and the ‘Plan for Civilisation’, c.1783 - 1830
Following the American Revolution, the new United States’ desire for more land was at the forefront of policy. Although the Indigenous peoples of North America appear much less frequently in post-Revolution historiography, they were actually at the centre of this issue. The so-called ‘empty land’ that the ‘new’ Americans coveted was historically Native American territory, meaning that the expansionist policies were ultimately dispossessing ones. Between the end of the Revolution and the Indian Removal Act of 1830, policy makers faced the challenge of placating potentially hostile Native nations while simultaneously securing cessions of their land. In this period, the solution was to be a ‘plan for civilisation’ that would ultimately turn the occupants of United States territory into one nation. Thomas Jefferson envisioned the day when ‘we shall all be Americans’ - or, when all abided by the white settler way of life.[1] Naturally, many Native American nations reacted to such ideas with hostility, resisting attempts to deprive them of their ancestral homelands, and sometimes went to war to defend their land rights. However, this was not a universal response. Some nations may have been ‘hostile’ in their attitudes, but felt forced to comply in their actions, whilst some actually worked with the US’s ‘civilising’ efforts. Neither the Indigenous nations themselves, nor their reactions to ‘civilisation’, were monolithic. American thinking with regards to ‘civilising’ the Native Americans grew largely from the ideas of the Scottish Enlightenment, which perceived human difference to be the product of environment and experience. Many white European Americans believed that as Native Americans had not experienced the benefits of ‘modern’ society, they were kept in a state of savagery. It was thus the duty of the settlers to ‘civilise’ the Indigenous populations. 
This enabled them to justify the pursuit of Native land: as Native Americans became more ‘civilised’, they would no longer feel compelled to hunt, which would see them selling their excess land and living as farmers on smaller plots. The United States would aid this transition to yeoman farming by providing agricultural equipment and expertise. To the United States, therefore, there was only one viable definition of civilisation, and it was diametrically opposed to traditional Native American livelihoods and structures. However, philanthropy was not the only, or arguably even the true, motive. Robbie Franklyn Ethridge claims that the ‘hidden hand behind the plan for civilisation was United States expansion’.[2] Native Americans who held onto their ancestral homelands were obstacles to the US’s manifest destiny. The ‘civilising’ plan would surmount this by converting Native Americans to ‘civilised’ society while simultaneously solving American land hunger. The plan also grew from the reservations of George Washington’s secretary of war, Henry Knox, with regards to using force: military action was more expensive and ‘more convenient than just’. Thus, the official rhetoric revolved around saving the Indigenous peoples from extinction in the face of a superior race, but the enactment would see their eventual disappearance through assimilation. Jefferson encapsulated this idea, claiming that ‘the ultimate point of rest and happiness for them is to let our settlements and theirs meet and blend together, to intermix, to become one people’.[3] Becoming ‘one people’ would effectively eradicate Native societies - this ‘intermixing’ would eventually ensure white hegemony. It is therefore of little surprise that many nations resisted the plan. Throughout the 1780s, the new United States was determined to treat Native nations as a defeated people, believing that it had a right to Indigenous territory. 
This was because many Native Americans had allied with the now-defeated Great Britain during the Revolution, and were thus treated as a vanquished foe alongside the British. Native nations were consequently faced with many treaties that demanded great tracts of land, such as the 1784 Treaty of Fort Stanwix, the 1785 Treaty of Fort McIntosh, and the 1789 Treaty of Fort Harmar. Many deemed these treaties fraudulent: Native delegates who were present refused to ratify them, and many nations were not represented at all. This caused much resentment among those who did not recognise the legality of these treaties, and many were not prepared to capitulate to their terms. In the 1790s, resentment at the terms of such treaties, as well as the frequent encroachments of white settlers onto their territory despite the massive cessions, led to the formation of the Western Confederacy. This was a loose alliance of Native Americans of the Great Lakes region, and included the Miamis, the Delawares and the Six Iroquois Nations. Despite their differences, they were united in opposing incessant US expansionism. Proportionally, the Battle of the Wabash in 1791 remains the worst defeat in American military history: approximately 700 US troops out of 2,000 were killed, along with 100 women and children who were following the army.[4] There was clearly a strong atmosphere of hostility amongst these varied nations. This spirit continued throughout the early 1790s, with hundreds of Cherokee, Creek and Shawnee warriors meeting in Tennessee to consider attacking a local white town in retaliation for settler encroachments. 
According to an eyewitness, they performed a war dance around an American flag and shot at it, in a flagrant display of animosity towards the US.[5] The resistance culminated in the 1794 Battle of Fallen Timbers, in which the Western Confederacy was defeated and presented with little option but to agree to the Treaty of Greenville the following year, which reiterated the terms of the 1780s treaties. Following this, former members of the Western Confederacy, including the Miami leader Little Turtle, pledged to work with the federal government. This suggests that their hostilities only ceased due to pragmatism: they had already exhausted resistance, and capitulation was their only real option. It is very likely that resentment still bubbled under the surface, despite the appearance of acceptance. Hostility could be overt, or concealed in compliance, but it certainly remained strong. Resentment of the ever-expanding frontier did not cease as the nineteenth century began. Another major example of Native American hostility came in the form of the War of 1812, which had its roots in an 1809 resistance movement and saw fighting begin in 1811. The Shawnee nation led a new alliance to push back white expansion in the Northwest, spearheaded by the brothers Tenskwatawa and Tecumseh. Tenskwatawa, also known as the Shawnee Prophet, claimed that the white Americans were evil and created by the Great Serpent. He asserted that the Indigenous nations should reject their influence and push the whites back, possibly even back to Europe. Tecumseh, who would be the military leader, undermined white rhetoric by retorting ‘how can we have confidence in the white people? When Jesus Christ came upon the earth, you killed him and nailed him on a cross’.[6] This indicates that many did not trust the ‘civilising’ plan, and suspected its ulterior motives: like Jesus, Native peoples could find themselves victim to both physical and societal violence. 
Despite the claims of William Henry Harrison, the governor of Indiana Territory and later President, that a rapidly expanding population was in desperate need of land, the Shawnees had ‘done their homework’ and scouted the land, according to Nicholas Guyatt.[7] Tecumseh asked Harrison why so much of the territory had no settler communities, stating ‘you were placed here by Government to buy land when it was offer’d to you, but not to use persuasion and threats to obtain it’.[8] The Shawnees and their allies launched their attack before dawn while American troops were camped near Prophetstown for negotiations. Despite their near victory and the heavy losses suffered by the US troops, the Americans eventually won this battle, as well as the broader War of 1812. Harrison set about destroying Native towns and burning crops, indiscriminately targeting towns where the Indigenous people had supported the ‘civilising’ plan. For example, he destroyed the town where Little Turtle was buried, enacting revenge on friend and foe alike. Jefferson wrote to Alexander von Humboldt that the Natives’ actions had left the US to ‘pursue them to extermination, or drive them to new seats beyond our reach’.[9] Hostility from some Native Americans was so strong that it elicited a severe and indiscriminate reaction from the government, even against those who were more willing to accept the plan. As this suggests, not all Native Americans were necessarily hostile to American ‘civilising’ efforts. Sometimes, they worked with their plans and seemed to welcome the education that was offered. For example, Benjamin Hawkins, US agent to the Creeks, encouraged parents to send their daughters to schools to be tutored in cloth making as part of the plan to boost Indigenous manufacture and self-sufficiency.[10] In 1804, Gideon Blackburn, a Presbyterian missionary, founded a school for Cherokee children in southeast Tennessee. 
Within a year, he delighted in showing the state governor their ‘progress’: ‘twenty five little savages of the forest’ now sat ‘neatly dressed in homespun cotton’.[11] Their parents were clearly not altogether against them receiving a European American education. Likewise, in 1825 the Choctaw Academy was founded as a collaboration between the Choctaw nation and the federal government, to provide schooling for Choctaw youth (amongst other nations). Native Americans would also, on occasion, attend American colleges: the Delaware George Morgan White Eyes attended Princeton.[12] Nevertheless, it should not be assumed that Native Americans went along with these examples of the ‘civilising’ plan with the aim of abandoning their cultural heritage, as many whites hoped they would. Christina Snyder argues that at these schools, ‘students added or adapted new knowledge to their own deep intellectual traditions’.[13] Whilst they may not have been completely hostile to specific parts of the plan, they did not wish to become white people. They might be open to new knowledge or practices, but they would resist attempts to obliterate their way of life. Attitudes to the plan were therefore deeply complex and could not always be categorised as binary opposition or acceptance. As well as working with the educational aspect of the ‘civilisation’ plan, some Native Americans were willing to go along with its agricultural and social aspects. In 1789, the Cherokees asked Washington to honour his promise to send them an agent, declaring ‘let there be a good man appointed, and war will never happen between us’.[14] There was clearly an attempt at peaceful coexistence: there was perhaps a recognition that as the Americans seemed to be here to stay, strong hostility was futile. The enterprises that were encouraged by these US agents often yielded promising results, suggesting that the Indigenous people were on board with the plans. 
Four years after Benjamin Hawkins began his residency with the Creeks, he reported that Creek women and girls had made enough cloth for 300 people to wear, and that there was sufficient surplus to barter for livestock.[15] In 1802, he reported to Congress that the Creeks were raising cattle, sheep and horses, while similar ‘progress’ could be seen amongst the Cherokees, Chickasaws and Choctaws. Native delegates themselves were making formal requests to Congress for agricultural equipment and supplies, and had spent $1000 on axes and hoes, instead of on ‘rum and geegaws’.[16] Similarly, the Quakers, who were heavily involved in the ‘civilisation’ plan from a missionary angle, reported that 3,000 bushels of corn had been sold by the Shawnees in 1813 as a result of their agricultural reforms.[17] As policy makers and agents had hoped, Indigenous people were taking initiative and their efforts were being rewarded with tangible results. This indicates that sometimes, the Native Americans complied with ‘civilising’ efforts, and seemed determined to make the most of the opportunities they were being given. In his analysis of the Creeks’ reactions to the ‘civilisation’ plan, Ethridge notes that the Creek women were its strongest advocates and held Hawkins in high regard, a regard he reciprocated. Hawkins included them in councils and invited them to dinners, even supporting the idea of intermarriage, at least for a while, hoping to find a wife for himself amongst the Creeks. Not only did interest in the plan depend on the nation, but it could also be viewed differently by groups within a nation. In conclusion, different Native nations responded to the US’s ‘civilising’ efforts with different degrees of hostility, or on occasion, acceptance. 
Sometimes, they would express their grievances on the battlefield, as in the early 1790s and in the War of 1812, whilst at other times, they would appear to put their animosity aside; whether this was because they could realistically do little else, or because they genuinely welcomed attempts to bring them ‘civilisation’, is debatable and depends on each particular circumstance. Whilst some were open to the ‘civilisation’ plan and actively engaged in its implementation, it seems that no Native American nation was fully supportive of the plan; how could they be, if at its core it meant the eradication of their sovereignty and cultural identities? They may have appreciated some cross-cultural aspects of the plan, but usually, they aimed to combine these with their own traditions and structures. Ultimately, as Colin G. Calloway writes, ‘this was their homeland, but they had no desire to become part of the new nation that was being built on it’.[18] Whether they resisted or welcomed parts of the ‘civilising’ plan, they had no intention of losing their identities. Chantelle Lee wrote this essay while in her final year of a BA in History at Cambridge University (Sidney Sussex College). She has now graduated from Oxford University (Mansfield College) with an MSt in US History. Title when assigned: How hostile were Native American societies to US 'civilising' efforts between 1783 and 1830? Notes: [1] Nicholas Guyatt, Bind Us Apart: How Enlightened Americans Invented Racial Segregation (New York, 2016), p. 144. [2] Robbie Franklyn Ethridge, Creek Country: The Creek Indians and their World, 1796-1816 (Chapel Hill, 2003), p. 15. [3] Guyatt, Bind Us Apart, p. 93. [4] Podcast: Ben Franklin’s World, episode 29: Colin Calloway, ‘The Victory with No Name’, https://www.benfranklinsworld.com/029/. [5] Guyatt, Bind Us Apart, p. 91. [6] Ibid., p. 105. [7] Ibid., p. 104. [8] Ibid. [9] Ibid., p. 108. [10] Ethridge, Creek Country, p. 192. [11] Ibid., p. 93. [12] Colin G. 
Calloway, The Indian World of George Washington: The First President, the First Americans, and the Birth of a Nation (New York, 2018), p. 342. [13] Christina Snyder, ‘The Rise and Fall of Civilizations: Indian Intellectual Culture during the Removal Era’, Journal of American History, Vol. 104, No. 2 (September 2017), p. 390. [14] Calloway, The Indian World of George Washington, p. 337. [15] Ethridge, Creek Country, p. 192. [16] Guyatt, Bind Us Apart, p. 92. [17] Lori J. Daggar, ‘The Mission Complex: Economic Development, ‘Civilization’ and Empire in the Early Republic’, Journal of the Early Republic, Vol. 36, No. 3 (Fall 2016), p. 480. [18] Calloway, The Indian World of George Washington, p. 325.
- Arab Strategy in the 1948 War
On the 29th November 1947, the UN General Assembly voted to partition Palestine. Following decades of struggle between its Jewish and Arab populations, a plan with vague borders, non-defensible boundaries and a lack of contiguous territory brought with it an impetus towards violence from the outset.[1] In examination of its three phases, this essay offers a strategic analysis of Arab activity in the 1948 war that followed. Rejecting the traditional Zionist narrative of a battle between the Jewish David and Arab Goliath, attention will be drawn to the work of Israeli revisionists like Avi Shlaim in demonstrating the inaccuracy of the notion of a ‘monolithic’ Arab force.[2] Focusing on aims, strategy and tactics, this essay seeks to demonstrate two important points. First, that the conflicting national interests of the Arab states mean that the 1948 war might just as accurately be considered an inter-Arab conflict as one between a single united entity and the state of Israel. Second, and perhaps more significantly, throughout every phase of the war, it was the Palestinians who suffered most acutely. Not only did the initial civil conflict mark the loss of their homeland, but their hopes that wider Arab intervention in May 1948 would bring salvation also failed to come to fruition. While Israeli and Arab scholarship tend towards the terms ‘The War of Independence’ and ‘The 1948 Palestine War’, for the roughly 800,000 Palestinian refugees it created, ‘al-Nakba’ is the most accurate description available. 
Phase 1: November 1947 – May 1948 
The war began as a civil, inter-communal conflict between the Palestinian and Jewish populations. With both parties triggered into action by the partition resolution, the months that followed were characterised by fierce, bitter combat between two communities at war over what both believed to be their rightful homeland. It is on the basis of this conviction that Palestinian aims for the conflict were built. 
Simply put, ordinary Palestinian citizens sought to prevent the establishment of an Israeli state, ensuring the safety of their people. By actively combatting the decisions of the UN General Assembly, they aimed to prevent encroachment and ensure the Jewish population remained a minority, both in number and land ownership. For the Palestinian Arabs involved in the war’s earliest phase, this was a matter of survival. While the term ‘strategy’ might appear something of a misnomer when discussing military activities at this stage, even with limited weaponry (arms had been confiscated by the British during the 1936-39 Revolt) and a weak societal structure,[3] Palestinians had begun to engage. From a military perspective, the roughly 7,000-strong Arab force (consisting of Palestinian irregulars led by Abd al-Qadir al-Husayni and volunteers from the Arab Liberation Army)[4] implemented a strategy based on intercommunal violence and the limitation of the Jewish population. The rudimentary military capabilities of the Palestinian forces meant their strategy translated into guerrilla-style tactics targeting Jewish areas of cities with mixed populations.[5] Lacking the capacity for large-scale offensives on military bases and the like, ordinary Palestinian citizens instead took to the streets to launch close-contact attacks on their Jewish neighbours. Better equipped than the irregulars, the ALA instead focused on implementing a strategy of limiting Zionist forces through blockade and isolation, targeting the main roads around the city of Jerusalem and cutting off supplies to the 100,000 Jewish residents inside.[6] From a political perspective, the Palestinian strategy was more nuanced. Characterised by the creation of institutions and efforts to tarnish Israel’s image in order to target wider Arab sympathies, Palestinians began to take political steps towards ensuring their long-term success. Their first tactic involved the establishment of committees. 
Following the partition resolution, citizens in cities such as Lydda and Tiberias had started to organise in preparation to defend their homeland. From Local Security Committees designed to guard against Jewish thieves, to more localised Neighbourhood Committees responsible for patrols and barricades, Palestinian communities created organisational bodies to bolster their position in the conflict.[7] The most significant tactic of their political strategy, however, involved the use of the press. On April 9th 1948, 130 fighters from the far-right Zionist groups Irgun and Lehi murdered roughly 107 Palestinian Arabs in what would become known as the Deir Yassin massacre.[8] Beyond an example of Israeli violence, the significance of this event lies in the way it was portrayed by Palestinian media. In broadcasts devoted to exaggerating the brutality (including vivid accounts of ‘atrocity’ and ‘rape’),[9] Palestinians were successful in capturing Arab sympathies across the Middle East, effectively implementing their political strategy of garnering support in the hope of sparking intervention. The problem lay in the propaganda’s impact on Palestinian morale. As citizens were greeted with unflinching accounts of Zionist brutality, fear spread. By the time the Arab states finally invaded Israel in May 1948, 200,000 Palestinians had already fled, leaving their limited forces weaker than ever before.[10] 
A Turning Point – Israeli Declaration of Independence: May 14th 1948 
On May 14th 1948, the British Mandate ended and Israel declared its independence. The following day, Egypt, Transjordan, Syria, and Iraq invaded, and the civil conflict was transformed into an interstate war. While it was Israel’s new status that sparked this shift in dynamics, the motivations behind the Arab invasion were numerous. The events of Deir Yassin had sparked a turning point in Arab sentiment. 
With King Abdullah’s announcement promising ‘terrible consequences’ if similar incidents were to occur,[11] and growing public demand for action within neighbouring Arab states, Zionist violence could no longer be tolerated. Considered alongside Israel’s increasingly offensive strategy (it seemed that with the implementation of Plan Dalet, without intervention the Haganah’s expansion would only continue)[12] and the progressively desperate situation of the Palestinian forces, Israel’s declaration of independence is best understood as the final trigger for an invasion that had long been imminent. 
Phase 2 - Interstate Conflict: May 15th 1948 – March 1949 
While the invasion of regular armies from Egypt, Transjordan, Syria, Lebanon and Iraq (with additional contingents from Saudi Arabia and Algeria) did indeed transform the war into an interstate conflict, clarification must be made regarding the accuracy of these phases from a Palestinian perspective. Wider Arab intervention had little impact on the aims and strategy of the ordinary Palestinian fighter. Still seeking to reduce the number of Jews in Palestine and now to resist the newly established Zionist state, their conflict remained one grounded in intercommunal fighting for home and survival. While the Palestinian forces were united in their aims, the same cannot be said for the Arab states. On the 15th May 1948, the Arab League addressed the United Nations in a cablegram outlining the logic behind the invasion. Citing principles like the Palestinian ‘right to set up a Government’ and the need to re-establish ‘peace and order’,[13] their stated aims appeared morally aligned with those of their Palestinian counterparts. However, while they shared a rhetorical commitment to the liberation of Palestine, and most were united in plans to oppose the new state of Israel, the reality of Arab aims was quite different. 
King Abdullah saw the 1948 war as an opportunity to increase his territory via the annexation of the Arab parts of Palestine. Hoping to raise his standing in the region, it was national self-interest, as opposed to a genuine dedication to a Palestinian state, that motivated him.[14] Acutely aware of Transjordan’s territorial aspirations, both Syria and Egypt entered the conflict with the intention of checking their neighbour’s ambitions. King Farouk and President Shukri al-Quwwatli were concerned not only by the prospect of a Jordanian-controlled Arab Palestine, but also by the possibility of Abdullah using this success to realise wider ambitions of a Greater Syria. Acting against the advice of their own governments, Syria and Egypt’s most important aim was thus maintaining the balance of inter-Arab politics in order to prevent encroachment on their own territory and authority.[15] While both Lebanon and Iraq are often relegated to the footnotes in discussions of the Arab-Israeli war (their military capacity and subsequent involvement in the conflict was limited), they too recognised the need to temper Abdullah’s ambitions.[16] It is with this disparity in mind that attention must be drawn to the inaccuracy of the traditional Zionist David and Goliath narrative. In sharp contrast to the notion of a monolithic Arab entity, the 20,000-25,000-strong Arab force was both deeply divided in its ambitions and vastly different in its capabilities.[17] As the conflict continued, not only did Palestinian aims fall victim to the national interests of their stronger Arab neighbours, but the friction between states also became increasingly apparent. Although the Arab armies did not openly engage one another in combat, their hostility and incompatible aims mean the 1948 war can just as accurately be described as an inter-Arab conflict.[18] The overarching Arab military strategy during this phase was based on the basic principle of divide and conquer. 
While Palestinian irregulars maintained their approach of localised intercommunal violence, the invading Arab states sought to overwhelm Israeli forces in order to establish a superior position for negotiations and expansion. From a tactical perspective, this meant a process of attacking, annexing and occupying Israeli settlements from different angles. The realisation of this overarching strategy, however, differed between states. Adopting a tactic of swift attacks on key Jewish settlements like Nirim, the Egyptian army was tasked with invading the southern front and the Negev.[19] Syria adopted a similar approach on the northern front. Occupying three distinct enclaves totalling 66.5km, the Syrian regular army targeted indefensible strips of land east of the Jordan River and Lake Tiberias that it knew could swiftly be obtained.[20] The same military tactics of targeted assaults and attempted occupation were employed by Lebanon and Iraq in their respective engagements at Malikiyya in June 1948 and the approach towards Netanya. The progress of these regular Arab armies, however, was somewhat minimal. While not entirely devoid of success (take Iraq’s victory at the Battle of Jenin as an example), in most cases their advances were limited. With the size of the Iraqi army forcing it to take on a defensive position soon after its success,[21] Lebanon’s small force preventing much involvement beyond its half-day battle at Malikiyya,[22] Quwwatli’s fear of leaving Syria vulnerable to Abdullah’s ambitions lending itself to a policy of cautious deployment (only 2,500 Syrian troops invaded),[23] and the Egyptian army’s limited experience as a parade-ground force,[24] these forces hardly represented a Goliath-like entity. A highly professional army led by well-trained officers, the Jordanian Arab Legion is perhaps the exception. Targeting the areas Abdullah hoped to annex in line with his aim of territorial expansion, it focused its efforts on the West Bank and East Jerusalem. 
Adopting tactics of occupation in key strategic positions like the Latrun Monastery and house-to-house fighting in Jerusalem, Commander Glubb Pasha demonstrated the Legion’s substantial military capabilities in fierce engagements with Zionist forces, the deployment of extensive artillery, and success in pushing Jewish citizens from the Arab areas of Jerusalem.[25] On the basis of this potential for success, the question arises as to the logic behind the Arab defeat in the 1948 war. The answer lies in Transjordan’s political strategy. Motivated by his desire for expansion, King Abdullah adopted a tendency to play the field for selfish return. While the nature of Zionist-Hashemite collusion has been debated by historians, even the likes of Efraim Karsh, who is quick to point out the superficial nature of these communications, do not deny their existence. In a secret meeting between King Abdullah and Golda Meir on the 17th November 1947, the two discussed the division of Palestine between Israel and Transjordan following the end of the Mandate.[26] Although the reality of co-operation between the two states was unlikely, Transjordan’s self-interested tactic of colluding with the enemy laid the foundations for mutual restraint in the war that followed, betraying Palestinian hopes for liberation and ensuring the Arab Legion’s aversion to any substantial engagement with Zionist forces.[27] 
Phase 3 - The Point of No Return: 11th June 1948 
On the 11th June, the first ceasefire of the 1948 war was called by the UN Security Council. While the attempts at compromise led by Count Folke Bernadotte ultimately proved unsuccessful, this event marked an irreparable shift in power dynamics. Having used the pause in the conflict as an opportunity to regroup, Israel began to import weaponry from Europe. In spite of the limited capacity of the Arab regular armies, their various military successes up to this point meant that Zionist victory had not yet seemed an inevitability. 
By the time the second ceasefire was concluded, this was no longer the case. With Israel having improved its position so significantly that Zionist forces switched to an offensive military strategy, the Arab armies were forced to abandon their wider aims and adopt defensive positions. By December 1948, Israel had seized Nazareth and most of Galilee and broken the Egyptian blockade in the Negev.[28] Although armistice agreements were not signed until March, the 11th June marked the beginning of the end for the Arab forces. 
Assessment - The Implications of Arab Strategy 
The success of Arab aims in the 1948 war was mixed. With Zionist victory legitimising the new state and expanding its borders by 21%, hopes of opposing Israel and avoiding its domination of the region were dashed. While Syria and Egypt succeeded in preventing the creation of Abdullah’s Greater Syria, their attempts to halt Transjordanian expansion failed as the King emerged from the conflict with control of the West Bank, including East Jerusalem. While this territorial victory is evidence of Transjordan’s success in achieving some of its national aims, the impact of the Arab defeat left the state’s leadership in a vulnerable position. Although Abdullah had hoped his achievements would cement his standing, his assassination in 1951 demonstrates the consequences of his strategy.[29] As a result of their limited military capabilities, misguided expectations of an easy victory resulting from a lack of long-term planning (Egyptian generals commented that the invasion would be a ‘parade without any risks’),[30] and inability to work together, Arab success in 1948 was minimal and the Arab strategy flawed. It was, however, the Palestinians who emerged from the conflict in the worst position of all. Having entered a war they were unprepared for in every way a nation could be unprepared,[31] their aims of preventing the creation of an Israeli state and ensuring Palestinian survival failed. 
While there is much debate over the reasoning behind the exodus that followed, regardless of its causes, Palestinians emerged from 1948 as a nation of refugees. As a result both of their own flawed strategy (their propaganda sparked fear and their military capacity was limited) and of the self-interested approach of the Arab states, and having struggled to defend their position against a superior Israeli force since November 1947, the Palestinian population ultimately found itself without a home, with between 550,000 and 800,000 of its people spread out across the Middle East.[32] The significance of this war cannot be overstated. Palestinian belief in their right to a homeland has provided an enduring catalyst for Middle Eastern conflict for decades. With the events of 1948 giving rise to both the notion of Palestinian nationalism and key ideological tenets such as muqawama (resistance) and 'awda (return), the conflict sparked a determination amongst Palestinians to regain what they had lost.[33] As the rest of the Arab world continued to debate the solution to the refugee crisis, the question of a Palestinian state found its place at the heart of the Arab-Israeli conflict. From setting the precedent for Israel's victim narrative to sparking a destabilising domino effect on Arab politics and leaving the Palestinian population a nation without a home, the impact of the 1948 war can be felt to this very day.

Harriet Solomon has recently graduated with an MA in Modern History from the London School of Economics.

Notes:
[1] Kirsten E. Schulze, The Arab-Israeli Conflict (London: Routledge, 2008), p. 17.
[2] Ibid, p. 289.
[3] Issa Khalaf, 'The Effect of Socio-economic Change on Arab Societal Collapse in Mandate Palestine', International Journal of Middle East Studies, Vol. 29, No. 1 (1997), p. 94.
[4] Benny Morris, The Birth of the Palestinian Refugee Problem Revisited (Cambridge: Cambridge University Press, 2004), p. 163.
[5] Mustafa Abbasi, 'The End of Arab Tiberias: The Arabs of Tiberias and the Battle for the City in 1948', Journal of Palestine Studies, Vol. 37, No. 3 (2008), p. 18.
[6] Morris, The Birth of the Palestinian Refugee Problem, p. 163.
[7] Spiro Munayer, 'The Fall of Lydda', Journal of Palestine Studies, Vol. 27, No. 4 (1998), p. 83.
[8] Benny Morris, 'The Historiography of Deir Yassin', Journal of Israeli History, Vol. 24, No. 1 (2005), p. 79.
[9] Schulze, The Arab-Israeli Conflict, p. 21.
[10] Ibid.
[11] Benny Morris, 1948: A History of the First Arab-Israeli War (New Haven: Yale University Press, 2008), pp. 126-128.
[12] Walid Khalidi, 'Plan Dalet: Master Plan for the Conquest of Palestine', Journal of Palestine Studies, Vol. 18, No. 1, Special Issue: Palestine 1948 (1988), p. 9.
[13] United Nations Security Council, Cablegram Dated 15 May 1948 Addressed to the Secretary-General by the Secretary-General of the League of Arab States, S/745 (1948), [Accessed 01 March 2021].
[14] Avi Shlaim, 'The Debate about 1948', International Journal of Middle East Studies, Vol. 27, No. 3 (1995), p. 300.
[15] Eugene Rogan and Avi Shlaim, The War for Palestine: Rewriting the History of 1948 (Cambridge: Cambridge University Press, 2007), p. 150 and pp. 177-178.
[16] Ibid, p. 204.
[17] Kirsten E. Schulze, 'The 1948 War: The Battle over History', in Joel Peters and David Newman, Israel-Palestine Handbook (London: Routledge, 2012), pp. 48-49.
[18] Rogan and Shlaim, The War for Palestine, p. 198.
[19] Morris, 1948: A History of the First Arab-Israeli War, p. 236.
[20] Rogan and Shlaim, The War for Palestine, pp. 196-197.
[21] Efraim Karsh, The Arab-Israeli Conflict: The Palestine War 1948 (London: Osprey, 2002), p. 60.
[22] Rogan and Shlaim, The War for Palestine, p. 204.
[23] Ibid, pp. 195-196.
[24] Schulze, 'The 1948 War: The Battle over History', p. 49.
[25] Karsh, The Arab-Israeli Conflict, p. 62.
[26] Efraim Karsh, 'The Collusion That Never Was: King Abdallah, the Jewish Agency and the Partition of Palestine', Journal of Contemporary History, Vol. 34, No. 4 (1999), p. 570.
[27] Schulze, 'The 1948 War: The Battle over History', p. 50.
[28] Schulze, The Arab-Israeli Conflict, p. 18.
[29] Ibid, pp. 20-21.
[30] Morris, 1948: A History of the First Arab-Israeli War, p. 185.
[31] David Tal, War in Palestine, 1948: Israeli and Arab Strategy and Diplomacy (London: Routledge, 2004), p. 470.
[32] Schulze, The Arab-Israeli Conflict, p. 21.
[33] Avraham Sela and Alon Kadish, 'Israeli and Palestinian Memories and Historical Narratives of the 1948 War – An Overview', Israel Studies, Vol. 21, No. 1 (2016), p. 6.
- Shah Abbas: Founder of Iranian Modernity or Upholder of Tradition?
Shah Abbas I, the ruler of the Safavid empire from the late sixteenth to the mid-seventeenth century, was accorded a legendary reputation by his contemporaries, with John Chardin even claiming that after his death, "the prosperity of Persia ended likewise."[1] Much of the historiographical tradition on Abbas adopts this viewpoint, and only relatively recently has there been a shift away from this misleading, exaggerated paradigm toward a more critical examination of the Shah's reign. Abbas's reputation is, to a certain degree, supported by the events of his rule; he salvaged the Safavid project from the chaos that erupted after Tahmasp's reign, and many of his reforms were quite successful. Some historians, such as Roger Savory, have interpreted these developments as signs of modernity and present the Shah as the creator of a modernised Iranian nation.[2] This description is problematic: it lacks nuance and disregards the fact that although Abbas restructured his empire and endowed it with some novel traits in the areas of the military, foreign relations, and governance, the polity retained many of its original, fundamental characteristics. Most significantly, he was unable to solve the problems that most originally nomadic dynasties in the region had faced: the struggle to move permanently away from traditional Turko-Mongol methods of governance, and the opposing forces of the nomadic and sedentary populations. Ultimately, Abbas provided the Safavids with a well-controlled, slightly modified continuity with the past, and his reforms and achievements were not radical enough for him to be deemed the legitimate founder of modern Iran. Before considering Abbas's impact on the Safavid empire, it is important to address the general criteria by which a modern state should be defined.
While it is difficult to say exactly what indicates modernity, there are several key components to consider when attempting to determine whether a nation is modernised. In their respective studies of the Safavids, both Andrew Newman and Roger Savory assert that modernity means complete central control over the military and possession of a standing army, a shared national identity among all citizens, fixed borders, and above all, a highly centralised administration.[3] Strong diplomatic and trade relations with foreign countries, and in the case of the Safavids specifically, a resolution to the constant tension between nomadic and sedentary styles of rule, also would point toward modernisation. These elements can be observed to some degree within Abbas’s empire, but the Shah failed to modernise his state because he did not change enough during his reign. Abbas’s military reforms were extensive, and at first glance, appear to have modernised the army. The way soldiers were paid was altered to make reporting for duty more appealing.[4] The Shah increased the number of soldiers in the ghulam corps, creating a force equivalent to a modern standing army.[5] In addition, he updated the weaponry used by the militia, mainly to counter that of the increasingly powerful Ottomans. Carmelite visitors to the Safavid region describe the Shah’s interest in military matters, stating that he had “a great liking for warfare and weapons” and “introduced into his militia the use of and esteem for arquebuses and muskets, in which they are very practiced.”[6] The English traveller Thomas Herbert also mentions weaponry in his chronicle; “they know well how to use the bow, dart, scimitar, gun, and javelin. Their arquebuses… they use very well, but detest the trouble of cannon and such pieces as require carriage.”[7] These quotes imply that it was Abbas who introduced technologically advanced weapons into Safavid warfare, and that the soldiers used them frequently and adeptly. 
Both accounts, however, were written by foreigners; it is likely that the Shah wanted to impress them with his army’s prowess, and consequently ensured that the travellers only saw what he wished them to see. This explains the rather inaccurate descriptions presented in these texts. Abbas was not, as is commonly believed, the first Safavid leader to promote the use of modern armaments. Ismail began to reform the military after his defeat at Chaldiran; he asked Italy for assistance in acquiring weaponry and knowledge of contemporary war tactics, resulting in a stronger armed force with an increased number of musketeers.[8] Abbas, along with Robert and Anthony Shirley, was also not responsible for updating Safavid artillery, and the equipment used by Safavid troops in the seventeenth century was outdated. For example, Iskandar Beg Munshi writes that the Safavids used “two huge siege guns firing shots weighing thirty Tabriz mann” in the 1605 Battle of Sufiyan against the Ottomans.[9] This may appear to be a sign of modernisation, but by this time, such artillery was effectively obsolete.[10] Moreover, many members of the Safavid army were reluctant to use the new weaponry, as it did not suit traditional warfare practices and, in the case of larger firearms, was often cumbersome.[11] The use of modern artillery in battle was still quite rare during Abbas’s reign, and the military was not particularly experienced in its use. In most contemporary accounts, the Safavid army is presented as updated and organised, but these descriptions do not reflect reality. Abbas did attempt to reform along modern lines, but apart from his expansion of the standing ghulam army, these changes were not extensive enough to consider the military modernised. The Safavid empire’s contact with foreign nations increased under Abbas, both in terms of military alliances and commercial relations. 
Prior to the Shah's rule, Ismail and Tahmasp sustained contact with Venice, and much of the correspondence between the two regimes demonstrates a mutual desire to form a pact against the Ottomans.[12] A precedent for European-Safavid cooperation was set by these two leaders, but their international networks were quite small and insignificant compared to those of Abbas, who had a much broader understanding of world politics and was therefore able to foster both diplomatic and trade connections with an increased number of countries.[13] From 1608, "contact between Persia and Europe was joined far more consistently than ever in the past" and many foreign rulers sent ambassadors to the Safavid court to cement ties with the empire, largely due to mutual concern about the rising power of the Ottomans.[14] The letters of Robert Shirley, Abbas and Clement VIII compiled in the Carmelite Chronicle further illustrate a desire for cooperation and the establishment of greater links between European powers and the Shah.[15] Abbas opened his empire to international contact far more than his predecessors, and many nations wanted to form military alliances with the Safavids, largely precipitated by a desire to check the growth of the Ottoman empire. The development of stronger international diplomatic ties seems to suggest that Abbas's empire was beginning to enter the modern world order. However, while the Shah did manage to bring his empire closer to modernity by linking it more firmly to the foreign community, he was unable to fully integrate the Safavids into global politics, and in large part, Europeans developed ties with the region only because of a shared fear of Ottoman expansion. In addition to diplomatic relations, Abbas also connected the Safavids to foreign states through commerce.
He successfully expanded the lucrative silk trade and created a royal monopoly on the commodity to give the crown more authority over the economy.[16] He brought Armenians, who were adept at business, into Isfahan, and soon a class of successful merchants who possessed extensive knowledge of international trade routes developed.[17] Abbas also launched a project to renovate roads; for example, there was much roadwork in 1622 in Mazandaran to encourage both domestic and international trade.[18] He also created many caravanserais and the sang-farsh, a paved road between Ardistan and Firuzkhuh that made Isfahan the centre of internal commerce.[19] All of these developments made it easier for merchants and artisans to travel throughout his empire and engage with Safavid commercial activity. Some historians, such as Bert Fragner, have argued that by the sixteenth and seventeenth centuries, it was too late for Abbas's empire to participate fully in international trade, which was "running along new tracks" by this time, causing Iran to find itself "pushed onto the fringe of the world economy" and unable to partake in the global market.[20] To a certain extent, this is a reductionist view; commercial connections between European powers and the Safavids were important during the seventeenth century, mostly due to the empire's location along active trade routes in Asia. Europeans were interested in participating in trade with Iran mainly because of its high-quality silk, but most simply wanted to use it as a transit region. Commodities "went through the country but didn't belong to it."[21] Essentially, Abbas increased interest in the Safavid empire as a trading partner, but failed to modernise the region's commercial system enough to participate fully on a global level.
The empire, while not entirely relegated to the outskirts as Fragner argued, was not considered a large enough independent trader to be integrated into the international network, and was used primarily as a transfer zone for commodities being exchanged between larger powers. Abbas had not modernised but only somewhat strengthened the empire's global commercial position. He simply improved relations with other countries after the period of confusion and chaos following Tahmasp's reign, during which new connections could not be established and existing ones were poorly maintained. In addition to forming ties with European countries, Abbas attempted to develop a shared national identity to offset the Ottoman threat, but this feeling was present in the empire at such a negligible level that it cannot be considered modern nationalism. Shi'ism was used as a governmental tool after the second Safavid civil war of 1576 to 1590 to strengthen the centre's legitimacy. Abbas presented himself as a pious Shi'ite leader to garner support from a wider population, since after he subdued the civil war, he was unable to appeal to the Qizilbash as the leader of the Safavid Sufi religious order due to both the events of the previous years and the increased number of conversions to Shi'ism.[22] The use of religion to justify the shah's right to rule was naturally not a modern concept, but some historians have argued that in addition to being employed as a method of legitimisation, the introduction and propagation of Shi'ism, and the subsequent widespread conversion, contributed to the creation of nationalistic sentiments.
There was already a developing idea of nationhood during the mid-sixteenth century; for example, Munshi uses the terms “mulk-i Iran” and “mamalik-i Iran” to describe the empire at the time of Tahmasp.[23] Abbas furthered this idea by attempting to use Shi’ism to promote allegiance to the nation in hopes that it would unite the population against Sunni Ottoman and Uzbek enemies, and mitigate the effects of strong ethnic and Qizilbash ties.[24] For example, the Shah cited religion when encouraging troops to report for duty, ordering that when men were called, “they should report without delay out of zeal for their faith…”[25] Though Abbas did make an effort to create identification with a larger national project, the “nationalism” generated through shared religion was not pervasive enough to be considered modern. Ultimately, tribal and ethnic loyalties were regarded by many of the governmental elites and members of the general population as more important, and remained prevalent despite Abbas’s efforts to consolidate his empire’s inhabitants through religious identity. In matters of government, Abbas managed to reform, but again failed to modernise. An examination of the legitimisation methods employed by the Shah shows a measure of continuity with those used by previous Turko-Mongol leaders of the fifteenth century. In addition to religion and the concept of divine kingship, the Shah used another traditional technique to establish authority; emphasising the empire’s Turko-Mongol heritage and endeavouring to create a strong connection between the Safavids and the Timurids through a plethora of references to Timur in contemporary literature. 
Qazi Ahmad's Khulasat al-tawarikh exemplifies Abbas's keen interest in stressing Timurid dynastic ties.[26] In an interpretation of one of Shaykh Safi al-Din's dreams predicting Ismail's rule, Ahmad changes Amir Mahmud's original description of the first Safavid ruler as "king" to Timur's title "the lord of the fortunate conjunction," thereby linking the first Safavid shah and the great Turko-Mongol leader.[27] In addition, many anecdotes about meetings between early Safaviyya shaykhs and Timur were formulated during Abbas's time, with one even claiming that Timur predicted the rise of the Safavids during the fifteenth century after visiting the order in Ardabil.[28] Legitimising stories such as these were also circulated to foreign nations to give weight to the dynasty's claim to the throne in the eyes of other rulers. For example, a waqf document considered to have been forged in Abbas's court shows an endowment from Timur to the Safavids, and was sent to the Mughal emperor Jahangir to emphasise the strong historical ties between the two dynasties.[29] By using the widely recognised and revered figure of Timur to justify his reign both internally and internationally, the Shah was continuing to utilise a legitimisation device that other rulers, such as Babur, had relied on long before he came to power. According to Munshi's History of Shah Abbas, the king was "responsible for some weighty legislation in the field of administration."[30] This quote is one of the few in his text that does not overstate the Shah's reforms; Abbas's centralisation of the Safavid government despite the strength of relatively autonomous Qizilbash tribal leaders was indeed an achievement. He improved on existing theories of governance and created a more stable Safavid empire, with his two most influential reforms being the khassa policy and his approach to the issue of the Qizilbash.
The khassa policy was just one of the ways in which the Shah tried to assert his power in the peripheries and at the same time increase royal sources of revenue. Abbas converted mamalik lands, which were previously allotted to Qizilbash amirs, into khassa, or crown lands, and he appointed loyal viziers as governors instead of the Qizilbash in an effort to bring more areas under central influence.[31] This conversion, while certainly centralising, contributed in part to the decline of the empire and cannot be considered an example of modernity. The forms of taxation employed by the viziers were harsher than those of the original amirs, which caused agitation in the peripheries and contributed to the destabilisation of the Safavid state.[32] Abbas, and subsequent shahs, showed a distinct lack of interest in this exploitation and were willing to let it continue unchecked as long as the provincial governors provided sufficient funds for the centre. Khassa conversions were a superficial method of demonstrating economic improvement and governmental reform, without making changes to the fundamental semi-autonomous provincial governance system. The only noticeable difference was that Abbas’s appointed governors were mostly loyal to him and more eager to gain his favour than the Qizilbash, and would consequently send increased funds back to the Shah. So long as the viziers provided more money than the Qizilbash, the Shah hardly monitored his provinces. Abbas’s khassa reform is not an example of governmental modernity, but just another slightly more regulated form of feudalism, which was practised by the Safavids prior to his ascendance to the throne. Abbas’s approach to the Qizilbash tribes was a significant governmental break with the past, and it is the main factor that has led many historians to claim that he was the founder of a modern Iranian state. 
A major problem for leaders of empires originally established by nomadic tribes was the conflict that inevitably arose between these founding groups and the more centralised governmental style that many rulers eventually wished to establish. The Safavid shah was no exception; Abbas aimed to centralise the state, but Qizilbash nomadic tribes wished to retain their traditional ways of life and government. The Safavid empire emerged from a chaos of sultanates, all of which were attempting to gain any power they could after the fragmentation of the Timurid empire in the late fifteenth century. There was no central government or authority in most of these small territories until the Safavid empire was established in 1501. Due to the influence of the Turkoman Qizilbash force that brought Ismail to power, this original empire “shared Central Asian Turkic political traditions and a vision of conquest rooted in Mongol aspirations of world empire.”[33] During the reigns of Ismail and Tahmasp, very little changed in terms of government. Both adhered to Turko-Mongol traditions, where political and military power belonged to the Turkoman tribal elite, in this case the Qizilbash, while bureaucracy was supported by the sedentary population of Tajiks.[34] Riza Yildirim argues that the early Safavid state was the last embodiment of this type of government, and that Ismail and his followers merely took over, without modification, Aqqoyunlu administrative structures.[35] These institutions remained generally unchanged until the seventeenth century, when Abbas sought to radically reform the Safavid government and began to reject the decentralised, Turko-Mongol model of governance in favour of a centrally controlled regime. 
He focused on "consolidating his rule within the boundaries of the Safavid empire" according to Munshi, rather than on expansion.[36] Abbas eventually faced the same questions that all rulers of originally nomadic polities faced: how to occupy the Qizilbash tribes when frequent campaigns were no longer a priority, and how to create a modern, centralised state when most of these groups were unwilling to give up their autonomy and traditional nomadic practices. The introduction of the ghulams provided a solution to this issue, as they were able to counteract the influence of Qizilbash leaders by taking over many military and government positions. Abbas is often credited with the creation of the ghulam class, but it was Tahmasp who, after being manipulated for several years by Qizilbash leaders, first introduced the idea of a corps of Georgian and Circassian soldiers loyal only to the king.[37] Abbas expanded on this concept, appointing "to the highest offices and to the emirate promising officers who owed their rise to him alone" to decrease the influence of the Turkoman tribes even further, thereby consolidating his control over the provinces.[38] The ghulams curtailed the monopoly that the Qizilbash had on military and political power, but despite the noticeable decline in the number of Turkoman army and government officials during his reign, Turkomans still held the key posts in the centre and the provinces at the end of his rule in 1629.[39] Powerful nomadic chiefs continued to play a large role in the governance of the Safavid empire, and tribal loyalties in many peripheral areas were never eliminated. Abbas's style of leadership is comparable to that of Timur: his system of governance worked quite smoothly while he was on the throne, but fragmentation and destabilisation set in once he was no longer there to control it. This was illustrated in the late seventeenth and eighteenth centuries, when noticeably weaker shahs struggled to retain Qizilbash loyalty.
Abbas's attempts to marginalise the Qizilbash were decidedly progressive and represented a marked shift away from traditional nomadic governmental practices, but he was unable to fully execute this change and cannot be said to have modernised Iran. Overall, Shah Abbas's governance style should be viewed as a continuation of the cyclical governmental history of the Iranian plateau and its surrounding territories. This region fluctuated between Irano-Islamic and Turko-Mongol regimes based on which population was able to gain the most power at a given time, and Abbas's reign did not break this sequence. His administrative institutions showed remarkable continuity with early fifteenth-century Irano-Islamic traditions, which stressed the importance of a strong central administration and led to the reduction of the nomadic leaders' power.[40] Shah Rukh was one of the most important figures who promoted this type of rule; he encouraged sedentarisation over nomadism, and attempted to create a unified society ruled by a leader who combined secular and religious authority.[41] Abbas and Shah Rukh's governance styles were similar, and the Safavid shah can be considered to have partially revived a tradition that was abandoned a century earlier due to increased nomadic activity following the collapse of the Timurids. Every empire that preceded the Safavids—the Seljuks, the Mongols, and the Timurids—was unable to end the cycle of a period of centralisation followed by a period of decentralisation and nomadic dominance.[42] The Safavids were no exception; Abbas was marginally more successful in resolving nomadic versus sedentary tensions, but his core policies were too similar to those of fifteenth-century empires to create any profound change, and therefore he was unable to effectively modernise. Shah Abbas consolidated Safavid rule and established a stable, prosperous empire during his reign.
He reformed the army, linked the region to foreign countries more than ever before, and reorganised several government institutions, such as land policy. Despite these many improvements, he cannot be considered the founder of modern Iran; to describe him as such lacks nuance and perpetuates the traditional historiographical narrative of Abbas's rule. As Hans Roemer says, the Shah was prepared to "cast aside the old customs of the order whenever it was in his interests to do so. On the other hand, he willingly obeyed and enforced them if it suited him."[43] This is an accurate description of Abbas's sixteenth and seventeenth-century reforms; the alterations he made strengthened his control over the empire and appeared progressive, but in fact his reforms were largely superficial and did not radically change Safavid institutions and ideologies. Perhaps most importantly, he was unable to find a solution to the fundamental issue that affected all preceding Turko-Mongol empires: the nomadic-sedentary, or more specifically Turk-Tajik, differences that led to perpetual fluctuation between Irano-Islamic and Turko-Mongol forms of government. Formulating a solution to the problem of the centrifugal Qizilbash forces would have given the Safavids an opportunity to truly modernise the region. Because Abbas failed to distance himself in any significant way from older styles of governance or to resolve the core tensions present in the empire, he cannot be considered to have founded modern Iran.

Dorothy Green is in her 4th year of an MA in Middle Eastern Studies at the University of St. Andrews.

Notes:
[1] John Chardin, Travels in Persia 1673-77 (London, 1927), p. 188.
[2] Iskandar Beg Munshi, History of Shah 'Abbas the Great, trans. Roger Savory (Colorado, 1978), p. xxii.
[3] Andrew Newman, Safavid Iran: Rebirth of a Persian Empire (London, 2006), p. 123; Munshi, History, p. xxii.
[4] Munshi, History, p. 1142.
[5] A Chronicle of the Carmelites in Persia, Vol. 1, trans. and ed. Herbert Chick (London, 2012), p. 161.
[6] Ibid, p. 160.
[7] Thomas Herbert, Some Years Travels into Divers Parts of Africa and Asia the Great (London, 1677), p. 243.
[8] Sholeh A. Quinn, Shah Abbas: The King Who Refashioned Iran (London, 2015), p. 80.
[9] Munshi, History, p. 843.
[10] Colin Imber, 'The Battle of Sufiyan, 1605: A Symptom of Ottoman Military Decline?', in Willem Floor and Edmund Herzig (eds), Iran and the World in the Safavid Age (London, 2012), p. 93.
[11] Quinn, Shah Abbas, p. 81.
[12] Giorgio Rota, 'Safavid Persia and its Diplomatic Relations with Venice', in Willem Floor and Edmund Herzig (eds), Iran and the World in the Safavid Age (London, 2012), pp. 150-151.
[13] Munshi, History, p. 553.
[14] Ibid, pp. 191-193.
[15] Carmelites, pp. 80-84.
[16] Quinn, Shah Abbas, p. 111.
[17] Chardin, Travels, p. 138.
[18] Munshi, History, pp. 1211-1212.
[19] Bert Fragner, 'Social and Internal Economic Affairs', in Peter Jackson (ed), Cambridge History of Iran, Vol. 6 (Cambridge, 2008), p. 527.
[20] Ibid, p. 526.
[21] Matthee, 'Safavid Economy', p. 43.
[22] Newman, Safavid Iran, p. 56.
[23] Roger Savory, 'The Safavid Administrative System', in Peter Jackson (ed), Cambridge History of Iran, Vol. 6 (Cambridge, 2008), p. 352.
[24] David Blow, Shah Abbas: The Ruthless King Who Became an Iranian Legend (London, 2009), p. 187.
[25] Munshi, History, p. 525.
[26] Sholeh A. Quinn, Historical Writing During the Reign of Shah Abbas: Ideology, Imitation, and Legitimacy in Safavid Chronicles (Salt Lake City, 2000), pp. 44, 65.
[27] Quinn, Historical Writing, p. 75.
[28] Ibid, p. 89.
[29] Lisa Balabanlilar, 'Lords of the Auspicious Conjunction: Turco-Mongol Imperial Identity on the Subcontinent', Journal of World History, 18:1 (March 2007), p. 5.
[30] Munshi, History, p. 527.
[31] Blow, Shah Abbas, p. 38.
[32] Savory, 'Administrative System', p. 366.
[33] Balabanlilar, 'Lords', p. 1.
[34] Riza Yildirim, 'The Rise of the Safavids as a Political Dynasty', in Rudi Matthee (ed), The Safavid World (London, 2021), p. 59.
[35] Ibid, pp. 68-69.
[36] Munshi, History, p. 615.
[37] Savory, 'Administrative System', pp. 362-363.
[38] Munshi, History, p. 518.
[39] Newman, Safavid Iran, p. 53.
[40] Yildirim, 'Rise of the Safavids', p. 71.
[41] Ibid, p. 60.
[42] Savory, 'Administrative System', p. 371.
[43] Hans R. Roemer, 'The Safavid Period', in Peter Jackson (ed), Cambridge History of Iran, Vol. 6 (Cambridge, 2008), p. 263.
- The 'Globalisation' of the Hellenistic Age
As with most cultural processes in the ancient world, whether ethnic identities or cultural exchange and interconnectivity, the prevailing view in scholarship derives mainly from the literary or historiographical evidence. Although archaeological evidence has been used to identify and analyse these processes, the dominant perspective has its foundational origins in the literary approaches of the preceding decades. With regard to the processes of cultural change and exchange in the Hellenistic Period, the prevailing trend derives from a Greco-centric perspective, which presupposes a dichotomy between Hellenes and non-Hellenes in matters of cultural contact and focuses on the eastern Mediterranean because of its significance to the expanding Hellenic culture. As a result, Greek elements are disproportionately represented compared with the other cultures present in the Mediterranean at the time, and the western Mediterranean is assumed to fit the cultural patterns of the east. Such a view comes from the use of literary sources, such as Plutarch, who see the campaigns of Alexander and the subsequent conquests as a kind of cultural crusade against the barbarians, as well as from the entrenched notion within the field of Classics that it is the Greeks and Romans above all else who are worthy of our attention and interest.
Although more contemporary attempts have been made to see how the conquered reacted to and culturally resisted the conquerors, including those of Gruen and La’da, who both sought to represent the minority perspectives within the context of Ptolemaic Egypt, little has been done to represent the peripheral cultures in the west not directly influenced by Hellenic conquests, as they were never deemed by traditional scholarship to be important enough to fit within its Greco-centric agenda.[1] This means that where there are independent studies into these peripheral peoples and their material culture, they are assumed to fit into the overarching cultural bloc of the Hellenistic sphere propagated by scholars who, as a result, aim only to analyse the Greek aspects of influence. This results in a tendency to see the ‘less sophisticated’ native elements as “passive receptacles of cultural influence rather than as active manipulators of culture”.[2] Therefore, more nuanced indications of other external or internal influences in their material culture are ignored in favour of the Greek, and no attention is given to how some influences on material culture may have transferred in the opposite direction. The aim of this paper is to break down the dichotomy of Hellene and non-Hellene by representing the peripheral civilisations in the western Mediterranean that have thus far been demoted to a footnote in the history of the Hellenistic Period. This will be done through the analysis of the evidence provided by their material culture. I will establish the Mediterranean in this period to be a complex and varied basin for interconnectivity and cultural exchange that was not dominated by the cultural bloc of the Hellenistic world as traditionally propagated, as well as demonstrate that the peripheral peoples I will deal with used a range of external and internal influences on their material culture to create an individual identity that was not solely linked to the Greek east. 
Although ‘Hellenisation’ was a factor during this period, as well as, to a lesser extent, in preceding periods, we must not consider it the only process of cultural exchange occurring; we must also consider other non-Greek influences, as well as how peripheral cultures may have influenced the Greeks themselves. Where we do consider ‘Hellenisation’ and the Hellenic links with the peripheral peoples, we should see it as a relationship between Hellene and Celt, Iberian, Indian, Briton or Numidian, in order to represent their cultural significance and remove the generalising approach of considering only Hellene and non-Hellene. These civilisations were culturally linked in many ways, but also had varied influences and did not solely exist to provide a generalised cultural opposite to the Greeks. The first section of this paper will consist of a brief discussion of the nature of the current scholarship and how recent methodological developments can lead to nuanced interpretations of the available archaeological evidence. The second and third sections will be case studies of peripheral Mediterranean cultures that have thus far been subjected to a ‘Hellenistic’ scholarly focus with regards to their material culture. The case studies will cover the Numidians and Iberians respectively, and each will take a different focus owing to the varied approaches and aspects of research that each area has received. For example, much of the scholarship into the material culture of Numidia that can be used to identify external influences has focused upon elite architectural evidence, such as the royal monuments that dot the Numidian landscape. On the other hand, the evidence from Iberia includes ceramics, metalworking, sculpture and more. The resulting difference means that we will be able to represent an independent elite in Numidia that has thus far been directly linked to Hellenistic monarchs, as well as a wider cultural group in Iberia. 
It will be important here to distinguish between Hellenistic influences on elites, who sought to gain prestige from the adoption of foreign elements, and other aspects of influence on the wider material culture. These two aspects combined will provide two differing perspectives that will contribute to our overall understanding of the complexities of cultural change in the western Mediterranean and how different parts of society reacted. The final section will bring together the new interpretations of the case studies and present a new approach to understanding interconnectivity and cultural exchange in the Hellenistic Period. Here I will show how the methodological developments that I intend to use can be applied to this period in order to reduce the limiting effects of the term ‘Hellenisation’, and in fact offer a new term to better describe this period of Mediterranean history. Overall, I will highlight the eclectic nature of material culture in the Mediterranean and demonstrate that any overbearing focus on the significance of Hellenic culture, beyond its facilitating the conquests of Alexander, is unrepresentative of the complex realities.

I. The State of Current Scholarship

For the literary origins of the Greco-centric perspective that prevails in traditional scholarship we must turn our attention to the historiography and oratorical works of antiquity, most notably the work of Isocrates and later Plutarch. 
In his Panegyricus, as well as in some of his letters to Philip II, Isocrates identifies the wealth of the Achaemenid Empire as stagnant, requiring Greek manpower to make it work and transport it west; he claims this can be achieved through the establishment of Greek cities.[3] This would later inspire Plutarch in his Moralia to suggest that the conquests of Alexander and the subsequent establishment of around seventy cities were motivated by a desire to deliver Greek civilisation, literature and government to the non-Greek world.[4] Some contemporary reinterpretations of the literary sources, as well as some minimal archaeological and topographical analyses, have allowed an insight into some more pragmatic motivations for Alexander’s foundations, including facilitating further conquest, the protection of border zones and supply lines, or economic importance.[5] Nevertheless, the original ‘Hellenising’ perspective originating with Isocrates and Plutarch has laid the conceptual framework from which scholars of Classics, ancient history and even classical archaeology still unavoidably build. In addition, it is Polybius who gives us the notion that before the expansion of Rome, the Mediterranean was divided between the Punic and Italian west, and the Greco-Asian east.[6] This ancient notion has also influenced the ways in which modern scholarship approaches cultural change in the Hellenistic Period, and there is no real evidence that the wider Mediterranean community saw it this way. That being said, there has always been an acknowledgement of the creolization involved in ‘Hellenisation’. For example, Droysen was influential in propagating the idea that the expanding Hellenic culture merged significantly with the native cultures of Egypt and the East; something he termed Hellenismus.[7] Moreover, Droysen identified this period as one where the boundaries of East and West moved firmly into the Mediterranean. 
However, although this is a rightful acknowledgement of at least some elements of reverse cultural influence, the focus still remains on the study of the Greek east, and as a result it only serves to further propagate the cultural dichotomy between east and west, as well as between Hellene and non-Hellene, when considering the Mediterranean as a whole. Other peripheral cultures continued to be seen as passive elements in this process and are not given their due representation. The notion of cultural influence and exchange during this period remained based around the idea of the ‘Hellenic’ and the ‘other’, and as a result no systematic or independent studies into the ‘other peoples’ involved were enough to influence this prevailing view. More to the issue at hand, this prevailing view of cultural blocs has influenced any attempts to research the material culture of the peripheral peoples. Coarelli and Thébert, for example, in their research into the Numidian royal monuments at Thugga, claim that they are influenced solely by an eastern Mediterranean architectural tradition that exploited artistic elements and techniques from the Greek world, and from this they draw conclusions about the political nature of the Numidian royals as being inextricably linked to Hellenistic monarchs.[8] Equally, with regards to the study of cultural change in Iberia, the term ‘Hellenisation’ refers specifically to the Greek influences manifested in ceramics, sculpture, architecture, metalwork and burial rites.[9] As a result, there is a habit among Spanish and Portuguese scholars of attributing any non-local elements of material culture to either Hellenic or Punic (and later Roman) influences.[10] However, more contemporary interpretations of the material evidence, less constrained by the conceptual groundwork drawn from the literary sources and based more on contextual comparisons, have highlighted a different perspective: one of an array of cultural influences. 
Quinn, for example, highlights that there were intense levels of creolization in the Numidian monuments, including Libyan and Punic, Levantine, Egyptian, as well as Greek artistic influences.[11] Moreover, Keay argues that complex regional differences in Iberia can begin to explain a varied material culture between regions that has otherwise been ascribed to external influences.[12] What can be identified here is that, for the most part, interpretations of the material evidence, especially architecture, have always focused on the Greek elements, as they were considered the most worthwhile factor to study; something which is unavoidably linked to the field of Classics being passionately indebted to the literary sources. That being said, there have been more contemporary attempts to limit the effects of this Greco-Roman focus, including the work of Versluys and Quinn, to whom I am indebted with regards to their revisionist models. By focusing solely on the material evidence and by relinquishing the models set by the traditional scholarship, we can begin to gain a better insight into the complexities of cultural change. This is the approach taken by Versluys, as well as some of his predecessors, who applied it to the Romanisation debate in order to break down the existing presupposed dichotomy.[13] Another important model to work from is that set by Quinn in her edited volume on the Hellenistic West. Quinn seeks to break down the notion of cultural blocs and limit the presupposed eastern Mediterranean cultural dominance that continues to affect our studies of the Mediterranean today, as well as highlight its varied and eclectic material culture. 
She argues that the separate cultural worlds of ‘Hellenistic’, as well as ‘Punic’, are modern monolithic constructs and that there is no evidence to suggest the ancients saw them that way.[14] Therefore, by reinterpreting the material evidence, as well as interpreting new material evidence, through the lens of intense regional variability and cultural individuality, I hope to get one step closer to understanding the perspectives of the peoples that inhabited the Mediterranean in antiquity, whilst removing the shackles of Greco-Roman cultural dominance that originate from centuries of modern scholarship.

II. The Numidians

As already mentioned with Coarelli and Thébert’s interpretations of the Numidian royal monuments, Numidia and its elite architectural culture have been seen as passive elements in a process of Hellenistic influence. As a result, this has led to interpretations of the political nature of the Numidian royals as being directly linked to Hellenistic monarchs in terms of displaying prestige through the use of foreign artistic techniques and forms.[15] Coarelli and Thébert identify such a tradition as originating with the sixth-century tomb of Cyrus at Pasargadae, as well as being identifiable in the monument of the Nereids at Xanthos in the fifth century, the fourth-century Mausoleum at Halicarnassus, the early third-century Mausoleum of Belevi and the articulated tumulus in the Hellenistic Asklepeion at Pergamon.[16] The evidence they cite for such a conclusion of a dominating Hellenistic influence, from which they emphasise the direct contacts between the Numidian kings and the cities and rulers of the Hellenistic East, is the apparent local familiarity with Hellenistic iconographic ‘codes’, such as the significance of the diadem on royal coin portraits.[17] Other scholars in the same tradition, however, have argued for a solely Punic influence. 
Camps and Shaw, for example, argue that the Punic and Phoenician influences in the monuments suggest high levels of acculturation between Carthaginian and Numidian elites during the Hellenistic Period, likely driven by intermarriage between them.[18] Martinez too follows this trend by suggesting that the tower-form of the monuments is of Punic influence. Equally, Lancel, following Poinssot, claims that the Thugga monument is “the only great monument of Punic architecture still standing on Tunisian soil”.[19] Either side of this debate, however, still unconsciously advocates a cultural dichotomy, whether it be Greek or Punic. Moreover, both still propagate the notion of a cultural divide between east and west, wherein the western cultures are passive receptacles merely to be influenced by Greek, or possibly Punic, cultural features manifested in artistic and architectural techniques and styles. Nonetheless, Lancel also admits that Egypto-Greek influences derive from a lost world of Punic architecture.[20] The comparisons he draws on are the stelai with Aeolic capitals, Ionic columns, cavetto cornices and winged solar disks in the tophets at Carthage, Hadrumetum and other Phoenician colonies in Sardinia and Sicily, as well as an Aeolic pilaster depicted on an architectural fragment from Medjez-el-Bab.[21] Lancel’s allusion is significant, as what is clear is that there was an architectural tradition in Numidia that drew on a range of styles, forms and techniques from around the Mediterranean. 
Quinn, for example, highlights that the Thugga monument has a bilingual Libyan and Punic inscription, as well as a first storey of Levantine Aeolic capitals, which were widespread in Carthage and the surrounding region during the Hellenistic Period, a second storey of Greek Ionic capitals, and above that a cavetto cornice related to Dynastic Egyptian architecture that had long been popular in the Maghreb; the same cultural variety can be seen in Lancel’s examples.[22] Moreover, Quinn asserts that the monument has depictions similar to the iconography and coinage of Persian Sidon.[23] These varied influences are not indicative of a single static moment when an individual or group decided to draw on foreign artistic influences, but instead represent a stage of a developing tradition that had been occurring for quite some time before the Thugga monument, which dates to the third or second century BC. This development is identifiable when we consider the mausoleum from the cemetery at Pozo Moro, which dates to around 500 BC and exhibits local features alongside clear foreign influences.[24] With a range of external influences from the western and eastern Mediterranean, from the Greek, Egyptian and Punic, we can begin to see how the traditional models of dichotomy between the presupposed cultural blocs are not sufficient to represent the process of cultural change during this period; at least with regards to elite architectural forms. Furthermore, evidence of strong local influences demonstrates that there was a significant element of active agency on the part of the Numidians involved, building from a long-held architectural tradition. 
Gsell, for example, who highlighted the possibility of local influences in the early 20th century, suggests that the articulated circular form of the Madracen near Batna builds from a much older style seen in the bazina tomb-type, which leads him to suggest that such monuments were “indigenous monuments dressed in a cloak of foreign extraction”.[25] This is corroborated by Camps, and later Krandel-Ben Younès, who argue that the Madracen is a local form embellished by outside influences.[26] What is clear is that there was a local tradition that built on older Numidian forms, but also used architectural and artistic techniques from around the Mediterranean as forms of embellishment, likely as a source of prestige for the various elites and monarchs who sponsored their construction. Quinn highlights that the purpose of these foreign influences was not to align “their monuments with one or another ‘cultural tradition’”, but to point to “a variety of places and ideas that reinforced their local power, status and authority”.[27] Therefore, we must look at the Numidian royals not as mere receptacles serving a presupposed dominating process of ‘Hellenisation’, but instead as part of an independent society that chose to adopt various techniques and styles from a range of other cultures. By emphasising the local elements and the variety of external influences, we can begin to represent the Numidian elites in their own right and work from their perspective. The east-west cultural divide begins to diminish when we consider the range of Punic, Egyptian, Levantine, local, and even Greek influences, alongside the prior assertions that the Numidians were ‘Hellenised’. There was clearly a significant level of cultural independence that allowed the Numidians to adopt foreign influences to facilitate their own motives. This stands in stark contrast to the assumption that they were in some way forced to adopt Greek, or Punic, influences by a form of cultural dominance. 
Numidian contact with the Greek and Phoenician cultures would have been direct owing to the colonies on the North African coast. Law highlights that many of the Punic colonies “…served as victualling stations along the coasting routes to Spain and Egypt, but… also had an economic significance of their own as centres for fishing and as posts for trade with the peoples of the interior”.[28] He goes on to highlight that smaller colonies further west of Carthage served as emporia and provided access to the commodities of the interior of Numidia and the Maghreb, mainly ivory, hides and cedar wood.[29] Although contact with the Phoenician colonies would have been more direct owing to proximity, and possibly served more of an economic motive, contact with the Greek colonies in Cyrenaica would have taken a similar form in terms of cultural influence. We can identify the possibility of Punic influence in terms of technological innovations, including arboriculture and iron-smelting, as well as some cultural influences such as the Phoenician rite of child sacrifice in Thugga.[30] However, we must acknowledge that there was a significant level of cultural independence. We must also consider the fact that the Punic and Greek colonies in North Africa would have incorporated elements of the local population, resulting in a more creolized material culture.[31] It would have been the nature of these contacts that facilitated the varied influences on the architectural culture of the monuments discussed. However, owing to a lack of ceramic and other sufficiently datable evidence, we can only conjecture about the actual extent of direct contacts. Quinn concludes by stating that “the builders bolstered their prestige by co-opting global references, and their legitimacy by co-opting local ones”.[32]

III. The Iberians

Iberia occupies the same presupposed place in the process of Hellenisation, and is similar to Numidia in that it had direct cultural contacts with both the Greek and Carthaginian colonies nearby. However, as already mentioned, the evidence, and thus the interpretations of the internal and external influences on Iberian material culture, is based on a wider range of archaeological material, including coins, burial practices and urban architectural remains. As a result, we can infer more about the wider population rather than being limited to the elite. That being said, there are still some limitations in the availability of archaeological data, and Keay highlights that “for most towns… a full understanding of the interplays between local traditions and eastern Mediterranean influences is hampered by a lack of systematic fieldwork and publication…”.[33] Because of this I will not include an analysis of the ceramic evidence, mainly owing to the lack of an accurate understanding of the assemblages, as well as the fact that numismatic evidence and burial practices allow us a far more nuanced insight into the self-conceptualised identities of those that created and participated in them. This lack of material evidence is likely linked to the presupposed traditional views that are the subject of this paper. For example, the town of Arse-Saguntum (modern Sagunto in Valencia) features in various historiographical sources, including Dionysius of Halicarnassus and later Silius Italicus, who both incorporate it into Greco-Roman mythology, as well as Strabo, who claims that the Saguntines originated from Zakynthos.[34] This literary link between the Iberian cultural landscape and the Hellenic world cannot be ignored as providing the impetus for the traditional literary-based views that the Iberians were subject to the process of Hellenisation. 
Even if the account of the Saguntines’ origin is true, and we do not necessarily have any reason to think otherwise, it should not be used as evidence for a vast wave of cultural changes across the peninsula, and we must instead consider the archaeological evidence. In reality, Iberia was in direct contact with the Greeks and Carthaginians, and later the Romans, due to the presence of colonies as well as preceding maritime networks across the Mediterranean. As a result, we must account for the eclectic nature of the material remains and infer from them a much more complex picture of cultural change and influence. One key aspect of the material evidence which is increasingly informative of cultural influence and change is coinage, a prominent feature of Iberia’s archaeological landscape owing to the Greek, Carthaginian and Roman settlements, as well as those of an Iberian nature. However, we do not find distinctly Greek coins at the Greek colonies, nor distinctly Iberian coins at the local settlements, unless they have by chance travelled from Greece. Instead, the coins excavated at the various urban and rural sites across Iberia indicate a high level of creolization and cross-acculturation. For example, the Iberians of Tarraco, as well as most towns in Citerior, were not directly influenced with regards to their coin depictions until the Augustan Period, when Latin inscriptions were included on their coins.[35] Instead, the Iberian elements in these settlements minted bronze and silver coins bearing homogeneous depictions of the head of a youth and a galloping horseman, as well as local inscriptions.[36] These local elements, however, like many coins minted by other Iberian and Celtiberian communities, “were localised interpretations of widespread Hellenistic-period imagery…”.[37] Those more inclined to the traditional scholarship on Hellenisation may cite these Hellenistic influences as evidence in favour of their view. 
However, the regional variability of how the Iberians adopted various external influences demonstrates that no single homogeneous process such as Hellenisation will suffice to represent the complex realities. Keay highlights that “communities in south-west, south-east and eastern Iberia, as well as on the Balearic Islands, developed their own regional dynamics that drew differently upon continued Greek influences as well as those from a growing Punic presence in the south-western Mediterranean”.[38] What is clear is that the varied regional nature of the Iberian communities meant that they could draw on Greek influences in varying degrees and as they saw fit. Moreover, the Iberian communities that drew from Greek and other cultural influences tended to be located on the Mediterranean coast and the nearby interior; the hill communities deeper into the peninsula did not see the same level of cultural change. They were not, therefore, serving as passive elements in an overbearing and homogeneous cultural process originating in the eastern Mediterranean. Instead, some communities were amalgamating Hellenic and Punic cultural features alongside their own local traditions, likely in order to participate in the growing pan-cultural community in southern and eastern Iberia. Moreover, cultural creolization was not limited to the Iberian communities, and we can identify certain aspects of cultural exchange in the Greek colonies in Iberia that cast doubt on the presupposed monodirectional nature of traditional Hellenisation. Emporion was a Greek colony founded in the sixth century in what is now Catalonia. 
There was a period between the late third century and the early second century when the predominant symbols on its coinage were typically Greek, with depictions of Artemis and a winged Pegasus.[39] However, preceding this was a period between the late fourth and late third centuries BC when the coins took a distinctive Carthaginian style.[40] Furthermore, they were eventually replaced by bronze coins with Iberian inscriptions, which lasted well into the first century, although the Greek winged Pegasus continued to be depicted.[41] What is clear is that at different stages an intense level of cultural exchange can be identified, whether it be Greek influences on Iberian coins, or Iberian and Carthaginian influences on the Greek examples. This, combined with the regional variability of the Iberian communities, demonstrates the eclectic nature of the period and should begin to dissuade any real attempts to propagate the term ‘Hellenisation’ in its traditional form. In addition, the clearly varied interplay between the identifiable cultures demonstrates that the dichotomy between Hellene and non-Hellene is no longer useful when we consider how these communities actually interacted. It is likely that these settlements included Greek, Punic and Iberian elements that fostered their own collective identity; something shown through their coins. Therefore, such mixed communities cannot fit into the dichotomy propagated by traditional Hellenisation. A similar interplay between Greek and Iberian cultural forms can also be seen in the burial practices in the cemeteries on the outskirts of ancient Emporion from the early second century BC onwards.[42] This, like the conclusions drawn from the numismatic evidence, demonstrates the creolized nature of the Greek colony and is testament to either direct influence from neighbouring Iberians, or at least the adoption of some Iberian elements into the population, which would in turn lead to cultural influences. 
Moreover, with regards to Iberian settlements we can also identify a variety of influences on burial practices at different times that demonstrate more than just the Hellenistic influences. For example, the second- and first-century burials at the cemeteries of El Tolmo de Minateda in Albacete and at Vilajoyosa in Alicante show possible Roman influences, in that the cemeteries were arranged along roadways.[43] Equally, other regional Iberian influences can be detected in southern Catalunya and northern Pais Valenciano, where the stelai include Iberian inscriptions and resemble similar monuments from lower Aragon.[44] This is again testament to the truly eclectic nature of the material culture and shows that by analysing such evidence over the literary sources we can begin to remove the Greco-centric shackles that have inaccurately shaped our understanding of cultural exchange in Iberia, as well as the wider western Mediterranean. Iberia had been subject to Hellenic influences for some centuries before the establishment of the Hellenistic kingdoms in the east.[45] Keay highlights that “Phoenician and Greek colonies in the south and north-east were instrumental in the circulation of a complex blend of eastern Mediterranean ideas and imports among indigenous communities from at least the seventh century BC onwards, which together with pre-existing indigenous Late Bronze Age traditions, formed the cultural context out of which the Iberians were to develop in the course of the sixth and fifth centuries BC onwards”.[46] From a scholarly perspective, this amalgamated cultural identity reached its height during the Hellenistic Period, as all of the cultural influences identified above become clear within the material record. However, with Iberia we must continue to consider this longer development whilst also focusing on the native elements that persisted. 
As such, we should not see the Iberians as a ‘Hellenised’ culture that was somehow subject to renewed Greek cultural influences in the Hellenistic Period. As demonstrated, despite various limitations in the archaeological record, there is plenty of material evidence which we can use to represent this peripheral culture in its own right. The Iberians used external factors for the development of their own regional identities and must therefore be considered culturally independent from the Hellenistic east.

IV. ‘Hellenisation’ in Context

The case studies presented in this research show the comparisons between two distinct Mediterranean cultures, as well as highlighting the similarities and differences between two aspects of these societies: the elite reception of external and internal cultural influences, and that of the wider non-elite population. Although there would have been socially elite factors involved in the numismatic and burial evidence from Iberia, the depictions on coins served as a wider method for a society to transmit its self-conceptualised identity, and the burials were also representative of wider societal patterns. As a result, it is not important for the scope of this research to identify the elite factors involved in these elements when compared to the royal monuments in Numidia. What is clear, however, is that with both the elite reception and that of the wider population, there were intense levels of creolization in terms of cultural traits as well as physical artistic influences within the material record. Although they served to facilitate the transmission of the identity of groups of differing social status, their motives nonetheless show a variation on a theme. In Numidia we see the elites adopting a range of foreign techniques in order to further legitimise their power and authority to those whom they ruled. 
Quinn characterises this when she claims that “none of this seems to be about exclusive cultural or ethnic identity, but rather about the exploitation of real symbolic sources of power”.[47] Equally, in Iberia we can identify various Greek and other external influences being adopted by a varied but ultimately limited group of Iberian communities situated near the foreign colonies. Here, this may not be so much about power as about facilitating commercial and other interactions with the Greek and Punic colonies on the peninsula. It is because of these adoptions of Hellenistic customs and influences on their material culture, combined with the literary origins of much of what we historically know about these peoples, that traditional scholarship has seen them as being directly subjected to the process of Hellenisation. ‘Hellenisation’ as a term in its simplest, unadulterated form means the transfer of Greek culture to a non-Greek people, and may remain useful when discussing the cultural processes that occurred in the vast tracts of land conquered by Alexander and the Diadochi, as well as possibly with regards to Greek influences deriving from earlier colonial efforts. However, the issue derives from its application in modern scholarship, which presents it as an overarching and dominating process that in turn does not represent the other perspectives involved. Keay highlights it as “…an asymmetrical term that privileges the Greek over other traditions, such as Carthaginian and Iberian, in the history of the Mediterranean, on the implicit basis that Greek cultural traditions were somehow superior and, as a consequence, the cultural standard to which peoples around the Mediterranean aspired”.[48] This, then, is where the weakness of the term lies when considering the actual complexities in both elite and non-elite spheres as highlighted in the case studies above. 
It has become unavoidably linked to a Greco-centric perspective, and the term ‘Hellenisation’ cannot continue unless we also intend to consider terms such as ‘Iberianisation’ or ‘Numidianisation’ as equally significant processes. It is clear from the archaeological evidence from Numidia and Iberia that there were significant elements of ‘native’ influences on the Greeks, as well as other external factors, such as the Punic, which influenced both the ‘natives’ and the Greeks alike. The use of the term has created a presupposed dichotomy between Hellene and non-Hellene, and as a result does not allow for sufficient representation of the other perspectives in the area of cultural exchange and change in the Mediterranean. Keay raises another issue with the term, in that it “…wrongly decentralizes dialogues about cultural changes away from key local and regional issues towards traditions germane to the eastern Mediterranean, converting peoples such as the Iberians into passive respondents at the western periphery of an eastern dominated oikoumene”.[49] The dichotomy highlighted here that limits the use of the term derives from the assumption that these ‘peripheral’ cultures fit with the cultural standards of the east. I continue to use the term ‘peripheral’ in this paper for convenience, as I do with the terms ‘Hellenistic’ and ‘Hellenisation’, yet we must move past the notion that these cultures were somehow peripheral to a centralised Greek world and instead represent the Mediterranean as a complex network of cultures and contacts. In reality, the material evidence shows increased levels of cultural independence among these peoples and a continuation of local traditions that show the term ‘Hellenisation’ to be thoroughly outdated by the standards of modern scholarship. 
Identifying a range of ‘-isations’, whether they focus on the Numidians, Iberians, Carthaginians or any other Mediterranean culture, would be too fragmentary an approach, and would not represent the Mediterranean as the basin of interconnectivity that it was. We must instead look to other terms to explain the processes of cultural change and the breaking-down of cultural barriers that occurred during the Hellenistic Period. I alluded to the work of Versluys in the first section of this paper, and it is the methodology presented by him that inspired me to apply it to the Hellenistic Period. As well as advocating the sole use of material evidence due to the limitations of the literary sources, it was Versluys who saw similar limitations in the way the term ‘Romanisation’ has been used by colonial and post-colonial scholars, in that it propagates a dichotomy between the “Roman and Native” whilst doing little to represent the immense regional variability, as well as how these ‘native’ cultures influenced the Romans.[50] Cultural change in the Roman Empire varied massively from province to province, and region to region, owing to different customs and motives among their elites, as well as their wider populations and their distinct material cultures and histories. However, the traditional scholarship remains content with advocating one homogenous model. Equally, with regards to the Mediterranean during the Hellenistic Period, many scholars remain content to advocate the homogenous model of ‘Hellenisation’ when in reality the material evidence suggests increasingly complex processes. What I suggest here, following Versluys, is that the term ‘globalisation’ would serve as a better alternative to ‘Hellenisation’, in that it accounts for other aspects of cultural influence whilst removing the unwarranted Greco-centric perspective that the Hellenes culturally dominated the Mediterranean. 
‘Globalisation’ would lay the conceptual framework for considering the Mediterranean as an interconnected whole not dominated by a single culture and would also begin to make a case for focusing on ‘Mediterranean’ history and culture as a wider field, rather than the predominating classical focus on Greece, and subsequently Rome. The conquests of Alexander opened up the world in a way that had never been seen before. However, although they allowed the spread of Greek culture further than before, they also ushered in a period of willingness among Hellenistic monarchs and Greeks settled around the Mediterranean to adopt many other customs and cultural features. Yet despite this, we continue to consider the cultural distinctions originating in the historiography of the preceding period as adequate to explain a new age of interconnectivity and complex networks of cultural exchange. Unprecedented eclecticism was a feature of most of the major Mediterranean cities during the Hellenistic Period, as was the variety of cultures and peoples they contained. William Minter recently completed an MLitt in Classics at the University of St. Andrews. Notes: [1] E.S. Gruen, ‘Greeks and non-Greeks’, in G.R. Bugh (ed.), The Cambridge Companion to the Hellenistic World (Cambridge: Cambridge University Press, 2006); C. La’da, ‘Ethnicity, occupation and tax-status in Ptolemaic Egypt’, Egitto e Vicino Oriente, Vol. 17 (1994) [2] J.C. Quinn, ‘Monumental power: ‘Numidian Royal Architecture’ in context’, in J.R.W. Prag and J.C. Quinn (eds), The Hellenistic West: Rethinking the Ancient Mediterranean (Cambridge: Cambridge University Press, 2013), p. 192. [3] Isocrates, Panegyricus 187; Letters to Philip 103. [4] Plutarch, Moralia 328 d-e. [5] R. Morkot, The Penguin Historical Atlas of Ancient Greece (London: Penguin Books, 1996), p. 111; P.M. Fraser, Cities of Alexander the Great (Oxford: Clarendon Press, 1996), p. 176; A.B. 
Bosworth, Conquest and Empire: The Reign of Alexander the Great (Cambridge: Cambridge University Press, 1988), p. 247; N.G.L. Hammond, Alexander’s Newly-Founded Cities (1998), p. 253. [6] Polybius, The Histories 1.3.3-6. [7] J. Droysen, Geschichte des Hellenismus (Hansebooks, 1878) [8] F. Coarelli and Y. Thébert, ‘Architecture funéraire et pouvoir: réflexions sur l’hellénisme en Numidie’, MEFRA, Vol. 100 (1988), p. 811. [9] S. Keay, ‘Were the Iberians Hellenised?’ in Prag and Quinn, The Hellenistic West, p. 300. [10] Ibid., p. 301; cf. García y Bellido, Hispania Graeca (Barcelona, 1948); M. Almagro-Gorbea, ‘L’Hellénisme dans la culture ibérique’, in Akten des XIII. Internationalen Kongresses für Klassische Archäologie (Berlin, 1988); O. Jaeggi, Der Hellenismus auf der iberischen Halbinsel. Studie zur iberischen Kunst und Kultur: Das Beispiel eines Rezeptionsvorgangs. (Mainz, 1999) [11] Quinn, ‘Monumental power’, p. 179. [12] Keay, ‘Iberians’, p. 305. [13] M.J. Versluys, ‘Understanding objects in motion. An archaeological dialogue on Romanisation’, Archaeological Dialogues, Vol. 21, No. 1 (2014), p. 1; Webster and Cooper 1996; Mattingly 1997. [14] Quinn, ‘Monumental power’, pp. 190-191. [15] Coarelli and Thébert, ‘Architecture funéraire et pouvoir’, p. 811. [16] Quinn, ‘Monumental power’, p. 187. [17] Coarelli and Thébert, ‘Architecture funéraire et pouvoir’, p. 812 and p. 815; cf. Quinn, ‘Monumental power’, p. 187. [18] G. Camps, ‘Modèle hellénistique ou modèle punique? Les destinées culturelles de la Numidie’, in Actes du III congrès international des études phéniciennes et puniques, Tunis, 11-16 novembre 1992 (Tunis, 1995); B.D. Shaw, ‘A peculiar island: Maghrib and Mediterranean’, in Irad Malkin (ed.), Mediterranean Paradigms and Classical Antiquity (London, 2005), p. 125 [19] S. Lancel, Carthage: A History. Trans. by A. Nevill (Oxford, 1995), p. 307; L. 
Poinssot, Les ruines de Dougga (Tunis, 1958), p. 59. [20] Lancel, Carthage, p. 307. [21] Lancel, Carthage, pp. 305-14. [22] Quinn, ‘Monumental power’, p. 179. [23] Quinn, ‘Monumental power’, p. 181. [24] Quinn, ‘Monumental power’, p. 211. [25] S. Gsell, Histoire ancienne de l’Afrique du Nord, Vol. VI, p. 262. [26] G. Camps, Aux origines de la Berbérie. Monuments et rites funéraires protohistoriques (Paris, 1961), p. 200; Camps, ‘Modèle hellénistique ou modèle punique?’, p. 247; A. Krandel-Ben Younès, La présence punique en pays numide (Tunis, 2002), pp. 100-2. [27] Quinn, ‘Monumental power’, p. 204. [28] R.C.C. Law, ‘North Africa in the period of Phoenician and Greek colonization, c. 800 to 323 BC’, in J.D. Fage (ed.), The Cambridge History of Africa (Cambridge, 1979), p. 126. [29] Ibid., p. 128. [30] Ibid., p. 133. [31] Ibid., pp. 131-132. [32] Quinn, ‘Monumental power’, p. 210. [33] Keay, ‘Iberians’, p. 314. [34] Dionysius, Roman Antiquities 1.50.2-3; Silius Italicus, Punica 1.273-753; Strabo, Geographica 4.6. [35] L. Villaronga, Corpus nummorum Hispaniae ante Augusti aetatem (Madrid, 1994); J. Untermann, ‘La latinización de Hispania a través del documento monetal’, in M.P. García-Bellido and R.M. Sobral Centeno, La moneda hispánica ciudad y territorio (Madrid, 1995); cf. Keay, ‘Iberians’, p. 307. [36] Ibid., p. 162. [37] Keay, ‘Iberians’, p. 308. [38] Ibid., p. 303. [39] X. Aquilué, P. Castanyer, M. Santos, J. Tremoleda, ‘Greek Emporion and Roman Republican Empúries’, in L. Abad Casal et al. (eds), Early Roman Towns in Hispania Tarraconensis (Portsmouth, 2007), pp. 21-4. [40] Ibid., pp. 21-4. [41] Villaronga, Corpus nummorum Hispaniae, pp. 140-51. [42] López Borgoñoz 1998: 276-87. [43] L. Abad Casal, ‘El tránsito funerario’, pp. 75-100. [44] F. Beltrán Lloris, ‘Writing, language and society: Iberians, Celts and Romans in northeastern Spain in the 2nd and 1st centuries BC’, BICS, Vol. 43 (1999), pp. 139-40. [45] Keay, ‘Iberians’, p. 302. [46] Keay, ‘Iberians’, p. 301. 
[47] Quinn, ‘Monumental power’, p. 210. [48] Keay, ‘Iberians’, p. 318. [49] Keay, ‘Iberians’, p. 318. [50] Versluys, ‘Understanding objects in motion’, p. 1, p. 7, p. 12 and pp. 14-15.
- Auferstehung der Idee: The Role of Ideas in Karl Marx and their Neglect in Friedrich Engels
Introduction Karl Marx wrote extensively about history, but he never published, at least in his own lifetime, a theory so comprehensive as the ‘Historical Materialism’ of his followers. A Bolshevik inheritance and an expression more attributable to Friedrich Engels than Marx,[1] ‘Historical Materialism’ posits a strictly material road to communism, a ‘scientific’ inevitability. Or, as Isaiah Berlin sees it, a ‘half-positivist, half-Darwinian’ interpretation of Marx.[2] Looking more closely at what Marx wrote, however, reveals, in Karl A. Wittfogel’s words, a work that ‘[is] far more complex than, and profoundly different from, the socio-historical views offered by the Soviet ideologists.’[3] The crux of the profound difference is Marx’s use of ideas, both as a methodology and as a causal variable internal to his analysis. This essay does not attempt to piece together a theory of history that is true to Marx. What it does is argue that it is a mistake to neglect Marx’s idealism in toto and that a true interpretation of his work reflects how he wove together the theoretical remnants of his Young Hegelian past with his materialist critique of the very circle that gave him his intellectual stripes. Indeed, the essay goes so far as to argue that a certain metaphysic is an indispensable frame for Marx’s historical analysis. Section I looks at Marx on alienation, a concept directly borrowed from the Young Hegelians, together with some comments about alienation’s presence throughout Marx’s later manuscripts. This section argues that a study of alienation can shed light on Marx’s use and interpretation of abstraction in other works and, on its heels, section II provides a cross-examination with The German Ideology, along with further comments about the role this work should play in Marx’s oeuvre. In section III, Engels and Socialism: Utopian and Scientific are taken to task for neglecting the insights of sections I and II. Section IV concludes. I. 
Alienation Marx’s theory of alienated labour was an important conceptual device for his own understanding of capitalist exploitation and serves as a lasting residue of German Idealism in his work. Broadly speaking, ‘alienation’ is the gradual separation and eventual split between a subject and an object, both of which initially formed a composite whole.[4] A linchpin of Left Hegelianism, the term was used by the likes of Ludwig Feuerbach and Max Stirner who, despite opposed conclusions, both defined religion through it as a type of self-deception, an unconscious ignorance of one’s human nature.[5] Marx sought a more material basis for alienation, namely production, from which ‘human nature’ could be treated more thoroughly. In the Economic and Philosophical Manuscripts, men, or ‘workers,’ are the clear subjects. Discerning the object is more complicated and Marx indicates four, each initially more closely tethered to the worker than the last, with the object-subject distinction even blurring at times. Firstly, workers are alienated from the direct products of their labour. For Marx, ‘nature affords the means of life for labour’ in a double sense: by providing objects on which the worker can perform a physical activity and by fulfilling subsistence needs. There is a pernicious duplexity to these two activities. Because one’s labour is necessary for converting natural miscellany into consumable, subsistence-satisfying goods, the mere act of providing for one’s health in fact defines man’s work qua ‘labour.’ The worker becomes ‘a slave to his object,’ as only his laborious activity can maintain his physical being and vice-versa. In other words, the fruits of his labour have an external power over the worker, a relationship Marx refers to as ‘alienation.’[6] Secondly and in addition to the direct products of his labour, the worker becomes alienated from the actual activity of production itself. 
Labour is not the essence of the worker; there is nothing inherent to man that pigeonholes him to the role of labourer. All labour then is forced production and exemplary of the worker’s deploying ‘no free physical and intellectual energy.’ His activity is directed against his will and this extrinsic sway, as with the objects he produces, essentially separates the worker from his productive activity.[7] Third, the worker is alienated from his humanity, made up of his own, personal essence and of his relations to fellow man. Marx here differentiates between ‘vital activity’ and ‘conscious vital activity,’ with purely vital activity the stuff of animals, the productive use of physical objects for purely existential needs. The particular form that vital activity takes for a species contributes to that species’ characterisation. Humans appear to have something that goes beyond the purely material, a more cerebral activity that differentiates them from other animals: conscious vital activity. Animalistic productivity is inextricable from survival and progeniture, while man has the freedom to apply his productive capacities in domains that transcend the merely vital, an ability to contrive and reflect on his own world, creating for the sake of, say, ‘the laws of beauty.’ When the product of his labour is wrested away from him, so too are the ‘nature and the intellectual faculties of his species,’ his creative independence, the very cognitive traits that make him human.[8] Marx’s theory of alienated labour incorporates ideas in two respects, one methodological and the other internal to the actual theory. Methodologically, the above is in itself idealistic and serves to frame, abstractly, discussion of political economic terms rooted in material history. 
In Marx’s own words, alienated labour is ‘a fact of political economy,’ but, nonetheless, ‘a fact expressed…in conceptual terms.’[9] It is only now that alienated labour has been explained conceptually that Marx can discern how it ‘represent[s] itself in reality’ and its relationship ‘to the development of human history.’ Since only man can enjoy the fruits of man’s labour, it must be that the alien force arrogating the worker’s product, productive activity and humanity, is another man. The external, alienating force, an idealistic space throughout most of Marx’s account, is now filled by a physical body, ‘the capitalist, or whatever one wishes to call the master of the labour.’[10] The inequitable capitalist-worker relationship is, of course, a historical development. But there is an element of continuity as well, one which Marx propounds with a second, this time endogenous application of ideas. Here ideas feature as a historical gel that allows the dominant class, whether consciously or unconsciously, to uphold its dominant position, with political economy as a discipline having a particularly strong influence. Private property and wages, conceptual staples of political economy, are said to be ‘explained by exterior circumstances,’ with no reference to their historical development. Competition and free trade, for Marx the inevitable result of a historical process that gave rise to ‘monopoly, corporations and feudal property,’ are, according to political economy, ‘natural laws’ and the results of ‘fortuitous circumstances.’ True, Marx refers to his own theory of alienation as a ‘law,’ but unlike the political economist, who ‘presupposes as a historical fact what he should be explaining,’ Marx’s theory is developmental, explicated first and historical later.[11] II. The Ideology Marx’s alienation theory made but one appearance among the works published in his lifetime: a short section in The Holy Family (1844). 
Alienation’s meagre presence in the works Marx published during his lifetime is remarkable considering its far greater prominence in his posthumous work, appearing in the Manuscripts (1844; source for the example above), the Grundrisse (1857-8), Theories of Surplus Value (1863) and Results from the Immediate Process of Production (1864). Louis Althusser famously postulated an epistemological break away from Feuerbach and idealism in Marx’s writings, marked by The German Ideology and Theses on Feuerbach, both written in 1845. Yet alienation, an idealist inheritance, finds a place in three manuscripts written post-1857, Althusser’s proposed ‘mature,’ ‘scientific’ period for Marx, including two (Surplus Value and Process of Production) well on the mature side of Marx’s supposed ‘transitional’ period of 1845-57.[12] Nor should we doubt the importance Marx attributed to his unpublished notes and manuscripts, at least insofar as they helped clarify his thinking. With regards to Ideology, Marx wrote in 1859 that, despite remaining unpublished, the work achieved its ‘main purpose – the clearing up of the question to ourselves,’ referring to his and Engels’ ideological rift with post-Hegelianism.[13] While it is impossible to prove that Marx would have been satisfied with the form in which the aforementioned manuscripts were published (he referred to the Grundrisse as a ‘real hotchpotch’[14]), they are at the least, and by his own admission, revealing of his thought process. One significant aspect of Ideology is its treatment and adoption of ideas in a manner redolent of alienation theory. 
Marx and Engels begin their section on Feuerbach with the premise, consistent with species-specific vital activity, that the way in which men produce depends firstly on the nature of the actual means with which they produce, and that ‘what they are’ depends on both ‘what they produce and how they produce.’[15] Once a general characterisation has been made, they move to slightly more concrete claims. A nation’s internal structure, they say, depends on the stage of development achieved by production within it, which, in turn, depends on the extent to which the division of labour has grown throughout that country’s history. From there the examples become more directly historical. Since the division of labour determines the productive relationship between individuals, whatever stage of development the former reaches corresponds to a particular form of ownership. Hence human history has progressed from tribal ownership, at an ‘undeveloped stage of production,’ to ancient communal and State ownership, at which point private property creeps in but ‘as an abnormal form subordinate to communal ownership,’ to feudal or estate-property, when a ‘hierarchical system of land ownership’ gives noblemen power over serfs.[16] Though their ‘premises can…be verified in a purely empirical way,’ Marx and Engels nonetheless felt the premises should come before the history. Still, even after the premise is established, the actual historical examples are held in abeyance until a clearer picture of the general historical process emerges as an outline. It is easy to miss the abstractions in Ideology since any notion of the authors’ using idealism becomes a suspect proposition when the work’s polemical tone is taken at face value. There is no shortage of ad hominems for its idealist antagonists, Feuerbach notwithstanding, and the style has disappointed some of Marx’s followers (Franz Mehring judged portions of the work as ‘puerile’)[17]. 
But accepting the histrionics is the first step to overcoming them: once one realises that the work has a target with which it is ostensibly unwilling to compromise, a compromise can nonetheless be gleaned. It is not so much that abstraction has ‘no value whatsoever,’ but that it has no value when ‘viewed apart from history.’ In fact, Marx and Engels seem unable to talk actual history until laying out their abstract premises. Before the subsection on history, Marx and Engels preface by ‘select[ing] here some of these abstractions, which we use to refute the ideologist, and shall illustrate by historical examples.’ Here is the only explicit admission of a philosophical, framing procedure, yet it is the very procedure implied by the example in the previous paragraph. Indeed, ‘facilitating the arrangement of historical material’ is about as much as the authors are willing to grant to philosophy, but their denigrating language belies how essential a recourse to such an arrangement actually is for them.[18] Furthermore, as in the Manuscripts, the work confers a central, moulding role for ideas within its abstract framework. Marx and Engels here take aim at the idealists who, content to descend ‘from heaven to earth,’ ignore the fact that ‘what men say, imagine, conceive,’ are ‘sublimates of their material life-process.’[19] It is real men who produce ideas and, as it is their material circumstances and forms of production that determine what men are, so too can a causal tether be drawn from the material to the ideal. Marx and Engels can then account for the material genesis of ‘the whole mass of theoretical products and forms of consciousness, religion, philosophy, ethics, etc.’[20] Far from just pointing out ideas’ material basis, Marx and Engels give the intellectual sphere a causal function as well, at least once the ideas gush out of their material wellspring. 
They want to expound ‘the reciprocal action’ of ideal and material.[21] The ruling class, or, rather, their dominant position, is a historical phenomenon, an inevitable result from the expansion of the division of labour and the eventual development of its concomitant: private property. With a material hierarchy established, the ideas start to flow and the ruling class, because it has control over the means of material production, also controls ‘the means of mental production’ to which ‘the ideas of those who lack the means of mental production are subject.’ These ‘conceptive ideologists’ crystallise their hold over the means of production by instilling in the proletariat an ideal version of the very top-down relationship to which they are subject. Throughout history dominant classes have advanced their ideas as ‘rational [and] universally valid,’ ‘attribut[ing] to them an independent existence’ worthy of natural law. Their inferiors (i.e. the proletariat) are mere passive receptors, too active in the production process ‘to make up illusions and ideas about themselves.’[22] III. Engels’ Strict Materialism Terrell Carver begrudgingly claims that Ideology is an indispensable source for later Marxists,[23] a sentiment echoed by Gareth Stedman Jones.[24] Both attribute to it and Engels the underpinnings of a more strictly materialist Marxism than would have gratified the movement’s namesake. It is surprising, then, that some of Engels’ later works differ so starkly from Ideology, the manuscript of which he co-authored, as well as from alienation theory. This section proposes to examine some of these differences with reference to Socialism: Utopian and Scientific. According to Engels, the causes of any social change or political revolution that will disrupt contemporary ownership relations ‘are to be sought, not in men’s brains,’ but exclusively in changes to the mode of production. 
He conceives of the conflict between producers and owners, along with the eventual rupture of the system maintaining their disproportionate relations, as occurring ‘independently of the will and actions even of the men that have brought it on.’[25] In addition to the scientific jargon and talk of an independent force, a point scrutinised in more detail below, Engels’ remarks are notable for effectively depriving the proletariat of any determinant agency. On revolution, Ideology speaks somewhat similarly but with an important, subtle difference. Here, two determining elements are presented as necessary before an overthrow: ‘the existence of productive forces’ and ‘the formation of a revolutionary mass.’ While revolution is ‘immaterial’ if these conditions are not met, it does not follow that a force acting independently from the revolutionaries is a sufficient condition for rupture,[26] and there is no suggestion that reaching a certain industrial stage is enough to force a revolution on its own. Looking at the section on alienation in The Holy Family (also co-authored with Engels), further differences on the same point abound. Here, proletariat and private property are taken as opposites and so parts of the same whole, their relationship a parasitic one. The antithetical nature of their coexistence is reinforced by the motivations the relationship impels, with private property ‘compelled to maintain itself, and thereby its opposite, the proletariat,’ while the proletariat ‘is compelled to abolish itself and thereby its opposite.’[27] What is crucial is the personal language, personal in that it lays a possessive onus of liberation on the proletariat and emotively decries the grievance that forebodes its insurgence. 
The only remedy for alienated labour is victory, but not the fated, hands-off victory Engels suggests in Socialism, ‘for [the proletariat] is victorious only by abolishing itself and its opposite … the proletariat can and must free itself’ (emphasis mine). In other words, victory is the toil of the victorious. Furthermore, alienated labour is ‘an indignation,’ a sign of ‘powerlessness,’ a ‘semblance of human existence’ that makes one ‘feel annihilated.’[28] It is difficult to see why such an emphasis on the emotional backlash of alienation is relevant if revolution is an inevitability existing outside the mind of the revolutionary. Carrying on with his scientific language, Engels characterises production as subject to ‘inherent laws’ that reveal themselves through social relations and ‘affect the individual producers as compulsory laws of competition.’[29] Competition and its ascription to inexorable laws are the very political economic concepts Marx derides as the intellectual gamesmanship of the capitalist. To add universality to his argument, and so edge further into the ruling class academic territory Marx saw as essentially capitalist, Engels chalks up the extra competition from global markets to ‘the Darwinian struggle of the individual for existence transferred from Nature to society.’[30] Engels’ theoretical synthesis is peculiar in light of evidence that Marx viewed comparisons of himself to Darwin unfavourably. Marx disagreed with Darwin’s portrayal of progress as resulting from environmental contingency, claiming that history is driven by man’s conscious life activity and a manipulation of nature for human ends, effectively inverting the direction of influence chosen by Darwin.[31] As if to accentuate his own misunderstanding of history’s developmental nature and the effect these developments have on man’s thinking, Engels manages to grant the workers a prescience that Marx never could. 
For as long as the capitalist mode of production has existed, Engels says, ‘the appropriation by society of all the means of production has often been dreamed of…as the ideal future.’[32] Yet one of the very reasons for which the capitalist system has had staying power is that the labourer, because of the extent of his physical exertions, has hitherto passively received bourgeois ideas about the fortuity of capitalism. Revolutionary sentiment has not been pent up, at least not in the form Engels describes, replete with the will to appropriate the means of production, an oddly specific remedy for an ill that cannot be so immediately understood. Marx and Engels’ divergence is not so surprising when contextualised by the two authors’ differing reflections on Ideology. As noted above, Marx regarded the manuscript somewhat favourably as late as 1859, mentioning it along with The Poverty of Philosophy as an early theoretical presentation relevant to his more strictly economic work.[33] Engels, on the other hand, in 1886, referred to it as ‘incomplete,’ ‘unusable,’ and exemplary of ‘how incomplete our knowledge of economic history was at the time.’[34] Though rare, disagreements between Marx and Engels did occur during their lifetimes, for instance the latter’s initial, commendatory reaction to Max Stirner’s The Ego and Its Own. In a letter to Marx, Engels extolled Stirner’s egoism, going so far as to say that ‘we are communists out of egoism,’ ultimately reining in his position to agree with Marx’s more unfavourable reception.[35] The exchange, if anything, exemplifies the possibility for theoretical differences, and unfortunately for us any middle ground that could have been reached between the two about Ideology is precluded by Engels’ having written his obloquy on it three years after Marx’s death. Considering Engels’ enthusiastic appraisal of Theses on Feuerbach illuminates further contradictions. 
For Engels, the Theses was not just more useful than Ideology as a differentiator between his and Marx’s materialism and that of Feuerbach (both were written in the same year), but ‘the brilliant germ of a new world outlook.’[36] Theses VII and VIII are particularly relevant, the former criticising Feuerbach for failing to see that religion is a ‘social product,’ the latter claiming ‘all mysteries which lead theory to mysticism find their rational solution in human practice.’[37] There is a cruel irony in Engels’ adhering so confidently to his materialism that he deprives the proletariat of their role in bringing about revolution and practically ratifies a passive attitude to economic law, in effect a mystical resignation. Furthermore, thesis VII merely suggests that the ideal (in this example, religion) is no causa sine qua non and that any reference to the ideal needs to be materially substantiated, but the extra step of dispensing with the ideal altogether, which step Engels takes by neglecting the ‘mind’ of the proletariat, is never taken here by Marx. IV. Conclusion As divine justice would have it, the year of Karl Marx’s death, 1883, was also the year that birthed the Emancipation of Labour Group in Geneva, Switzerland, the first ‘Marxist’ movement in Russia and an important precursor to the Social-Democratic Labour party that would later split into Menshevik and Bolshevik factions. The Bolsheviks eventually formed the Communist Party that founded the Soviet Union in 1917, the most significant movement to ever champion Marxist ideals.[38] The two greatest conduits between Marx and early Marxism, says Carver, are the Ideology, noted for its ‘scientific’ value, and Friedrich Engels’ staunch dissociation between materialism and idealism.[39] Closer examination of the Ideology, as it was published, reveals subtle inconsistencies with Engels’ late work that mystify the relative influence of the two. 
While it is true, given Carver's indictment of Ideology as only the specious word of Marx, that it is a stretch to interpret the work 'as an integral whole,'[40] real value can be gleaned from a cross-examination with Marx's alienation theory, which he formulated early on and which stayed with him later in manuscript form. Cross-examination reveals not a split, but a synthesis of material and ideal, an outlook veiled perhaps by the Ideology's unfortunately sardonic style. Isaiah Berlin best captures the difference between Marx and Marxism and how Marx's nuance was the key casualty in the intellectual transferral between the two. Per Berlin, Marx did not set out to create 'a new philosophical system so much as a practical method of social and historical analysis.'[41] Much that Marx wrote is thus a base methodology that he could later draw from when confronted with practical issues. But the point is to fit the method to the problem, not to write up an irrefutable, facile system that can conveniently address any issue before it even materialises. There is no such subtlety in 'Marxism,' which glosses its passivity over with the moniker of 'science' and hangs its hat on providential inevitability. Leandro Vargas Llosa has just completed an MA in European History at University College London (this essay was written during his time at the university). Notes: [1] Gareth Stedman Jones, Karl Marx: Greatness and Illusion, 1st edition (London: Penguin Random House, 2017), p. 191. [2] Isaiah Berlin, Karl Marx, ed. by Henry Hardy, 5th edition (New Jersey: Princeton University Press, 2013), p. 114. [3] Karl A. Wittfogel, 'The Marxist View of Russian Society and Revolution,' World Politics, 12 (1960), pp. 487-508 (p. 487). [4] Jonathan Wolff and David Leopold, Karl Marx (2021), The Stanford Encyclopedia of Philosophy <https://plato.stanford.edu/entries/marx/> [accessed 2 January 2021]. [5] Douglas Moggach, 'German Idealism and Marx,' in The Impact of Idealism, ed.
by Nicholas Boyle, Christoph Jamme, Liz Disley and Ian Cooper (Cambridge: CUP, 2013), pp. 82-107 (pp. 83-84). [6] Karl Marx, Karl Marx: Selected Writings, ed. by David McLellan, 2nd edition (Oxford: Oxford University Press, 2000), p. 87. [7] Ibid., pp. 88-89. [8] Ibid., pp. 89-91. [9] Ibid., p. 91. [10] Ibid., pp. 93-94. [11] Ibid., pp. 85-86. [12] Louis Althusser, For Marx, trans. by Ben Brewster (London: New Left Books, 2005), pp. 34-35. [13] Karl Marx, A Contribution to the Critique of Political Economy, trans. by Nahum Isaac Stone (Chicago: Charles H. Kerr & Co., 1904), Author's Preface. [14] As cited in Gareth Stedman Jones, p. 377. [15] Karl Marx and Friedrich Engels, The German Ideology: Parts I & III, trans. by Roy Pascal (Connecticut: Martino Publishing, 2011), p. 7. [16] Ibid., pp. 8-11. [17] As cited in Terrell Carver, '"The German Ideology" Never Took Place,' History of Political Thought, 31 (2010), pp. 107-127 (p. 118). [18] Karl Marx and Friedrich Engels, German Ideology, pp. 15-16. [19] Ibid., p. 14. [20] Ibid., p. 28. [21] Ibid., p. 28. [22] Ibid., pp. 39-40. [23] Terrell Carver, p. 109. [24] Gareth Stedman Jones, p. 192. [25] Friedrich Engels, Socialism: Utopian and Scientific, trans. by Edward Aveling (The Leftist Public Domain Project, 2020), p. 44. [26] Karl Marx and Friedrich Engels, German Ideology, pp. 29-30. [27] Karl Marx, Karl Marx: Selected Writings, p. 148. [28] Ibid., pp. 148-149. [29] Friedrich Engels, Socialism: Utopian and Scientific, p. 51. [30] Ibid., p. 54. [31] As cited in Gareth Stedman Jones, p. 167. [32] Friedrich Engels, Socialism: Utopian and Scientific, p. 69. [33] Karl Marx, A Contribution to the Critique of Political Economy, Author's Preface. [34] Friedrich Engels, Ludwig Feuerbach and the End of Classical German Philosophy, trans. by Progress Publishers (Marx Engels Internet Archive, 1994), Foreword. [35] As cited in Gareth Stedman Jones, pp. 189-190.
[36] Friedrich Engels, Ludwig Feuerbach and the End of Classical German Philosophy, Foreword. [37] Karl Marx, Karl Marx: Selected Writings, p. 173. [38] Samuel H. Baron, 'Plekhanov and the Origins of Russian Marxism,' The Russian Review, 13 (1954), p. 38. [39] Terrell Carver, p. 122. [40] Gary K. Browning, 'The German Ideology: The Theory of History and the History of Theory,' History of Political Thought, 14 (1993), pp. 455-473 (p. 455). [41] Isaiah Berlin, Karl Marx, p. 112.
- Consumption as an analytical category for understanding the Safavid state and its society
Legend has it that coffee beans were first discovered in Ethiopia by a goat herder who had noticed his goats become more energetic and unable to sleep at night after consuming the dark-coloured 'berries' of a certain tree.[1] A more realistic reading of this tale, however, is that Ethiopia was home to trees of 'undomesticated coffee varieties', and that coffee was consumed as a foodstuff by indigenous Ethiopian tribes.[2] Yemen remains the territory where coffee production first took hold, making coffee one of the only commodities brought from neither the New World nor Europe. The name commonly employed in Persian for the beverage is qahva, which does not vary significantly from its Arabic source, qahwa, and is only slightly Persianised through differing pronunciation. Its entrance into Safavid society (1501-1722) is clouded in mystery, though many studies point to the domino effect caused by the development of coffeehouses in major Middle Eastern cities, Istanbul in particular. This essay argues in favour of the popularity of coffee in Safavid Iran, often downplayed because tea is nowadays considered the national drink of the country. Focused on consumerism in Safavid society, the paper is segmented into six sections. The first three attempt to understand how, where and by whom coffee was consumed in Safavid Iran and what can be deduced about Iranian society at that time. The fourth section considers the legal and religious aspects of and debates around coffee drinking, while another is focused on the place of Safavid Iran in a globalised world and what coffee imports can tell us about state policies and the actors at play. Finally, one section tackles issues found in scholarly literature, essentially covering the limitations posed by the nature of the topic studied and how new tools can be used to create a vaster repertoire for Safavid history.
First mentions of coffee in the Safavid world 'As is known, medicaments for Ibrahim (r. 1640-1648) were dissolved in coffee, in keeping with a frequent practice common in Ottoman medicine. In this context, Krusinski cites an anecdote about a concubine of the Persian Shah Ismail I Safavi (r. 1501-1524) who complained about her master's dependence on coffee and his consequent lack of interest in her.'[3] Matthee is quick to argue that at first coffee was widely consumed as a bitter but tasty beverage, and that it was initially known in Safavid Iran as a beverage with medicinal properties.[4] Goushegir establishes firmly that, aside from tea and tobacco, the first mentions of coffee have been found in treatises on medicine and pharmacopoeia of the early 16th century, describing it as a 'fully acknowledged' medical substance.[5] The quoted anecdote of Ismail's knowledge and consumption, apart from sounding like royal gossip, not only demonstrates how early coffee was introduced and consumed in royal circles, but also how conscious some were of its overall effects if consumed in excess.
As for the introduction of coffee within Safavid territory, credit is given to the Sufis, who supposedly 'quickly adopted coffee because it helped them stay alert during their nighttime devotions'.[6] 'As Marshall Hodgson points out, with characteristic perspicacity and precocity, the phenomenon of fascination with and popular recreational use of mind- or mood-moulding substances was "of special import for the growth of a human personality" in the Islamdom of the post-Mongol era.'[7] Kazemi explores the emergent popularisation of simultaneous consumerism, in particular the marriage of tobacco and coffee.[8] He argues that once the combination was created and consumed across most social classes in coffeehouses with an adequate setting, its addictive pattern contributed to its large-scale consumption, at least in Safavid urban centres.[9] To sum up, 'the trio of tobacco, coffee, and coffeehouses serve as markers of a great cultural change, the political and social unification of the eastern Mediterranean under Istanbul and the start of the modern age in the Middle East.'[10] The coffeehouses, observations and roles To put it simply, the very presence of coffeehouses — special places named after the drink and designed to enjoy it — is, I believe, very telling as a social phenomenon. Designing places dedicated to serving one main drink was quite the achievement for coffee.
Also commonly associated with a recreational space, the first coffeehouses in Istanbul appear to have been places of discussion and intellectual thriving: 'The first coffeehouse at Istanbul was opened in 962/1555 by two men from Damascus; it was frequented by so many dignitaries, writers, and poets that it became known as the "academy of scholars".'[11] Those two Syrians, Hakm and Shams, were identified by the Ottoman chronicler Ibrahim Pecevi.[12] The coffeehouse thus appears to be a place of Arab origin later adopted with great enthusiasm by the Turks.[13] The appeal does not end there, given that it is around the same period that coffeehouses started to emerge in Safavid Iran, showing the connectivity of the Ottoman-Safavid world: 'It is probable that in Persia the first coffeehouses appeared during the long reign of Shah Ṭahmāsb (930-84/1524-76), though there is no mention of them in the sources before the reign of Shah ʿAbbās I (996-1038/1577-1629), when several were opened in Qazvīn, Isfahan, and other cities.'[14] As for coffee on its own, 'no Persian chronicles refer to coffee until the 1590s. Similarly, none of the foreign travelers and merchants who visited the country in the sixteenth century makes any mention of coffee as either a trade commodity or a consumer item.'[15] However, the tone of certain merchants' accounts seems to change by the end of the 17th century, suggesting a drastic shift in consumption culture right in the middle of the 17th century, the peak of the Safavid period. The major illustration of this shift is the account famously written by French traveller Jean Chardin, who travelled in Iran between the 1660s and 1670s and left us with a description of the social aspect of a coffeehouse: 'These houses, which are big, spacious and elevated halls, of various shapes, are generally the most beautiful places in the cities, since these are the locales where the people meet and seek entertainment.
Several of them, especially those in the big cities, have a water basin in the middle. Around the rooms are platforms, which are about three feet high and approximately three to four feet wide, more or less according to the size of the location, and are made out of masonry or scaffolding, on which one sits in the Oriental manner. They open in the early morning and it is then, as well as in the evening, that they are most crowded… People engage in conversation, for it is there that news is communicated and where those interested in politics criticize the government in all freedom and without being fearful, since the government does not heed what the people say. Innocent games [...] resembling checkers, hopscotch, and chess, are played. In addition, mollas, dervishes, and poets take turns telling stories in verse or in prose. The narrations by the mollas and the dervishes are moral lessons, like our sermons, but it is not considered scandalous not to pay attention to them. No one is forced to give up his game or his conversation because of it. A molla will stand up, in the middle, or at one end of the qahvehkhaneh, and begin to preach in a loud voice, or a dervish enters all of a sudden, and chastises the assembled on the vanity of the world and its material goods. It often happens that two or three people talk at the same time, one on one side, the other on the opposite, and sometimes one will be a preacher and the other a storyteller.[16] The detailed descriptions of the traveller play in our favour here for our own interpretation of the topic. It is worth noting too that we would probably not get such accounts through the eyes of a local, since it came to be part of daily life. Noticeable details of this account include the extraordinary buildings forming the coffee houses, the recreational aspect and overall a place of escape from most public duties. The ‘Oriental manner’ here is to be considered with caution. 
We cannot in any way see it as an early form of 'Orientalism' as defined by Said. The idea referred to by Chardin here has more to do with the emergence in the West of a trendy adoption of, and westernised take on, certain Turkish and other Middle Eastern lifestyles or habits – 'Turkish coffee' or alla turca being one of them.[17] Kafadar — like other scholars — notes that 'the emergence and spread of coffeehouses in Istanbul (as well as Cairo, Aleppo and other relevant cities) coincided with various other dynamics and processes of the early modern era'. He continues by listing three major processes: new levels of urbanization with the rise of a bourgeoisie; increasing use of the night-time for socializing and entertainment; and the rise of new forms of entertainment.[18] Seeing how such places could impact daily life and the social understanding of classes truly is an important aspect that deserves more attention. Coffee drinking in Safavid Iran: features of consumption 'Drunk hot and without sugar from little china cups, it was often served with sweets and pistachio nuts, both of which enhanced its flavor, as Tunakabuni asserted.[19] Coffee, Kaempfer noted, was predominantly a winter drink; in the summer Iranians preferred sherbets.'[20] Intuitively, one could suspect that this described ritualisation around coffee drinking could only be accessed by a certain social class, given the fluctuating price of sweets and nuts. And even though Iran is currently among the world's top pistachio exporters, the nut has long been a commodity associated with wealth.
Relevantly, 'in the late Safavid chronicle Dastur-i shahriyaran, moreover, we read how on the occasion of the visit to Mashhad of the Mughal Prince Akbar in 1696 an official banquet was organized during which coffee, rosewater, and sweets were served prior to the appearance of 150 dishes of food and an equal number of plates and sweetmeats.'[21] To avoid any confusion, Muhammad Akbar (1657-1706) was the exiled son of Mughal Emperor Aurangzeb (1618-1707) who, after rebelling against his father for control of the Deccan, stayed and died in Mashhad. Royalty of any kind, therefore — even an exiled and fallen prince — would be given access to coffee, suggesting coffee was widely served in formal settings.[22] The qahva — and the ritualised drinking process — according to Matthee, travelled across the social spectrum but involved different preparation steps when it came to the royal court: 'Coffee consumption in Safavid Iran involved a wide range of the social spectrum, beginning with the royal court. Indeed, coffee was a fixture in the shah's very household, for a member of the royal retinue was invariably the coffee master, the so-called qahvahchi-bashi. The royal palace in Isfahan included a "coffee kitchen," in which the coffee consumed by the royal household was stored, roasted, and prepared under the supervision of this official.'[23] Realising that the court had a dedicated kitchen space exclusively reserved for coffee making is not only fascinating but mostly goes to show the royal consideration coffee had achieved. Shah Abbas (r. 1588-1629), for example, was one of the few Safavid kings known to have pushed consumerism forward for the promotion of the state.
He appears to have been a known devotee of coffee drunk at coffeehouses: 'the visits Shah `Abbas frequently paid to the coffeehouses of Isfahan are reflected in the accounts of numerous foreign visitors.'[24] Continuing with coffee service in Safavid times, 'Du Mans claimed that, after tobacco, coffee was the second item offered to guests in Iran.'[25] Yet Matthee notes that it would be hasty to believe coffee had penetrated most rural spheres by the end of the Safavids.[26] Kaempfer identified records of coffeehouses and coffee making in only a handful of villages in southern Iran, namely Shabaran, Imamzadah, and Asupas, mainly, once again, through travellers' accounts.[27] Additionally, 'by the mid-seventeenth century, coffee was a standard beverage in governing circles outside the royal court as well. Speelman in 1652 called coffee a "very common drink" in the country […].'[28] There is really little information on distinctively Iranian ways of making coffee, while Hattox, who compiled sources from across the Middle East, concluded that 'in preparing coffee, three basic sorts of materials were involved: water, ground coffee, and additives. Of the first there is little to say. None of the sources makes any mention of special considerations concerning the water used. As for additives, sugar was seldom if ever used, while milk was almost never added.'[29] These instructions coincide with what is known as Turkish coffee, the most mainstream way of consuming coffee in Safavid Iran and the Near East at large: just black with no sugar, creating a balance between the bitterness and the sweets or nuts eventually offered. Apart from unique Turkish coffee vessels like the cezve, there is no mention of similar utensils used in Iranian households in Safavid society either.
However, it is not difficult to assume that special pots made with local or non-local materials would have come to be used, especially given the importance of service in coffeehouses and in Iran generally. Safavid modes of coffee consumerism, debates around its prohibition 'Oh black-faced one whose name is coffee, killer of sleep, destroyer of lust.'[30] The ambiguity behind this Persian proverb is highlighted by the literary use of negative-sounding agent nouns, and ultimately it would be hard to grasp whether 'killing sleep' or 'destroying lust' were considered bad actions in the common imagination. Even though the idea of a prohibition on coffee drinking might sound relatively absurd given everything that has been discussed so far, it is not irrelevant to look at how Islamic law considered the consumption of such popular psychoactive substances. It is worth noting that 'while not an intoxicant like wine, coffee does indeed possess some noticeably stimulating properties, and can have profound effects, both mental and physical, on the drinker.'[31] Present in medicinal records and consumed in enclosed public spaces, the side effects of coffee were known – as in the anecdote of Ismail's alleged coffee overdose. Coffee having been adopted by the Sufis, it is interesting to see that 'from the tenth/sixteenth century, coffee consumption spread from Yemen northwards, mainly via the Sufis and their disciples, who claimed that drinking coffee helped their ritual activity. This caused an extended debate among the ulama of different schools, who viewed the Sufis' coffee drinking as a negative innovation opposed to the shari'a.'[32] Coffee drinking, in these terms, appears to have been a topic of argument between factions of Islamic experts and even a way to further discredit the Sufi orders. 'The supporters of forbidding coffee drinking were mainly ulama in official positions such as judges.'[33] It is typical in socio-political affairs to find religious tribunals seeking greater control.
However, as coffee became widespread, the lack of religious proof for its prohibition undermined such allegations. 'Due to the inability of the ulama to implement the prohibitions and religio-legal rulings, the authorities needed to become involved and they were asked to help the ulama combat this phenomenon.'[34] Coffee, unlike wine, is not explicitly mentioned in the Qur'an, presumably because it was undiscovered by the time the sacred text was written. However, 'literally, coffee's very name can explain part of the ulama's objections to the beverage. All medieval Arabic dictionaries are united in interpreting the word qahwa as signifying wine (khamr), which is forbidden by the Quran.'[35] The gaps found in penal Islamic law regarding the consumption of beverages like coffee or tea arguably left the Safavid state free to promote coffee drinking. Islamic law did not present a major obstacle in that case, nor did it for wine consumption in Safavid Iran, especially given the long-standing importance wine holds in Persian identity and culture.[36] Kazemi implies that, more than a prohibitive society, Safavid Iran as it evolved in time and space was, if not actively promoting, at least allowing for a daily-life culture centred around consumption and recreational activities: 'People often consumed these substances together or replaced one for the other.
Together these stimulants constituted a larger economic "drug complex" which had a crucial role in both the internal and external trade of Iran, and were as such central to the development of its early modern economy.'[37] Overall, and especially in the case of Safavid Iran, it appears that given the spread of coffee consumption into private spheres (houses), and the state-sponsored initiative behind such consumerism, implementing laws against a drink long considered to have medicinal properties and not explicitly mentioned in the Qur'an would not have been the smartest policy on the Safavid side. This is particularly revealing of the power exerted by the state, but also of how vocal the ulama could be on daily-life practice and consumerism. Gaps in literature and non-consideration of materiality Opting for what might come off as a narrow case study, the historical point of view on coffee consumption exposes certain gaps in modern Safavid scholarship and its consideration of material culture. Noticeably, aside from a handful of well-established Safavid scholars — most of them relying on each other's works — the study of coffee, and of consumption in general, is sparse, lacking, and definitely leaves a curious mind begging for more. The booming popularity and institutionalisation of coffee and coffee shops worldwide in our age should, regardless, spur interest in the history of this commodity. It would be particularly compelling for Middle Eastern and Iranian scholars to highlight the roots and development of the ritualisation of coffee drinking, given the struggles met by area studies in general for academic recognition. This issue, I believe, is the natural result of the traditional state-focused approach to history, from which Safavid history is not spared. The limited consideration given to cultural history, however entrenched, makes delving into consumerism all the more challenging.
Given how recently the active study and sponsorship of Safavid history began, it seems only natural that Iranologists struggle to handle new analytical tools, including materiality. Studying patterns of consumption through our limited scope of written sources also presents limitations, in that we are quickly drawn to conclusions: if medical records are the most widely available sources at our disposal, one would be tempted to conclude that most circles in Iran drank coffee to get past their seasonal cold, whereas we have clearly seen that coffee drinking was quickly recognised for what it is, a beverage with stimulating properties. 'The uneven distribution of sources makes it impossible to draw definitive conclusions about the relative popularity of coffee and tea in Safavid Iran. […] Yet contemporary sources suggest that, in terms of availability and volume of consumption, neither tea nor coffee matched water and sharbat […] and that, in terms of comparative popularity, tea in Safavid times was a distant second to coffee at least outside the northern region.'[38] Matthee and Kazemi appear to be among the most prominent English-speaking historians of Iran and the Safavids to tackle consumption patterns in Iranian elite and non-elite circles. Their will to fill these gaps does compensate for the lack of data available, as it gives them a bigger chance to showcase academic creativity and interpretation. Matthee, in particular, notes that 'ironically, whereas the introduction and dissemination of coffee in (northern) Europe is fairly well documented, until quite recently its history was much less well known in its west Asian lands of origin and early spread.'[39] He adds, in a paper focused on Iranian cuisine, that 'early modern Iran shares the underdeveloped state of research on consumption with many parts of the non-Western world.
Most of the (little) work done on Iran before the twentieth century involves production for export — silk and opium, most prominently — rather than consumption. Food — its origins and ways of preparing and consuming it — has received some attention, albeit far less than Persian cuisine, recognized as one of the world's most sophisticated, deserves.'[40] […] 'For long periods of time we know little about consumption patterns other than those pertaining to the high elite, monarchs, and their entourage of courtiers and administrative officials. Persian-language sources, rarely concerned with the materiality of life, provide little information on consumption. Several historical cookbooks, or rather food manuals, exist but, valuable and informative as they are, these reflect the static and formulaic taste of the elite.'[41] 'It has long been recognized that Ottoman archival material holds a wealth of information on trade and other matters of interest to the economic historian. In spite of this, most studies of the commercial life of the Middle East during the sixteenth and seventeenth century have continued to rely mainly on the impressions and records of European merchants, travelers, and diplomats.'[42] Regardless of how essential and relevant Ottoman sources are for this case study, they limit our scope of interpretation and imply that patterns of consumption were identical in Safavid Iran, while we know how localised cultures can get all over the Middle East. One thing that should be noted, though, is the relatively easy access to information about coffeehouses rather than coffee drinking on its own, which in a way falls under the category of consumption history. It would indeed be dishonest not to consider coffeehouses as part of studies of materiality. Scholars, as we have seen, mostly base their analysis of such places on travellers' accounts, which is both a good and a bad thing: for one, such accounts can misrepresent or exaggerate reality to appeal to an audience.
Commerce and the trading dimension of coffee drinking Having raised the challenges posed by the limited scholarly consideration of our question, as well as the failed attempts by religious leaders to forbid coffee consumption, there is another aspect through which the study of coffee is revealing of Safavid society and its functioning: commerce and trade. Once again, the data are sparse and always have to be considered with care, but there is admittedly more to be found in the scholarly literature. An important thing to note is that coffee production on Iranian lands during Safavid rule remained a difficult project, since climatic conditions would not allow for it: 'While coffee required a specific climate, soil, and overall conditions absent in Iran, tobacco appeared suitable to flourish in various parts of the country.'[43] To sustain the dual consumerism of tobacco and coffee, then, one has to look at the claims and evidence highlighting the Safavids' major trading partnerships for coffee. They first imported it from its territory of origin: 'coffee trade between Yemen and Iran continued to be profitable for some time. It was only over the course of the eighteenth and nineteenth centuries that production from the Antilles, India, and Java displaced this trans-Arabian network, but by then coffee was no longer the most sought-after social drink in Iranian society.
It had become a ceremonial drink offered in formal settings.'[44] Even though we have established that the very first appearance of coffee in Iran predates the 17th century, it remains extremely interesting to find that trading corporations like the East India Company (EIC) and the Dutch East India Company (VOC), founded in 1600 and 1602 respectively amid a trading rivalry, acted as the mediums for the importation of the good into Safavid Iran: 'The first EIC suggestion that coffee might be shipped to Iran was made in 1619, long before the directors in London requested it for the home market'[45] but 'it was not until 1628 that the VOC first bought coffee destined for the Iranian market.'[46] If the importation of coffee was suggested this early for Iran — before it was for Europe — it can only mean one thing about the level of demand for it. The Dutch also understood the potential of coffee imports to Iran: 'Tavernier observed that the Dutch on their voyage back without coffee from Mokha loaded their ships with coffee at Hormuz. Coffee as a popular drink was profitable and in great demand from Hormuz; it was exported to all Persia and even to "great Tartary" whereas coffee exported from Basra was distributed all along the Euphrates and other Turkish provinces.'[47] The Kingdom of Ormus (11th century-1622) was annexed by the Safavids in 1622 after more than a century of Portuguese domination. Its strategic situation as a maritime hub and home to powerful ports (Bandar-e Abbas) made it the ideal gateway to a trading monopoly eastwards, and it was at the heart of the Ottoman-Portuguese conflicts in the Persian Gulf (1538-1559). These elements allow scholars to consider Safavid Iran as part of the 'early globalization' dynamic, with unevenly distributed effects.
Indeed, 'other than as an early modern exporter of silk and bullion and an importer of massive amounts of Asian spices and Indian textiles, the country in Safavid times […] is not known for its extensive commodity exchange with the outside world.'[48] And interestingly enough, the same seems to apply to the Ottoman case: 'Most subjects studied here were not part of the high-volume trade with Europe, largely because they were not mass commodities in the Ottoman empire. Manuscripts, costume albums and, in the early years, even coffee entered Europe in small quantities as gifts and for personal use.'[49] The coffee trade, thus, is arguably one of the domains through which the Ottoman and Safavid empires appear to have been somewhat of a unified region, with established trading routes like the Basra route and a relative dependency on European trading companies: 'these cross-regional and trans-imperial merchants — especially Ottoman and Iranian ones — also followed the Basra route, as well as the two overland routes that connected Baghdad to Kermanshah and Erzurum to Tabriz. These same merchants were also involved in the distribution of coffee in various parts of the Ottoman Empire.'[50] Basra, today an Iraqi city located at the southeast border with Iran, was also, and strategically, under the military protection of the Portuguese until Abbas I seized it with the help of the English, demonstrating how deeply the structure of geopolitics had evolved in the space of a few years. Conclusion Kazemi estimates that the Safavid state underwent a 'psychoactive revolution' through which coffee found its way into the different strata of Safavid society. First known and used as a medicinal beverage, and popularised through the Sufis and state-led promotion, coffee left the judges of Islamic law with little chance of prohibiting it, given the liberty offered by the institution of the coffeehouse.
Nonetheless, by drawing on a varied set of sources, this paper has attempted to retrace the ways in which coffee may have been consumed. We pointed out obstacles to the study of patterns of consumption, such as the difficulty of accessing sources and the limits of traditional historiographical approaches. The growth in coffee's popularity was gradual and dependent on imports from neighbouring countries and on privileged relationships with trading companies. Had the institution of the coffeehouse not developed as it did in the surrounding empires, there is little doubt that many social and cultural aspects of Iran would be different today. Even though coffee has been outmatched by tea in popularity since the Qajar era, the drink remains part of daily consumption. Finally, narrowing the study of coffee to one country and one period demonstrates the immediate and long-lasting effects the introduction of a single good can have on all scales of society. Charlotte Hocquet is currently doing an MLitt in Middle Eastern History at the University of St. Andrews, having graduated from SOAS University of London. Full question of essay when assigned: To what extent is 'consumption' a useful analytical category via which Safavid state and society can be understood? Draw on the example of at least one commodity to discuss. Notes: [1] https://www.ncausa.org/about-coffee/history-of-coffee [2] Catherine M. Tucker, Coffee Culture: Local Experiences, Global Connections (New York, 2011), p. 36. [3] Anna Malecka, “How Turks and Persians Drank Coffee: A Little-known Document of Social History by Father J. T. Krusinski”, Turkish Historical Review, Vol. 6, No. 2 (2015), p. 186. [4] Rudolph Matthee, The Pursuit of Pleasure: Drugs and Stimulants in Iranian History, 1500-1900 (Princeton, 2005), p. 
146. [5] ‘It is in the treatises on medicine and pharmacopoeia compiled in the sixteenth century that one finds the first mentions of coffee, its preparation and its consumption in Iran. Coffee is first described as a substance belonging to a medicinal and pharmaco-therapeutic ensemble comprising tea and tobacco.’ https://books.openedition.org/iremam/2641?lang=en [6] Tucker, Coffee Culture, p. 36. [7] Cemal Kafadar, ‘How Dark is the History of the Night’, pp. 243-244 in Arzu Ortürkmen and Evelyn Birge Vitz, Medieval and Early Modern Performance in the Eastern Mediterranean (2014). [8] Ranin Kazemi, ‘Doctoring the Body and Exciting the Soul: Drugs and Consumer Culture’, Iranian Studies, Vol. 49, No. 4 (2019), p. 606. [9] Ibid. [10] Uzi Baram, ‘Clay Tobacco Pipes and Coffee Cup Sherds in the Archaeology of the Middle East: Artifacts of Social Tensions from the Ottoman Past’, International Journal of Historical Archaeology, Vol. 3, No. 3 (September 1999), p. 141. [11] B. Kik, “Šadarāt fī asl al-qahqa’, al-Mašreq, Vol. 6, No. 14 (1903), p. 689, found in https://www.iranicaonline.org/articles/coffeehouse-qahva-kana [12] Baram, ‘Clay Tobacco Pipes and Coffee Cup Sherds’, p. 142. [13] Ralph S. Hattox, Coffee and Coffeehouses: The Origins of a Social Beverage in the Medieval Near East (Seattle, 1998), p. 76. [14] Ibid. [15] Matthee, Pursuit of Pleasure, p. 146. [16] https://www.bourseandbazaar.com/articles/2016/9/19/irans-cafe-culture-and-consumer-culture [17] A. Bevilacqua and H. Pfeifer, ‘Turquerie: Culture in Motion, 1650-1750’, Past & Present, Vol. 221, No. 1 (2013), p. 94. [18] Kafadar, ‘How Dark is the History of the Night’, p. 244. [19] Tunakabuni, Tuhfah-i Hakim Mu’min, p. 697, cited in Matthee, Pursuit of Pleasure, p. 160. [20] Ibid., p. 160. [21] Nasiri, Dastur-i shahriyan, p. 117, cited in Matthee, Pursuit of Pleasure, p. 161. [22] Ranin Kazemi, ‘Tobacco, Eurasian Trade, and the Early Modern Iranian Economy’, Iranian Studies, Vol. 49, No. 4 (2019), p. 
617. [23] Kaempfer, Am Hofe des persischen Grosskönigs, p. 152, cited in Matthee, Pursuit of Pleasure, p. 161. [24] Ibid., p. 161. [25] Du Mans, ‘Estat de 1660’, cited in Francis Richard, Raphaël du Mans, missionnaire en Perse au XVIIe siècle, Vol. 2, pp. 75-76. [26] Ibid., p. 164. [27] Kaempfer, ‘Reisetagebücher’, p. 41, cited in Ibid., p. 164. [28] Ibid. [29] Hattox, Coffee, p. 83. [30] Matthee, Pursuit of Pleasure, p. 144. [31] Hattox, Coffee, p. 46. [32] Hatim Mahamid and Chaim Nissim, ‘Sufis and Coffee Consumption’, Journal of Sufi Studies, Vol. 7, No. 1-2 (2018), p. 140. [33] Ibid. [34] Ibid., p. 159. [35] Ibid., p. 141. [36] Hattox, Coffee, p. 46. [37] Kazemi, ‘Tobacco’, p. 614. [38] Matthee, ‘From Coffee to Tea: Shifting Patterns of Consumption in Qajar Iran’, Journal of World History, Vol. 7, No. 2 (1996), p. 205. [39] Matthee, Pursuit of Pleasure, p. 145. [40] Matthee, ‘Patterns of Food Consumption in Early Modern Iran’ (Oxford, 2022), p. 2. [41] Ibid. [42] András Riedlmayer, ‘Ottoman-Safavid Relations and the Anatolian Trade Routes: 1603-1618’, Turkish Studies Association Bulletin, Vol. 5, No. 1 (March 1981), p. 7. [43] Kazemi, ‘Tobacco’, p. 618. [44] Ibid., p. 617. [45] Matthee, Pursuit of Pleasure, p. 148. [46] Ibid., p. 150. [47] Iftikar Khan, ‘Coffee Trade of the Red Sea in 17th and 18th Century’, Proceedings of the Indian History Congress, Vol. 57 (1996), p. 307. [48] Matthee, ‘Patterns of Food Consumption’, p. 3. [49] Bevilacqua and Pfeifer, ‘Turquerie’, p. 78. [50] Kazemi, ‘Tobacco’, p. 617.
- The Association of Sin with Filth and Stench in Later Medieval Christianity
Few things preoccupied the medieval Christian church more than the notion of sin: because of mankind’s propensity to corruption the church as an institution existed to guide those who fell under its influence toward morally good actions, and away from demonic temptations. It is also true that few things played as central a role in the experience of ordinary people in later medieval Europe as the stench and filth which predominated. The poor sanitation systems of medieval towns are well-documented;[1] and the limited availability of latrines forced the majority to simply relieve themselves in the street, meaning that foul odours were ubiquitous in medieval towns.[2] It may be unsurprising, therefore, that these phenomena so central to later medieval life should converge: both provoke reactions of disgust, and crucially, both were understood to be hazardous to physical as well as spiritual wellbeing. The link between them was established in theology, communicated through sermons and popular culture to the masses, and it helped to shape later medieval Christian societies in law and in culture. It is therefore the aim of this essay to evaluate stench, filth, and sin in terms of their roots in theological and scientific discourse, their communication to the laity, their role in writing legal and cultural norms, and their influence in shaping and structuring the Christian community. On this analysis, it is possible to conclude that the later medieval church used the association of stench and filth with sin to make Christian doctrine more accessible and tangible to the laity, to better regulate Christian behavioural norms, and to make specific boundaries both around and within the Christian community – though not always entirely successfully. What made this pursuit so effective was that the sensory perception of stench and filth was so visceral and universal, creating a reference-point by which (almost) everyone could better conceive of sin. 
However, before this analysis can be conducted, it is necessary to explain the theological foundations of stench, filth and sin. With roots in early medieval and even late antique Christianity, the idea that stench was associated with sin was buttressed by a robust logic. Since things that tended to smell bad – rotting food, excrement, or human and animal corpses – were all examples of natural decay, it followed that, in the spiritual realm, they represented a moral decay of some kind.[3] If stench represented sin, then the converse was also true: piety was represented by fragrance, with the Garden of Eden’s odoriferous flowers underpinning the notion that moral virtue carried with it good smells. This notion pervaded later medieval Christian writings. William of Malmesbury told of how the Anglo-Saxon princess Mildburh was discovered beneath the floorboards of Wenlock Abbey, the pleasant aromas of balsam arising from her body unmistakably identifying her as a woman of good faith;[4] and conversely, Adam of Eynsham’s Vision of the Monk of Eynsham characterised Hell by the sulfuric odours and the stench of lustful sweat which indicated the sinfulness of its inhabitants.[5] But there were further layers to this relationship. The Christian cosmos was characterised by the contrast between that which was ‘higher’ (or Heavenly) and that which was ‘lower’ (or Hellish); and the human body was seen to symbolise this cosmological ordering. Thus, the upper body (capable of exercising reason and observation) was associated with piety, whereas the lower body (capable of lustful actions and of producing foul-smelling waste) was considered a bastion of sin. 
Indeed, excrement, being a necessary aspect of human existence, served as a reminder of humans’ corporeality: fleshly beings produced filth precisely because their flesh was corrupt and impure – a direct result of man’s original sin.[6] The upper body, by contrast, was emphasised for the crucial role it played in battling sin. If stench indicated a demonic presence, then the nose could act as a means to spot the potentially morally hazardous. This was suggested in the Biblical metaphor of the “nose… like the tower of Lebanon” (Song of Songs, 7:4), where the sense of smell performs the function of a watchtower guarding against enemies; and similarly, Bridget of Sweden wrote of a corrupted priest, whose cut-off nose implied an inability to distinguish between what was pious and what was sinful.[7] On a theoretical level, therefore, stench and filth were seen as physical manifestations of humans’ propensity to sin; the implications of this relationship meant that the sense of smell was crucial to good Christian conduct. How the church communicated these ideas to the wider public could be interpreted as a way of making abstract theology more tangible and accessible. In so doing, it became easier to encourage the laity to resist the latent corruption around them. Sermons being the primary point of contact between the laity and academic theology, that so many of them appealed to the sense of smell should imply that many lay Christians were encouraged to think of sin in olfactory terms. 
For example, the fourteenth-century author of French preaching texts Pierre Bersuire integrated medical knowledge about the production of stench from heat into his sermons, linking the physical sweat of sexual activity with the spiritual stench of lust.[8] Jacobus de Voragine, too, explained the spiritual significance of leprosy through the smell of the afflicted: “Leprosy, inasmuch as it is a fetid illness, signifies the sin of lust that stinks before God and men”; in this case, the social stigma surrounding lepers is both a symptom of and a punishment for their sins.[9] By tying the abstract (sin) to a familiar physical reality (leprosy), preachers like Jacobus and many others could put theological doctrine into more individually relatable terms. Against this, the charge may be raised that no matter how engaging the oral delivery, many sermons were conceptually beyond the grasp of the laity, making little impact on ordinary listeners. And while there was undoubtedly a gap in levels of engagement between the educated and non-educated, this argument ignores the long-term popularity of many authors whose preaching texts contained complex scholastic theology and medical knowledge, which Katelynn Robinson has argued indicates a solid foundational understanding among the laity.[10] Additionally, this charge fails to recognise the totality of the laity’s experience of religious ideas, which filtered down into cultural output. 
We can see it in Chaucer, whose ‘Parson’s Tale’ told us that foul-smelling hell was the punishment for sinners (‘The Parson’s Tale’, X.208-210);[11] and even in lesser-known works like the fifteenth-century Cent Nouvelles Nouvelles, which told of a knight’s altercation with Satan in a latrine, portraying a foul-smelling but nonetheless commonplace location as the abode of demonic spirits.[12] And even for the illiterate, there was the symbolism of sin in medieval artwork: anal imagery, such as a fourteenth-century English illustration depicting the very embodiment of evil himself inverting the natural order, with a face on his bottom, at the source of excrement from which so much filth and stench came forth (see appendix 1).[13] These examples serve to illustrate two things: firstly, that there were various ways to communicate theological ideas to the laity beyond sermons; and, secondly, that this totality of sensory experience, through sermon, literature and art, helped to bring the ubiquity of sin to the front of the public consciousness, fostering what one can imagine was a near-constant reminder of pervasive human corruption, and expanding the presence of ecclesiastical doctrine into ever more areas of life – even into the more intimate sphere of personal hygiene. But the Christian association of stench and filth with sin was not simply an abstract idea to be explained to the laity; it was also a principle along whose lines law and culture could be shaped, meaning that stench and filth actually helped to regulate aspects of Christian behaviour. 
Medieval public health regulations targeted supposedly hazardous filth: hence the 1385 national ordinance which demanded the removal of garbage from England’s streets in order to purify “corrupt air”,[14] or the 1439 law in Lynn which required butchers to dispose of waste in the River Nar at low tide so as to improve public hygiene.[15] Stench, too, had to be regulated: in thirteenth-century Rome, for instance, Pope Gregory IX legislated for the removal of stenches from the streets, concerned with both spiritual and health hazards.[16] Later laws in Pistoia legislated a minimum ditch depth for the burial of plague victims, “in order to avoid the foul stench that the bodies… give off.”[17] David Carr’s analysis of English butchery regulations concluded that anti-filth legislation was primarily motivated by the civic pride attached to a clean city and, especially in the fourteenth and fifteenth centuries, by anxieties over the plague.[18] But Carr’s conclusion can be taken a step further: since filth and stench were closely associated with sin, there was an inherent moral dimension to these laws too. Scholarly writings on plague legislation would substantiate this, like the plague tract of Thomas Forestier. He warned of the diseases carried by privy water, which infected food and drink, before imploring the reader: “Let every man that loves God and his neighbour amend these things.”[19] According to this source, then, the need to clean medieval streets and purify the air was not just a matter of civic duty, but of Christian devotion: the association of filth and stench with sin meant that the fight against public health hazards was an act of piety, lending moral urgency to the problem. Public health legislation in medieval towns therefore appears to be heavily based on the idea that sin and stench were analogous, with compliance with these laws encouraged through appeals to religious devotion. 
It is another example of the discourse around sin extending further into the lives of Christians, here being the basis for legal practice in major issues of public health. If stench, filth and sin were key in constructing legal norms in later medieval Christian communities, then it would not be unreasonable to assume that social norms, too, were influenced in some way. Indeed, some sources give an insight into how the spiritual dimensions of smell were used to both define and construct what may be termed the wider Christian community. Records of a complaint made in Barcelona in 1330 are indicative: an overflowing sewer in the Jewish quarter intruded into a Christian neighbourhood, provoking a scribe who complained that the stench and filth offended the Virgin of the Pine, patron of the parish church.[20] The contrast he makes is clear: on one hand, purity, fragrance, and Christian piety; and on the other, stench, filth and Jewish sinfulness. Nor was this the only example of Jews being characterised as smelly or filthy. The folk tale of a Jew who drowned in a latrine was widely retold in ordinary as well as elite contexts, suggesting this trope had real purchase across the social spectrum. According to the story, the struggling Jew appealed to Christian bystanders for help on the grounds that it was the Sabbath; but since the Christian Sabbath falls on a different day, no help was given.[21] Thus, because of his unbelief, the Jew is condemned to wallow and die in stench and filth – and sin. In both cultural and social contexts, therefore, the association of stench and filth with sin emphasised the difference between in-group and out-group, strictly defining the borders between Christian and other communities, and exacerbating social conflict where these groups came into contact. That the stench of waste was and is capable of provoking such visceral reactions of disgust would presumably serve only to increase anti-Semitic resentment. 
The social effects of stench and sin did not just concern outsiders, however; they were also used to justify hierarchies of power within the Christian community itself. The diminished status of women (both social and economic) in medieval Christian Europe is a commonplace, explained by the original sin of Eve in the Book of Genesis, which suggested women were generally more prone to sinning. Entertaining tales like Du Con qui fu fez a la besche gave new meaning to this structural inequality, asserting that, during the creation of woman, the devil had farted on Eve’s tongue, and that all women henceforth were regurgitating demonic flatulence.[22] Women’s idle gossip was thus portrayed as sinful through the illuminating imagery of stench and filth. Regardless of the humorous nature of such tales, the systematic dismissal of medieval women would likely have been reinforced by appealing to the senses in this way, invoking feelings of revulsion, just as was done with Jews and non-believers. Filth and sin could therefore be employed to justify the most unforgiving inequities of medieval Christianity, on the grounds of proclivity to sin. However, it is important to temper this account with a degree of nuance. It has been argued that the later medieval church associated sin with filth and stench in order to create a universal analogue for immoral conduct to which all could relate, facilitating the extension of theological doctrine ever further into the legal, social, and cultural dimensions of Christian communities. But it must also be acknowledged that this was not necessarily universally received; the transmission of these ideas was not one-way traffic from the elite to a laity who unquestioningly accepted the role ascribed to filth. 
Whatever the desires of the church, the fact is that many people associated stench not with sin but with commerce and prosperity: leather was stained with pigeon-droppings in English workshops;[23] the putrid odours of butchers’ slaughterhouses polluted the streets of medieval towns;[24] and rural farms across Europe depended on dung to fertilise their fields – so much so that an eleventh-century text on estate workers’ rights asserted the farmers’ entitlement to dung.[25] With so many crafts and by extension livelihoods dependent on the positive applications of filth, it is difficult to maintain that filth and stench were incontestably tied to the idea of sin, at least not in the minds of the many who profited from them. It also must be noted that in the popular culture of the illiterate laity, filth and stench had a comical as well as a spiritual dimension. We know, for example, that later medieval theatre often featured the devil, whose entrances and exits would be announced by loud farting, designed for humorous effect.[26] Though these examples do clearly conform to the notion that foul odours were symptoms of demonic presence, it is doubtful that the comical onstage flatulence would inspire a great deal of serious reflection on mankind’s propensity to sin. Compounding this, there is the overwhelming evidence of casual lay attitudes toward the filth populating their streets. Narrative jokes such as that of the farmer who fainted at the smell of a perfumery and could only be revived by the stench of dung suggest filth was not universally considered a bastion of sin; and place names like Shiteburn Lane in Winchester and Shitewell farm in Warwickshire could be seen as part of a wider diminution of the seriousness with which Christianity regarded filth.[27] Do these examples necessarily undermine the notion that stench and filth were associated with sin, or can the differing lay and elite attitudes be reconciled? 
It could conceivably be argued that the two are compatible, for just as the church appealed to the disgust which filth and stench provoke in order to expand the discourse around sin, it is plausible that this also augmented the significance of filth and stench in people’s everyday lives – hence the street names and comical stories. That people derived a living from foul-smelling trades is not a contradiction of Christian doctrine, but rather an acknowledgement of sin’s inescapability. Stench and filth were not a taboo in medieval society precisely because they were associated with sin, not in spite of it; and it is perhaps for this reason that the church was so successful at expanding the public discourse around sin through analogy with something so ubiquitous, so viscerally disgusting, and so utterly unavoidable as stench and filth. In conclusion, it was obviously within the church’s official interest to discourage sin wherever it prevailed; and it is to this end that the association of stench and filth with sin can best be understood. With conceptual roots in Biblical passages and theological ideas, the association between stench, filth and sin was made by literate elites within the church, communicated to the laity via sermons and writings which linked the theological with the physical so as to make church doctrine relatable. This association had implications in the legal, social, and cultural spheres of later medieval life, constructing hierarchies, boundaries, and divisions which were maintained by tapping into the sensory experience of Christians – primarily, their sense of smell. Through analysis of the sources in which reference is made to the spiritual dimensions of stench and filth, it can be plausibly argued that later medieval Christianity made people more aware of and in touch with the pervasiveness of sin, as well as more conscious of filthy and foul-smelling phenomena in their own lives. 
This is not to say that the later medieval church made people view stench and filth in a certain way, nor that the church had supreme control over how people understood sin in their lives; rather it is simply to state that powerful figures within the church attempted to do so, and that these ideas were at least registered by ordinary Christians, if not universally accepted. Mark Connolly is in his fourth year of an MA in History at the University of St. Andrews. Notes: [1] See Kathryn Reyerson, ‘Urban Sensations: The Medieval City Imagined’, in Richard G. Newhauser (ed.), A Cultural History of the Senses in the Middle Ages (London, 2019), pp. 45-65, or Dolly Jørgensen, ‘Cooperative Sanitation: Managing Streets and Gutters in Late Medieval England and Scandinavia’ in Technology and Culture 49:3 (July 2008), pp. 547-567 [2] Martha Bayless, Sin and Filth in Medieval Culture: The Devil in the Latrine (New York, 2012), p. 29 [3] Susan Ashbrook Harvey, Scenting Salvation: Ancient Christianity and the Olfactory Imagination (Berkeley, 2006), p. 202 [4] William of Malmesbury, Gesta Regum Anglorum, ed. and trans. R. A. B. Mynors, R. M. Thomson and M. Winterbottom (Oxford, 1998), pp. 398-401 [5] Adam of Eynsham, Vision of the Monk of Eynsham, cited in Katelynn Robinson, The Sense of Smell in the Middle Ages: a Source of Certainty (London, 2019), p.163 [6] Bayless, The Devil in the Latrine, p. 7 [7] Bridget of Sweden, The Revelations, cited in Robinson, Smell in the Middle Ages, p. 204 [8] Pierre Bersuire, Reductorium morale, cited in Robinson, Smell in the Middle Ages, p. 196 [9] Jacobus de Voragine, Sermones quadragesimales cited in Robinson, Smell in the Middle Ages, p. 188 [10] Robinson, Smell in the Middle Ages, p. 201 [11] Geoffrey Chaucer, The Canterbury Tales: Seventeen Tales and the General Prologue, ed. V. A. Kolve, and Glending Olson (New York, 2018), p. 333 [12] Anon., Les cent nouvelles nouvelles, cited in Bayless, The Devil in the Latrine, p. 
7 [13] Bayless, The Devil in the Latrine, p. 77 [14] Reyerson, ‘Urban Sensations’, pp. 60-61 [15] David R. Carr, ‘Controlling the Butchers in Late Medieval Towns’, The Historian 70:3 (2008), p. 458 [16] Robinson, Smell in the Middle Ages, p. 117 [17] Anon., ‘Ordinances of Sanitation, Pistoia (Italy), 2 May 1348’, cited in Jessica Goldberg, ‘Pistoia, Ordinances for Sanitation in a Time of Mortality. May 1348’, available at: http://middleagesforeducators.com/wp-content/uploads/2020/05/1348-Ordinances-of-Pistoia.pdf [18] Carr, ‘Controlling the Butchers’, pp. 460-461 [19] Thomas Forestier [untitled], cited in Carole Rawcliffe, Urban Bodies: Communal Health in Late Medieval English Towns and Cities (Woodbridge, 2013), p. 188 [20] Bayless, The Devil in the Latrine, p. 21 [21] Ibid., p. 158 [22] Ibid., p. 82 [23] Ibid., p. 40 [24] Reyerson, ‘Urban Sensations’, p. 46 [25] Bayless, The Devil in the Latrine, p. 42 [26] Valerie Allen, On Farting: Language and Laughter in the Middle Ages (New York, 2007), p. 92 [27] Bayless, The Devil in the Latrine, p. 31, pp. 59-60
- Early Modern Catholicism: Not as European as Once Imagined
The Catholic Reformation, or Renewal, began with the Council of Trent (1545-1563) and endeavoured to adapt the Catholic Church to a rapidly changing world, one marked by the establishment of colonial empires, economic expansion and the challenge posed by the Protestant Reformation.[1] One of the most significant achievements of the Catholic Renewal was the spread of Christianity beyond European borders, into Asia and the Americas, which had started before 1545 with the foundation of new religious orders and continued throughout the sixteenth and seventeenth centuries. By the end of the seventeenth century Catholicism was a truly global religion, but not in the form Rome wanted, and certainly not as successful as the Catholic Reformation had aimed to be. This was for various reasons, such as the evangelical method of accommodation, particularly pursued by the Jesuits, which led to syncretism; and the urgency of baptising new converts coupled with a lack of education in Christianity. There is no denying that missionaries were successful in spreading Christianity to numerous communities throughout the world, from Mexico to China. But the factors mentioned above suggest that while traditional European Catholicism was not established globally, the religion was moulded by the customs of the host communities it reached, and it is these forms that remained significant globally. Accommodation, or adaptation to the customs of host societies, was a vital strategy employed by religious orders such as the Jesuits in their missions. By engaging with foreign communities and adopting some of their customs, such as the way they dressed or the languages they spoke, missionaries reduced the sense of ‘otherness’ and thus created an environment in which Christianity was not seen as a foreign and unexplainable concept but instead as a viable and beneficial option for the indigenous populations with which they were interacting, enabling Catholicism to become a local reality. 
This is clearly seen in the Jesuit mission founded in Madurai, India, in 1606, which was led by Roberto de Nobili, a pioneer of the accommodationist method. At that time, the non-Christian population perceived Christianity as an inept and low-caste religious practice, indicating the lack of acceptance of and desire for conversion.[2] This can be credited to Portuguese colonial expansion, as the introduction of a new and foreign authority displeased native communities. De Nobili saw the need to disassociate himself from the Portuguese and to customise Catholicism to fit in with conventional Tamil ideas about personal holiness and Brahmanical normative precepts.[3] For example, he emphasised the similarities of precepts such as abstaining from stealing, lying and sexual misconduct with the Ten Commandments.[4] This was a method he used often, and it was not limited to scripture. De Nobili also dressed in Brahmin robes and wore the three-stringed thread that Tamil men often sported, interpreting it as a representation of the Holy Trinity.[5] This presentation of Christianity made it less unusual and more accessible, allowing it to gain support and therefore increase conversions, because people did not have to give up their cultural roots or customs. The implementation of this method in numerous locations, such as China, succeeded in establishing Catholicism within different communities worldwide, making it a truly global religion. It is important to note that language was another accommodation technique that enabled Catholicism to take hold in populations. Mastering the vernacular, including Sanskrit and Tamil, enabled the Jesuits to gain a better understanding of indigenous beliefs, and thereby to link them more efficiently to their own and propagate Christianity. The accommodationist principle was a means to weave Christian principles into local beliefs, allowing Catholicism, albeit in a slightly altered form, to become a global religion. 
Another location where accommodation methods proved successful, underlining the importance of these strategies in enabling global Catholicism, is China. The evangelical work of Matteo Ricci from 1583 onwards interlinked Catholicism with the native beliefs of the society, in this case the teachings of ancient Chinese sages.[6] The Jesuits determinedly preached that Catholicism was the one true religion and that its morals aligned with traditional Chinese culture. By presenting the religion in this way, alongside Ricci’s impressive knowledge of written classical Chinese and his printed works, the missionaries attracted curiosity and gained respect for their social performance.[7] As a result, the number of converts increased from 2,500 in 1610 to 40,000 in 1636.[8] However, political instability in China, where the Ming dynasty was toppled by uprisings, hampered conversions from the 1640s to the 1660s. While it might be expected that such domestic unsteadiness would harm a minority religion, it seems to have had little effect on Catholicism in China, as the Catholic Church remained stable with 60,000 to 80,000 members.[9] This highlights how Christianity continued growing despite domestic crises, accentuating the extent to which Catholicism was successfully embedded in China and reinforcing its position as a global religion. A widely debated aspect of evangelism in the early modern period is the notion of syncretism, or the blending of religious beliefs. Given the Jesuits’ use of strategies such as accommodation, it is no surprise that Catholicism in Asia or the Americas did not replicate Catholicism in Europe, specifically that of the Holy Roman Empire. 
A key challenge to the idea of global Catholicism is that communities which adopted a mix of their own cultural beliefs with Christian ones were not ‘true converts’ but simply “Christians in name alone”, as the MEP missionary Deydier referred to them in 1666.[10] This is reflected in the continuous conflicts between different religious orders in the sixteenth and seventeenth centuries. Despite attempting to put on a unified front, missionaries struggled with whom they owed allegiance to and, therefore, which authority’s methods they had to follow. As well as being members of a specific religious order (Jesuits, Franciscans, Augustinians), they also had to keep their nation’s interests in mind, predominantly those of Portugal and Spain, which were experiencing major imperial expansion. This meant that the Jesuits were often targeted by other religious orders for pushing the boundaries of religious belief too far, the implication being that the conversions they achieved were not valid due to the immense modification of Christian rituals to suit foreign customs. A key illustration of this is conversions in Japan. Between 1579 and 1582, missionaries under Valignano opened colleges in Japan to start training converts and used accommodationist principles, resulting in over 100,000 converts.[11] This vast number has been credited to mass baptisms, and the Jesuits have been criticised for being quick to baptise but ineffective at indoctrination. 
After the 1614 edict banning Christians, an underground Christian sect called the Kakure Kirishitan (Secret Christians) was formed.[12] In 1865, the French priest Petitjean assessed some of this sect’s key ideologies and believed that their religious ideas were Shinto rites wrapped up in distorted Catholic teachings, concluding that they weren’t really Catholics at all.[13] De Nobili had presented a strong argument against such views, maintaining that if certain pagan customs were allowed in Christian practices in earlier centuries, there was nothing wrong with foreign customs being allowed now. This is a compelling argument because it indicates how religion is continuously changing, which is particularly significant in this period given the Catholic Reformation’s aims to update the Church to a changing world. This non-Eurocentric approach suggests that Catholicism doesn’t need to follow the exact same outline as it does in Europe and that it’s just as valid when adopted in another society, illustrating that it was indeed a global religion by the seventeenth century. Mexico is another informative case study that displays both the conflict between religious orders and the spread of Catholic beliefs in the New World. The first missionaries in Mexico were Franciscans, whose evangelisation methods were based on the religious history of Spain. Specifically, the mass baptisms performed on the Muslims of Granada during the Reconquest were repeated in Mexico. Fray Motolinia claimed that in 15 years, they had managed to baptise 9 million converts.[14] The reason behind these urgent baptisms was the belief that the natives were essential to the Second Coming of Christ. 
Due to this millenarian view, the hastiness of the baptisms meant minimal pre-baptismal instruction, a method criticised by the Augustinians, who took a more conservative approach to missionary work by focusing on spreading traditional Catholic practices.[15] Study of indigenous-language sources suggests that the Franciscan methods proved ineffective at establishing traditional Catholicism and that, instead, the Catholic Church was moulded by the local culture of Mexican natives.[16] The combination of pre-Columbian religious practices with core Christian beliefs about salvation suggests that Catholicism did achieve the position of a global religion. It is important to note that syncretism, or adapted forms of the religion, is not necessarily an indicator of failure. As Clossey argues, while global Catholicism may have been diverse in nature due to its engagement with foreign communities, it was united in its core beliefs, which involved a Christian God and his morals. This Global Salvific Catholicism may not have taken the orthodox form practised in Europe,[17] but the complex interactions between natives, missionaries and their evangelical strategies led to Catholicism being embraced in multiple locations worldwide. There are also places where missionaries failed to achieve their evangelist aims. These provide insight into ineffective methods and the differences between localities, such as courts and countryside, and thus are important to discuss. A key example of this is Acquaviva’s evangelism at the Mughal Emperor Akbar’s court in the 1580s compared with the rural Salsette peninsula. Acquaviva’s strategy at court was to establish a personal friendship with Akbar and increase his attachment to the Jesuits, eventually leading to his conversion. 
What Acquaviva failed to notice was that Akbar’s intention behind his friendly attitude towards the Jesuits was to advance his political ideology of sulh-i-kul, or universal religious tolerance, which explains why Jesuits were sought after at his court.[18] A crucial lack of judgement meant that the Jesuits mistook this for a sign of early success, highlighted by Acquaviva’s letters sent back to the Society of Jesus. In these he mentioned his good relations with the King and stressed Akbar’s desire for churches to be built in his Kingdom. Although this suggested that Catholicism was gaining ground, Akbar had actually stated that he wanted many religious institutes to be built, owing to his tolerant attitude towards all religious cults.[19] The Jesuit failure was even more apparent beyond the Mughal court, particularly in the rural region of Salsette. De Souza suggests that the mob attack of 1583, which killed Acquaviva and other Jesuits, occurred because locals were angry at the threat that Christianisation posed to their economic circumstances.[20] Alongside this, Acquaviva had focused on learning Persian to translate doctrines for the King, and while this was useful at court, it neglected the majority of the population, who did not speak Persian.[21] On their visit to Salsette, the Jesuits’ lack of knowledge about the local situation and the vernacular meant that they failed to establish Catholicism in the area. However, it is crucial not to take these instances as an outright failure of Catholicism as a global religion. While it was not welcomed and adopted in all communities, the overall increase in converts and Catholic communities in Asia and the Americas highlights how missionary work was often successful. Ultimately, Catholicism was a truly global religion by the end of the seventeenth century, but not in the form that Rome had intended. 
In order to increase the appeal of Christianity and gain converts, missionaries had to resort to methods that made the religion a more viable and desirable option. This meant presenting Catholicism as a religion that could work in a non-European environment. Whether that occurred through taking quite extreme measures, as De Nobili and Ricci did in Asian communities, or simply through learning the vernacular, it’s clear that the success of Catholicism in numerous locations was due to the interactions between missionaries and the host society. The various case studies discussed in this essay suggest that the religious orders achieved a remarkable feat by cementing Catholicism in very different cultures with centuries of rich history and tradition. While this may have produced belief systems that wove indigenous customs in with Christianity, it’s a form of Catholicism nonetheless. Essentially, local Catholic communities should be seen as specific expressions of Catholicism rather than distorted versions that don’t count as part of the global Catholic Church. Through colonisation as a trigger for evangelism, determined missionaries, diverse evangelist methods and the dialogue between religious orders and different localities, Catholicism achieved the position of a global religion by the end of the seventeenth century. Tanya Singh has just completed her second year of a BA in History at the University of Cambridge (Murray Edwards College). Full title when assigned: ‘By the end of the seventeenth century, Catholicism was a truly global religion.’ Discuss. Notes: [1] Robert Bireley, ‘Redefining Catholicism: Trent and Beyond’, The Cambridge History of Christianity, Vol. 6 (2007), pp. 145-50. [2] I. G. Zupanov, ‘Compromise: India’, in Ronnie P. Hsia (ed.), A Companion to the Reformation World (Oxford: Blackwell, 2004), p. 363. [3] Ibid. [4] Ibid. [5] Ibid., p. 369. [6] R. P. Hsia, ‘Promise: China’, in Ronnie P. 
Hsia (ed.), A Companion to the Reformation World (Oxford: Blackwell, 2004), p. 376. [7] Ibid. [8] Ibid. [9] Ibid., p. 377. [10] T. Alberts, ‘Catholic Missions to Asia’, in Alex Bamji et al. (eds.), The Ashgate Research Companion to the Counter-Reformation (Farnham: Ashgate, 2013), p. 136. [11] H. V. Thanh, ‘Funding the mission’, in Bernard Heyberger et al. (eds.), Catholic Missionaries in Early Modern Asia (Routledge, 2020), p. 113. [12] Ibid., p. 114. [13] Alberts, ‘Catholic Missions to Asia’, p. 127. [14] M. Christensen, ‘Missionizing Mexico: Ecclesiastics, Natives, and the Spread of Christianity’, in Ronnie P. Hsia (ed.), A Companion to the Early Modern Catholic Global Missions (Brill, 2018), p. 24. [15] Ibid. [16] Ibid., p. 20. [17] Alberts, ‘Catholic Missions to Asia’, p. 136. [18] I. G. Zupanov, ‘Between Mogor and Salsette’, in Bernard Heyberger et al. (eds.), Catholic Missionaries in Early Modern Asia (Routledge, 2020), p. 56. [19] Ibid., p. 52. [20] Ibid., p. 60. [21] Ibid., p. 57.
- A Unifying Force for Christianity and the Roman Empire: The Council of Nicaea
The Council of Nicaea was the first ecumenical council of Christendom, held from May to August of AD 325. Emperor Constantine the Great convened the council with the primary aim of forging unity amongst Christians on theological and other disputes. Problems had been developing throughout the Empire as theological matters concerning the Godhead became more divisive. The Nicene Creed produced at the Council was the first uniform doctrine of Christianity, written with the aim of unification; however, the subsequent years of division suggest it did not immediately achieve this. The Creed’s content and language exacerbated the divisions between different camps, leading to further ecclesiastical arguments. The Arian controversy, centred primarily in Alexandria, fuelled these divisions, which turned on the nature of the Godhead. The Niceno-Constantinopolitan Creed of 381, instigated by Theodosius I, was a turning point in unifying Christianity and the Empire because that Creed generally became accepted as the foundation of orthodox Christianity. Consensus-building movements had contributed to a growing pro-Nicene theology throughout the Empire, but the imperial sponsorship the theology received from Theodosius was the legislative push required for it to become a unifying force. Historians often treat the Council of Nicaea as a prelude to the ecclesiastical conflicts that followed, but recently there has been an increasing number of studies on the Council itself and its implications for Christianity. The Council of Nicaea ultimately had a strong unifying effect on orthodox Christianity and the Empire in establishing a uniform doctrine, but its short-term effects were much more divisive and discordant. Constantine the Great convened and financed the Council of Nicaea in AD 325 on the advice of Hosius of Corduba, who presided over the Council. After a synod earlier in the year examined the divisions in Christianity precipitated by Arianism, it was concluded that a council was needed to gain consensus on theological matters. 
The number generally adopted by our sources for bishops in attendance is 318, though varying numbers have been cited in some texts. Athanasius[1], Evagrius[2] and Hilary of Poitiers[3], among others, claimed that 318 bishops attended, and the Eastern and Coptic Orthodox churches later stated this number in their liturgies; it is thus commonly followed. The actual attendance at the Council was much larger, as it was composed of priests and deacons as well as bishops, with the majority travelling from the Eastern sees. There is a lack of official records of the Council’s proceedings, so it is difficult to reconstruct reliably, and there is a dependence on narratives from those present, who had their own agendas[4]. For example, Eusebius of Caesarea’s narrative was coloured by his affiliation with the Emperor (Life of Constantine, 3.10-12). As for the Nicene Creed, it is believed that copies were kept in the major sees of the Empire, given the citations by Basil, Cyril, and Ambrose; the creed was not necessarily intended for the laity at the time of production, however. The Nicene Creed was a declaratory creed rather than a liturgical one, and thus its intention was to state the faith in its principal tenets rather than in a manner that would be read as worship in churches. The Creed did not satisfy many attendees due to the extreme differences in theologies; it was viewed either as too latitudinarian because of its unscriptural terms or as too rigidly anti-Arian. It has been speculated that the issues with the Creed could largely have been avoided by changing the grammar rather than the theological content, given the widely accepted guidelines for speaking about divinity, including the use of scriptural language.[5] However, given the state of Christendom prior to the Council, the ensuing events were likely inevitable because of the core ecclesiastical divisions that became increasingly apparent. 
The years preceding the Council are recounted by Eusebius of Caesarea in ‘Life of Constantine’, but the theological epoch is pieced together through other ecclesiastical historians such as Rufinus, Socrates and Theodoret. Constantine was aware of the issues throughout his empire preceding the Council of Nicaea, which had been festering since the beginning of the Arian controversy. Arius, an Alexandrian presbyter, had strong beliefs about the Godhead which opposed what became orthodox Christology; he did not believe in coequal Trinitarianism, which stated that God the Father, God the Son and God the Holy Spirit were distinct persons sharing a single essence. Instead, Arius’ teaching held that the Son was ‘created’ (Arius, Letter to Alexander of Alexandria 2-5), where the Nicene Creed states that the Son was ‘begotten, not made’ (The Creed of Nicaea). The Creed was mostly dedicated to this issue of the Godhead’s nature, which demonstrates how significant the theology was. Arius’ doctrine drew in some respects from the writings of Origen, active in the 2nd and 3rd centuries AD, particularly the subordination of the Son to the Father, but their theologies were not identical.[6] The empire was also in a particularly volatile period because of social developments in the Church; Alexandria was slowly shifting to a monarchical episcopacy, and therefore the Council occurred at a liminal time when hierarchies and lines of authority were still being established. The Meletian schism in Egypt in the early 300s also acted as a prelude to the events at the Council of Nicaea and demonstrates how divided Christendom was in the early 4th century. Socrates emphasised how Constantine’s primary aim through the Council was to foster peace above all else (HE 1.10), and it becomes clear why when looking at the extent of the divisions in his empire. 
While some historians portray the Emperor as naïve when assessing his motives, Elliott takes pains to emphasise that Constantine did not expect the whole of Christendom to accept the Creed without question, even if he hoped it would. His concern lay particularly with the Alexandrians, as Alexandria was the birthplace of the Arian controversy.[7] Furthermore, the sheer size of Christendom by AD 325 meant that any attempt to mandate an empire-wide orthodox theology was ambitious. Luise Frenkel has commented on the historiography of the Council of Nicaea, stating that it is centred on literary and polemical accounts from the clergy. This has created a multi-faceted narrative which, while historically significant, lacks explanations for those in the empire less involved with synods and imperial laws.[8] Constantine was correct to be cautious, as is proved by the subsequent years of Christological divisiveness exacerbated by the Nicene Creed. The Nicene Creed defined the Father and the Son as consubstantial and of the same essence, which essentially marked Arianism as heresy, with additional anathemas directed specifically at the exclusion of Arianism. Conflicting testimonies on the origins of the Creed exist, as some believe that it stems from Eusebius of Caesarea. He claimed that he recited the creed of the Church of Palestinian Caesarea, which the council then adapted with the addition of the term ‘homoousios’ (Appendix to Ath. Decr.), but most modern scholarship sees this as unlikely due to creedal discrepancies. The most controversial aspect of the Nicene Creed, and the one that created the antithesis of unity in the Empire, was this ‘homoousios’ term. Homoousion was used to describe God the Son as the same in being with God the Father, later applied to the Holy Spirit as well, and its introduction into orthodox Christology created tension. 
As the term was unscriptural, even more moderate Origenist bishops were fearful of its inclusion because it seemed pagan. The Nicene Creed and its supporters were accused by some centrists of being Sabellian, a form of modalism viewed as heresy. It became apparent in the immediate years following the Council that the Creed, which had been crafted with unity in mind, was spiralling into further discord. Most in attendance at the Council adhered to the Creed but, as this equated to endorsing the Homoousian position, some were reluctant to do so. Arius, Theonas of Marmarica and Secundus of Ptolemais were exiled and excommunicated, while the supporters of Arianism were considered “enemies of Christianity”.[9] This arguably worsened the ecclesiastical relations of the Empire and strayed from Christological unification. The Nicene Creed saw limited initial engagement, and the decisions of the Council were slowly reversed as the Arians and Meletians regained most of what they had lost. For example, Arius’ exile was revoked in the autumn of AD 327 by Constantine himself, and the Eastern empire ended his condemnation in AD 336.[10] Constantine’s stance had changed drastically, because directly after the Council, in October 325, he had banished Eusebius of Nicomedia and Theognis of Nicaea for associating with Arians and accused Eusebius of depriving his people of the faith.[11] In the years following the Council, two camps emerged and controlled most of the ecclesiastical world. The alliance of Eusebius of Caesarea and Eusebius of Nicomedia supported the teachings of Arius, despite their lack of support during the Council, and argued that the Son was different in ousia (essence) from the Father and not eternal. The other alliance was headed by Alexander and Athanasius of Alexandria, who were distinctly anti-Arian. 
Athanasius succeeded Alexander in AD 328, and his account ‘De Decretis’ made clear his stance that one either agreed with the Council or with Arius; there was no in-between. Historical opinions on the groups which emerged differ: Sara Parvis sees the Eusebians as a distinct, organised camp, whereas Mark DelCogliano views them as more of an “ad-hoc centrist alliance” that simply contested the extremes of anti-Arianism and Sabellianism.[12] The two camps had opposing theologies on the substance of the Godhead, and this dictated the divisions; the First Synod of Tyre in 335 saw how the disagreements infiltrated secular as well as religious issues in the Empire. Athanasius experienced five exiles throughout the four decades following the Council of Nicaea, owing to the theological gulf in Christendom which was perpetuated by the Council. There were also many attempts at conciliar creed-making to either replace or improve the Nicene Creed, such as the Creed of the Long Lines in 344. These creedal texts became increasingly convoluted as each theology attempted to create the superior text, which made it a challenge for Nicaea to hold authority. The role of the emperor affected the reception of the Council of Nicaea: during Constantine’s reign, while the Creed was not necessarily followed, the Council was generally respected. However, after his death in AD 337, this respect deteriorated into attempts to replace the creed with a more satisfactory orthodoxy in the 340s. When Constantius II became joint ruler with Constans and Constantine II, he opened his reign by disregarding canon 15 of the Council of Nicaea, which forbade bishops from moving sees, by allowing Eusebius to transfer from Nicomedia to Constantinople. Furthermore, after he became sole ruler in AD 353, he attempted to move the Church towards a more middling position with a series of councils. 
The Council of Rimini in 359 and that of Seleucia in 360 saw his effort to impose his own semi-Arian views on the Empire. The Dated Creed, also known as the Nike Creed or Ariminum Creed, in theory banned the Creed of Nicaea, which used the ousia language, and claimed that this language had been used illogically in the Creeds. Constantius had to enforce the acceptance of this creedal text because, despite unease about the homoousios term, the Nicene Creed still had supporters in Christendom. In the midst of the Arian reaction to the Council of Nicaea, there was a growing consensus-building movement for the Nicene Creed from the 350s. The movement grew in the 360s but was truly at its height in the 370s, a stark change from the ecclesiastical divisions in the early years following the council. Mark DelCogliano has traced the growth of this movement throughout the 4th century, which was primarily fuelled by Athanasius. He cites the Synod at Antioch in 341 as the first real attempt at building a consensus, as the approved creeds upheld orthodoxy while avoiding the more extreme theologies and the controversial Nicene language. The Second Creed was produced here, and later developed into the Fourth Creed, which was recognised as the statement of faith by the eastern episcopacy at the Council of Serdica in AD 343.[13] A tendency of these early post-Nicene creedal texts was to be as minimally anti-Arian as possible, to reduce opposition, and then to anathematise specificities. There was a renewed attempt to pursue religious unity, especially between the eastern and western Empire. The pro-Nicene movement was driven by Athanasius and Gregory of Nazianzus, whose remarks concerning a church council demonstrate the divided clergy which “not even a ruler backed by reverential fear and authority” could have resolved (Gregory of Nazianzus, Concerning His Own Life 1680-9). 
Gregory mainly garnered support in Constantinople through his rhetoric, while Athanasius focused on the inclusion of the ousia language, which he believed was a necessity for comprehending the nature of God. Athanasius maintained the authority of the Council of Nicaea by circulating the creed and defending its language in the face of much opposition. The slow progress of a pro-Nicene consensus continued throughout the mid 4th century, but the turning point arrived with the reign of Theodosius I. When he became emperor in AD 379, the movement was able to legislate its efforts with his support of the Nicene Creed as Christological orthodoxy. The long-term impact of the Council of Nicaea was far more unifying than its short-term effects. Theodosius’ rule ratified the pro-Nicene consensus; he began his reign by arriving in Constantinople in AD 380, exiling the Arian bishop Demophilus and placing Gregory of Nazianzus as the de facto archbishop while he set about initiating de jure change. The second ecumenical council, the Council of Constantinople in AD 381, was convened and financed by Theodosius. The Council’s primary act was to confirm the Nicene Creed with a few revisions, resulting in the Niceno-Constantinopolitan Creed. The Eastern Empire in particular had been divided into factions, with even intra-faction divisions, and thus an imperial push for unity was necessary. The Creed of 381 was a restatement of the Creed of 325 with a few key differences. The later creedal text added more concerning the Holy Ghost, among other clauses that had been established in older creeds. The Creed was confirmed at subsequent councils at Ephesus, Constantinople and Chalcedon in AD 431, 448 and 451, where the synods recognised that new revisions were required to address the development of new theologies and heresies (The Council of Chalcedon, Definition of the Faith). 
However, they were clear in stating that the Niceno-Constantinopolitan Creed was a revision of the original creed; therefore the unity it promoted is ultimately attributed to the Council of Nicaea.[14] With the assistance of Theodosius’ patronage, adherence to the Council of Nicaea became standardised and integrated into the basic understanding of orthodox Christianity. The focal point of ecclesiastical debates shifted to other matters, while the Niceno-Constantinopolitan Creed became a cypher to which theologies referred.[15] Theodosius’ reign played an undoubtedly significant role in ratifying the Nicene Creed; convening the Council of Constantinople, issuing imperial edicts, confirming Nicene orthodoxy as the official religion of the Roman Empire and mandating empire-wide adherence to it ensured that unity was maintained. DelCogliano suggests that one of the primary reasons that Christian orthodoxy now follows a pro-Nicene theology is that Theodosius had the time to legislate the theology he endorsed. Had Constantius II lived longer, he could have secured his semi-Arian orthodoxy and the Empire could have followed a different path.[16] As Young Richard Kim states, Christianity is written by the powerful much in the same way that history is, and thus the unifying force that succeeded was that of the Emperor who had the opportunity to secure the theology he subscribed to.[17] While the long-term effects of the Council of Nicaea were a unifying force for Christianity, variation in the Creed was still apparent even among pro-Nicene groups. The Armenian Church follows the Chalcedonian Definition of the Creed, and debates continued into the 5th century. 
However, these debates were much less divisive, as they focused more on the interpretation of the Creed than on its validity.[18] Ultimately, the Council of Constantinople was a much stronger unifying force for Christianity and the Empire than the Council of Nicaea, but their purposes were very similar. The Council of Nicaea of AD 325 had a mixed effect throughout the 4th century. The immediate result was one of divisiveness, as competing ecclesiastical parties pitted their theologies against each other and the Nicene Creed only exacerbated this with its controversial language. Despite most bishops confirming the Creed, the following decades witnessed tumultuous ecclesiastical debates as opposing camps gained momentum. Emperor Constantine’s primary aim in convening the Council was to promote unity and peace among the Church, but the resulting schisms show this was unsuccessful until the Council of Constantinople in AD 381. Arianism remained prevalent throughout the 4th century despite the efforts of the pro-Nicene consensus-building movements spearheaded by Athanasius of Alexandria, partially due to the semi-Arian stance taken by Constantius II. The support that pro-Nicene theology received from Theodosius aided its journey to orthodoxy with imperial legislation. Fundamentally, the Council of Nicaea did have a strong unifying force for Christianity and the Empire, because its creed was adopted as orthodoxy and maintains an almost unchallenged status. However, while it ended as a unifying force, the divisiveness it perpetuated throughout the Church and Empire in the 4th century must be taken into account when understanding the effects of the Council of Nicaea. Molly Davies is currently pursuing a BA in History (2nd year) at the University of Manchester. Notes: [1] Athanasius of Alexandria, Ad Afros Epistola Synodica. [2] Theodoret, 3.31. [3] Contra Constantium Augustum Liber. [4] Mark Smith, The Idea of Nicaea in the Early Church Councils, AD 431-451 (Oxford, 2018), p. 
9. [5] Lewis Ayres, Nicaea and its Legacy: An Approach to Fourth-Century Trinitarian Theology (Oxford, 2004), p. 13. [6] Ibid., p. 20. [7] T. G. Elliott, Constantine and the Arian Reaction after Nicaea (Cambridge, 1992), p. 169. [8] Luise M. Frenkel, ‘The Reception of the Council of Nicaea by Ethnic Minorities in the Eastern Roman Empire’, Annuarium Historiae Conciliorum, Vol. 49, No. 1 (2020), p. 15. [9] Philip Schaff, History of the Christian Church, Section 120 (1910). [10] Sara Parvis, ‘The Reception of Nicaea and Homoousios to 360’, in Young Richard Kim (ed.), The Cambridge Companion to the Council of Nicaea (Cambridge, 2020), p. 225. [11] Elliott, Constantine, p. 170. [12] Karl Heiner Dahm, ‘The Council of Nicaea – (Y.R) Kim (ed.) The Cambridge Companion to the Council of Nicaea’, The Classical Review, Vol. 71, No. 2 (2021), p. 523. [13] Mark DelCogliano, ‘The Emergence of the Pro-Nicene Alliance’, in Young Richard Kim (ed.), The Cambridge Companion to the Council of Nicaea (Cambridge, 2020), pp. 256-9. [14] Mark J. Edwards, ‘The First Council of Nicaea’, in Margaret M. Mitchell and Frances M. Young (eds.), The Cambridge History of Christianity (Cambridge, 2006), p. 152. [15] Mark S. Smith, The Idea of Nicaea in the Early Church Councils, AD 431-451 (Oxford, 2018), p. 3. [16] DelCogliano, ‘The Emergence’, p. 274. [17] Young Richard Kim, ‘Introduction’, in Young Richard Kim (ed.), The Cambridge Companion to the Council of Nicaea (Cambridge, 2020), p. 6. [18] Edwards, ‘First Council of Nicaea’, p. 154.