Cynicism As Anti-Philosophy

Cynicism is not the same as cynicism. Cynicism with a capital ‘C’ refers to the truth-affirming provocations of the ancient Cynics and the specific mode of being of which they are an early representation; while cynicism with a small ‘c’ is, in its ‘postmodern’ form, ideological apathy towards truth and its ramifications for politics and culture. Some prefer to write Cynicism as Kynicism to further emphasize the difference. For now, I shall stay with writing a capital ‘C’ to refer to the concept in question.

What I am about to list as central to Cynicism is the product of a creative interpretation of doxographic and other material that I consider potentially useful for critical theoretical reflections on law, politics and society. Other writers will include different propositions and will have different emphases.

With that in mind, I will briefly cover Cynic elements in respect of style, theory, politics, and self-identity, which I translate to the following headings: parrhesia and embodied truth, antiphilosophy, antinomianism, and cosmopolitical subjectivity.

Parrhesia and Embodied Truth

The most celebrated relation of the Cynics to truth is parrhesia. As Foucault succinctly put it, when a speaker engages in parrhesia, he ‘uses his freedom and chooses frankness instead of persuasion, truth instead of falsehood or silence, the risk of death instead of life and security, criticism instead of flattery, and moral duty instead of self-interest and moral apathy.’

The early reference point is Diogenes the Cynic (c. 412–323 BC). Diogenes fully embraced the appellation Cynic (kyōn = dog, whence also kynicism), introducing himself as such to Alexander the Great. When Alexander asked what he had done to deserve such a name, he replied, ‘I fawn on those who give me anything, I yelp at those who refuse, and I set my teeth in rascals.’

He saw himself as the kind of dog that all like to praise, but with whom no one dared go hunting.

He poured scorn on his contemporaries: he called the school of Euclides bilious, Plato’s lectures a waste of time, and the demagogues the mob’s lackeys.

When he saw someone being led away by temple officials for stealing a bowl, he quipped: ‘The great thieves are leading away the little thief.’

On hearing Plato’s definition of Man as a featherless biped, Diogenes presented a plucked fowl with the words ‘Here is Plato’s man.’

And when Alexander wished to honour him by granting any favour, Diogenes asked him to ‘[s]tand out of my light.’

Not surprisingly, he considered parrhesia ‘the most beautiful thing in the world’.

While parrhesia is not exclusive to Cynicism, it is worth noting that Cynic style is typified by the use of wit and humour. So again, for example, on his habit of continually masturbating in public, Diogenes quipped ‘I only wish I could be rid of hunger by rubbing my belly.’

Branham argues that the form of this humour acts as a ‘rhetorical syllogism,’ which invites the audience to discern the joke’s tacit premises and to infer a subversive truth from it, namely, that 1) natural desires are best satisfied in the easiest and cheapest way possible (euteleia); 2) one natural desire is the same as any other; 3) therefore cultural norms violate the ‘natural right’ to masturbate there and then in public.

Humour has the capacity to engage both intellect and an immediate, ticklish sensuousness. It has a material quality in drawing upon a rhetorical force beyond pure reason and also in the way it elicits an affect — a knowing smile, a cringe, a burst of laughter.

This brings us to another of the Cynic’s relations to truth, that of ‘bearing witness to the truth by and in one’s body, dress, mode of comportment, way of acting, reacting, and conducting oneself.’ This has traditionally included askesis where the worth of a frugal life — a dog’s life — is demonstrated by the strength and flourishing of the body. This is arguably different from the reactionary denial of the ascetic in popular consciousness.

In sum, Cynic truth is expressed, on the one hand, through witty, humorous, polemical and subversive rhetoric; and on the other hand through the Cynic’s authentic life, one lived in accordance with and as a didactic demonstration of truth. Cynic rhetoric reaches out to the mind and the bodily senses, while the Cynic makes of his own body a rhetorical device.

Antiphilosophy

With the Cynic emphasis on wit and performance instead of abstract theory, laughter rather than convention, free-spiritedness and risky provocations instead of the disciplinary structures of paradigmatic thought, there is a tendency to view Cynicism as a form of anti-intellectualism, despite a clear — albeit critical — interest in the intellectual pursuits of Platonic metaphysics.

Indeed, the Cynic enthusiasm for truth would, it seems, align them with the immense tradition of western philosophy, even if, as Hegel claimed, they had no traditional philosophy worthy of note. But if not philosophy, then what? I suggest we think Cynicism as an early form of antiphilosophy.

If one way to frame philosophy is in terms of its critical concern for truth and its articulation in theory, antiphilosophy, writes Badiou, deposes the category of truth, unravels the ‘pretensions of philosophy to constitute itself as theory,’ looks behind the fallacious mask of discursive appearances, and appeals against the philosophical act towards a radically new ‘supraphilosophical’ act.

Antiphilosophy is less a critique of truth than a therapeutics of truth. It is the cure for the self-satisfied belief of western philosophy in its ability to capture the meta-position of metaphysics, in being able to express universal truth without gaps, lacks, distances, contingencies, insufficiencies and/or a relation to particularities.

Badiou claims that for antiphilosophers like Nietzsche, Wittgenstein and perhaps Lacan, what is important is the ‘distance without measure’ (for example, between individual and subject, god and man, infinite and finite), which cannot be proved within a conceptual framework.

For Nietzsche, in particular, his testimony and self-evidence are expressed not only in what he says about philosophy but also — in his Dionysian abolishing of the world as truth — in what he does to it.

It is not surprising to learn that Nietzsche also writes that: ‘the higher man must prick up his ears at every Cynicism — whether coarse or refined — and congratulate himself whenever a buffoon without shame or scientific satyr speaks out in his presence.’ Eschewing grand theory and the pretensions of metaphysics, Cynic antiphilosophy combines rhetoric with humour, logic with wit, speech with performance, and truth with embodiment.

Antinomianism

Zeno of Citium, who came to philosophy as a pupil of the Cynic Crates (himself a follower of Diogenes), is credited by Kropotkin as being the ‘best exponent of anarchist philosophy in Ancient Greece.’ In Kropotkin’s words, Zeno ‘repudiated the omnipotence of the state, its intervention and regimentation, and proclaimed the sovereignty of the moral law of the individual.’

While one should be wary of claiming Cynicism as one’s own, there is no doubt that Cynicism, in its parrhesiastic embodiment of truth, tends towards subversiveness.

We can perhaps view this subversiveness more generally as analogous to Deleuze and Guattari’s nomadology, where the perpetual movement of the Cynic nomad (the universe as home, see cosmopolitical subjectivity below) collides inevitably with the state apparatus. Cynicism speaks truth to power and lives truth against convention.

Diogenes regarded law and the city as marks of the ‘civilized,’ a term he most likely intended pejoratively. His view of social norms and civilized law amounts to an early radical antinomianism that anticipates certain modern strains of critical legal theory; not only in terms of the idea of contingency, but also in terms of grounding antinomianism in something ‘other.’

However, while the ‘other’ for critical legal theorists is commonly understood in poststructuralist terms as an unknowable beyond, the Cynic other was simply nature and the authority that nature lends as a prototypical form of natural right.

This suggests there may be a constant thread in the intellectual history of subversion, one that resists always in the name of something other, some foundational or even post-foundational other, an other — whether justice, god, nature, etc. — whose complexities and paradoxes the Cynics, in their aversion to grand theory, never tied themselves up in.

Cosmopolitical Subjectivity

On being asked where he came from, Diogenes is said to have replied, ‘I am a citizen of the world [kosmopolitēs].’ Today, theories of world citizenship or cosmopolitanism are based on an idea of human unity from which moral and political commitments are drawn, typically involving the development of stronger global institutions, governance, human rights, and the rule of law.

Despite its metaphysical determination, human unity acts as a powerful trope with which to critique parochialism and state sovereignty. Yet it is precisely because of its metaphysical determination that its deployment in thinking ‘human’ subjectivity can also be problematic.

Given the Cynic tendency towards antiphilosophy and antinomianism, Diogenes’ cosmopolitanism was not a question of human unity and he certainly did not mean the institutional structure of the city made global. If anything, his cosmopolitanism can be minimally understood as a ‘commonwealth … as wide as the universe’ conceived in dialectical opposition to the bounded city.

It was therefore, again minimally, a way to subvert normal citizenship and the laws and mores of contingent social spaces in the name of an ‘other’ cosmopolitical subjectivity. This is not to say that when we infer the detail of what such a commonwealth might look like (property, wives, and sons held in common, as Diogenes and his epigones are reputed to have said), that this would be without its own problems.

Assessment and Conclusion

Throughout history, the insolence and shamelessness of Cynicism have tended to be ignored or derided by the mainstream. Yet those same qualities, stemming as they do from a profound sense of alterity and the courage to speak out, have also spawned modern admirers, from Kropotkin to Nietzsche to Foucault.

This in itself indicates that, for those interested in radical critique — whether of law, politics, society, or culture more generally — Cynicism has something to offer. But before specifying what, it would be appropriate to mention its major sticking point or aporia.

Recall that the strength of the Cynic bite is drawn from a reliance on the authority of nature. To hold this line, the Cynic must make a decision on the nature of nature (what is the normative content of nature?) without which no lesson can be drawn.

However, such decisions are always subject to the limits of the discourses within which they are articulated. Of necessity, the Cynic’s view of nature is, like anyone else’s, a partial or incomplete view. This is a problem that plagues not just Cynicism, but natural law thinking in general. What Diogenes considers ‘natural,’ others, especially of a different time and place, do not. I have already hinted at the potential for differences of opinion on the content of a ‘universal commonwealth’ predicated upon our ‘cosmic nature.’

But to give a different example: on seeing a young man behaving in a way he considered effeminate, Diogenes is said to have rebuked him: ‘Are you not ashamed … that your own intention about yourself should be worse than nature’s: for nature made you a man, but you are forcing yourself to play the woman.’ We can only speculate what a modern Diogenes would have said when made aware of sex and gender distinctions.

Having duly recognized this significant limitation, what can Cynicism offer us today? I think it is important, firstly, not to monumentalize the Cynics, that is, to dogmatically assert their credentials as the original subversives. It is also important, given the stated problematics, not to simply imitate them, but to draw upon and reinterpret for our time the rich resource of possibility that they represent.

Foucault, in particular, already started to do this in his analysis of their parrhesia. But this is just the beginning. For the critical scholar, Cynicism can provoke myriad questions: Given the limits in thinking a norm-bearing nature, what are the possibilities of thinking a natural law whose particularized content has been evacuated – a kind of denaturalized natural law?

How could this link to a concept of truth or antiphilosophy, or would it be more appropriate to think in terms of a philosophical non-philosophy (see e.g. François Laruelle)? What are the possibilities, limits, effects, and risks of using humour to go beyond critical satire and to directly intervene into political consciousness? Is there any value in pursuing Cynical askesis or some updated version of it today? How can we further think a Cynic cosmopolitanism that emphasizes dialectical opposition? And so on…

Gilbert Leung, PhD, writes on law, critical theory and philosophy. He is the Director of Counterpress.

The photo shows, “Diogenes Looking for an Honest Man,” attributed to Johann Heinrich Wilhelm Tischbein, painted in the second half of the 18th century.

Plato And Ayn Rand

The theory of moral obligation, as found in Plato’s Republic and Ayn Rand’s The Ethics of Emergencies, hinges on the idea of the self and its ethical and moral concerns within society. However, the approaches and conclusions are far from similar.

When we turn to Ayn Rand, we find a great deal of stress on the individual; in fact overly so. For her, a person’s life is the standard of moral value, and consequently, in a nutshell, happiness is each person’s moral obligation. Thus, Rand posits a cognitive/moral approach.

This means that in her philosophy, a strict moral accountability is consistently at the forefront. In effect, her philosophy is centered around man, rather than on a grander cosmology. This means that primacy is given to existence itself and the necessity for survival. However, this extreme objectivism that hinges entirely upon happiness as a moral force is ultimately self-negating.

The problem with Rand is that she consistently fails to ask what is good for society – it cannot be said that what is good for the individual is therefore good for society, since all people do not act rationally in order to eliminate inequality, for example.

In fact, each person’s happiness stems from different points of view and even different economies – and if one individual wins, another loses. This sort of disparity cannot lead to a just society (a concept that Rand is extremely hazy on), because for her, people who cannot rationally determine what is good for them can still be good people.

Secondly, Rand’s objectivism is false because she believes that a self-serving point of view will give us an undeniable and universal good. Thus, for example, slavery is perfectly rational, since it serves the needs of slave-owners, who need cheap labor in order to produce goods.

Rand would have us believe that all men act rationally (that is, in their own self-interest), and therefore every concept that is based on rationality will be universally accepted. There is extreme danger in promoting self-interest as a universal concept.

Rationality must depend on society, and the norms that it accepts. However, rationality cannot be transformed into a universal standard. It is perfectly rational to a murderer that he kills people; he may even enjoy it. But is it good? Morality cannot be relativistic.

Consequently, rationalism is based on the perception of reality; it is not the logical understanding of what reality actually is. Thus, Rand’s notion of morality does not rise above self-centeredness and therefore cannot be correct.

Plato, on the other hand, provides a far more cogent and useful definition of moral obligation. For Plato, such an obligation involves the description, study, and observation of morality in human action and human society.

Plato also gives centrality to the idea of happiness, as does Rand; and he calls it the highest good, which he identifies with God. Thus, moral obligation for Plato is for the individual to free himself, through his actions, and use virtue and wisdom to become like God.

However, Plato does not get carried away with this mystical line of thought; he recognizes and encourages the use of logic, for in his philosophy there is no place for those opinions and pleasures that cannot be freed from passion. With a view to Rand, we find that her entire philosophy is based on pleasures that cannot be freed from passion.

It is the stress on virtue, which he considers essential to human happiness, that greatly elevates Plato’s philosophy, since it is from virtue that important social concepts arise, namely, wisdom, courage, temperance, and justice.

Further, Plato does not reduce the idea of virtue to its practical applications (something Rand is consistently guilty of). He abandons the utilitarian view and instead attaches to virtue an independent value, which lends virtue a greater worth.

Therefore, a person should strive to be virtuous, within the context of a society that likewise has virtue for its objective – because it is through this striving (both on the individual and societal levels) that morality can be established and maintained. Next, Plato defines the state as the larger man; he models it on the individual soul. This is the complete opposite of Rand’s notion of society as the place where rational self-will is practiced.

Thus, Plato’s society is infinitely more moral and just than Rand’s, because there is no room for “selfishness” in it. In fact, Plato subordinates private interests to the good of the whole. In this way, he allows room for concepts such as justice and freedom, which are not merely adjuncts of someone else’s self-interest.

Therefore, we see that Rand’s philosophy is constructed entirely around the idea of rationality, and for her morality is only a choice (implying that there are other choices).

This equivalence of rationality with morality is false, since rationality is not universal. Plato, far more cogently, tells us that morality hinges upon justice, wisdom, courage, and moderation, which can only function within society. In short, Plato is correct because he goes beyond self-interest in order to define morality, which, he tells us, concerns the good of the whole rather than the individual.

 

The photo shows, “Virgil Reading the Aeneid to Augustus and Octavia,” by Angelica Kauffmann (1741–1807).

Kobo Abe: A Perspective

The novel, The Woman in the Dunes, by Kobo Abe, is a work that exists on various levels. The most immediate one is the mythic structure that becomes an integral part of the novel. In fact, the entire premise of the novel depends on the Classical myth of Sisyphus, and the journey into Hades; however, Abe is not content with merely retelling ancient myth. Rather, he takes this myth and transforms it into a viable parable for modern life.

The entomologist, Jumpei Niki, finds himself in a strange village, while looking for a rare beetle. Thus, from the very beginning, we have a journey, in which the hero seeks to find something unique and rare – very much like Jason seeking the Golden Fleece. Before long, Jumpei finds himself relegated to the pit, by the villagers, where he must remain, with only a woman as his companion, whose job it is to perpetually shovel the sand to keep the village and herself from being buried alive.

This incessant struggle to keep the ever-encroaching sand at bay certainly reflects the myth of Sisyphus, who must labor and toil to roll a huge rock up a mountain, only to have it roll back again. However, Abe transforms this myth intrinsically.

Whereas the work of Sisyphus is pointless and meaningless, what Jumpei and the woman do is extremely meaningful and useful – they are keeping themselves and the village alive. Thus, their job is similar to life itself, which finds meaning in the most mundane of existences: “In the final analysis, I rather think the world is like sand. The fundamental nature of sand is very difficult to grasp when you think of it in its stationary state. Sand not only flows, but this very flow is sand.” Therefore, sand is the force of nature that will ultimately destroy the individual.

Thus, despite the absurdity of life – represented by the mindless task of shoveling sand day and night – there is also meaning in what we do. In effect, there is the important idea that we must create meaning in life, despite the fact that we may find ourselves trapped in a situation that in and of itself is entirely devoid of meaning.

From a larger perspective, the sand represents not only the force of nature that will destroy us all in the end; it also symbolizes the encroaching of a world that is alien to Japanese culture. The sand is the influence from abroad that must be continually thwarted in order to preserve that which is inherently Japanese. Again, Jumpei’s and the woman’s job is not entirely without meaning – for they are both guardians who must preserve not only their village, but also by extension, Japan.

It is this larger perspective that transforms the novel into a grand parable about the usefulness of life. A parable by its very nature must teach a valuable lesson, by way of a narrative. Thus, the tale of Jumpei and the woman, trapped in a pit, forever shoveling sand, is a parable for human existence itself.

We human beings are also trapped in life, in this world. Often we recognize our lives to be meaningless and absurd. But it is our effort and our will that transform our lives into an existence full of meaning. And this transformation is the result of our labor – just as Jumpei and the woman must labor daily and ceaselessly in order to live.

Thus, Kobo Abe’s The Woman in the Dunes is a work that transforms myth, and in doing so becomes a parable about the meaningfulness of life – and how we achieve this meaning through our own will and effort. It is ordinary life that is heroic, in the end, and what we achieve transforms not only the world outside us, but it transforms us as well.

 

The photo shows, “A Woman Reading” on a postcard, by Yumeji Takehisa, Showa Era, ca. 1930s.

In Response To Psalm 50

Psalm 50:3: “Our God shall come, and shall not keep silence: a fire shall devour before him, and it shall be very tempestuous round about him.”

 

September

Seven embers of thought enflamed,
a mind on fire, in a wooden frame.
That single wick of a candle, broken
into a shaft of smoke, crying
molten words unspoken.

The Ether

The ether,
suspended above the clouds
like a sunset whittled down
to its final shavings,
sucked up by the moon
into the vacuum of high noon,
where the echoes go on raving…
Raving…
Raving.

 

Airships

Blowing leaves,
motion without sound,
airships breaking heaven
where only silence is unbound.

 

Cosmin Dzurdzsa is the senior editor of The Post Millennial.

The photo shows, “King David Playing the Harp,” by Gerard van Honthorst, painted in 1622.

The Very Idea Of Technology

Whenever people are trying to define the modern age, there’s an inevitable phrase that gets tossed around. We hear it all the time – “We are an age of technology.”

And when people are asked what this phrase means, they invariably generate a list – cars, televisions, space probes, computers, the microchip – all things that were mostly science fiction just a hundred years ago. How did we come so far, so quickly?

But are we technological simply because we have more gadgets than, say, the ancient Egyptians, who, after all, did build the pyramids? Our culture is nevertheless different from that of the ancient Egyptians. How so?

Our age is technological not because of gadgets, but because of the idea of technology. The gadgets are a mere by-product. The way we think is profoundly different from all previous human civilizations.

We perceive things in a systematic way. We like to build conceptual structures. We like to investigate and get at the root causes of things. We like to figure out how things work. We see nature, the earth, the universe, as a series of intersecting systems. And this difference is the result of technology.

Essentially, we are dealing with two Greek words: techne and logia. Techne means “art,” “craft,” or “handiwork.” But logia is more interesting. It means “account,” “word,” “description,” and even “story.”

It is the root of other important words in English, such as “logistics” and “logical.” And it even reaches into the spiritual realm, where “Logos” is intimately connected with the mystery of God in Christianity, where God (Logos) is made flesh in Jesus Christ.

Therefore, technology is not really about gadgets. The word actually means “a description of art,” or “a story of craft, handiwork.” Anything we create is technology, be it the microchip, a film, a novel, an airplane, or a poem.

But this is only the first layer. We need to dig further. Why do we use a Greek word in the first place? This question lets us dig right down to the foundations.

The word is Greek because the idea is Greek. This is not to say that other cultures did not have technology; they certainly did; the Pyramids are ample proof of that, as are the Nazca lines in the desert.

However, we have already established that technology is not about gadgets, or objects that we create. It is a particular mind-set.

Technology is visualizing the result, or perhaps uncovering that which lies hidden within our imagination. It really is still about giving an account of art, about what we can do with our minds.

But how is all this Greek?

The idea of technology was given to us by one specific person – the Greek philosopher Aristotle (384–322 BC).

At the age of seventeen, Aristotle found himself in Athens, listening to the already famous Plato (428–348 BC).

But the pupil would become greater than the master. Interestingly enough, Aristotle too had a famous pupil – Alexander the Great. Aristotle certainly had the ability to transform the way people thought – down to the present.

It was Aristotle who stressed the need not only for science, but for a conceptual understanding of science. It was not enough just to be able to do things, such as the craftsmanship passed down from father to son in his own day, as it still is in many parts of the world today.

It was important to understand how things were; how they functioned the way they did.

It was Aristotle who taught us to break down an object into its smallest parts so we can understand how it is built and how it operates. Where would science be today without this insight – which we now take as common sense?

But before Aristotle, it was not common sense. The common sense before his time was to accept things the way they were, because the gods had made them that way, and who were we to question the will of the gods. This was the pre-technological mindset.

Aristotle, like Plato before him, taught that nature and human beings behave according to systems that can be recorded and then classified, and understood and then applied. These categories provided mental frameworks within which we could house our ideas.

Therefore, if nature is a system (and not mysterious and unknowable), then it can be understood. And if it can be understood, it can be controlled. And if it can be controlled, then we can avoid being its victims.

Our ability to classify, categorize, and explain – in short, our technology – is the invention of Aristotle. Before he came along, we were only groping in the dark – if we dared grope, that is.

 

The photo shows, “Cyclist Through the City” (“Ciclista attraverso la città”), by Fortunato Depero, painted in 1945.

The Humanities And Language

It is often assumed that the discipline of the Humanities involves anything and everything that cannot properly be classified as a science. It is also commonly assumed that language is simply a method of communication – so that flapping one’s arms is the same as speaking; or, one may draw a picture, since a picture is worth a thousand words, as the adage tells us.

Before proceeding further, perhaps it is best to define our terms so that we do not get bogged down with assumptions.

Turning to language, we need to understand it as thinking more than communication. The founder of linguistic philosophy (Wilhelm von Humboldt) tells us that language is the expression of thinking peculiar to a people, even the most primitive of people, those closest to nature, as he puts it. Communication is only the simplest, basic level of linguistic use.

The most intensive use is the generation of ideas. The philologist Max Mueller continues Humboldt’s description when he describes language as “the outward form and manifestation of thought.”

And Humboldt further defines language as the medium through which humanity encounters reality – “Man lives with his objects chiefly as language presents them to him.”

The philosopher, Ernst Cassirer, succinctly described language as first the symbolic rendering of expressions and second the engendering of discursive thought; or, in other words, reason.

Thus language is the principle which serves to link together complexity in order to produce meaning, or what may be called abstract thought. In brief, for Cassirer, language is the entelechy of knowledge.

This obviously means that language has more than a denotative function – it is more than simply communication.

To quote the Danish linguist Louis Hjelmslev: “A language is that into which all other languages, and even all other conceivable language structures may be translated. In language, indeed only in such, can the inexpressible be dealt with until such time as it is expressed.” Language, first and foremost, is ideas.

Given the intimate association of language with thinking and knowledge – why do we hear the teachers of language referring to it as a “form of communication?” What purpose does this extreme simplification fulfill?

Having briefly defined language, we may do the same for the humanities. Again, we encounter confusion. The tendency nowadays is to view the Humanities as anything that is not science; and this confusion continues into areas which veer into science (like anthropology, psychology and sociology).

So, what are the Humanities? In a very straightforward way the Humanities have always meant the study of Greek and Latin – that is, the discipline of the Humanities has always been tied with the learning of language – because it was (and one hopes still is) believed that by learning a language, in a disciplined and structured fashion, a person became educated and refined.

Thus the Humanities are based upon the understanding that education is only possible through language. Therefore, the Humanities are not simply whatever is not science – they are very specifically education in language, and in those disciplines that promote language – namely literature, philosophy, biography and history. And it is here also that we have the very history of education.

But we now speak of skill, rather than education, and language is simply another tool to further the demands of the labor market, rather than to promote the formation of a good human being – the traditional goal of education. Skill is not about education – it is about labor and production.

Education is about building the good human being – or about the esthetic, moral and intellectual nature of humanity. Skill is about the material environment and its conquest. Skill is about bondage (the demands of labor). Education is about understanding the exercise of freedom.

And then there are countless falsehoods that permeate teaching institutions. The worst among them is the notion of “learning styles,” and the absurd notion of “right-brain” and “left-brain” learners. Study after study has amply demonstrated that there is no such thing as “visual,” “auditory,” or “kinesthetic” learning.

Nor does the brain function in left and right compartments. And yet, these false notions are so popular in educational institutions – and worst of all, entire pedagogies are built around these falsehoods. Why? As researchers recently observed in an extensive review in the Journal of Psychological Science, “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing.”

Disturbing because students are being taught based upon false assumptions. Is an educational institution a place where pop-psychology should be followed?

And yet the popularity of these views in pedagogy is enormous. And the literature is enormous. But it is literature produced by the non-specialist – by the amateur. Why do teachers follow these falsehoods?

And recent studies also tell us that the only way possible for the brain to learn anything is through language. Thus, the physical brain is Humanistic. It is built primarily for language, for thought, for ideas. And the world that we live in, the labor that we do, is a function of thought, of ideas. The world that we inhabit is the product of Humanism.

Thus to confuse Humanism with anything other than language is to deny the importance of thought.

 

The photo shows, “Le quai aux fleurs,” by Marie-François Firmin-Girard, painted in 1875.

Ways Of Persuasion

Aristotle conceived of three major types of rhetorical appeal. These modes of rhetoric, otherwise known as “proofs,” were the pillars of persuasive dialogue back in ancient Greece, and their foundation still holds to this day.

All rhetorical arguments can more or less be categorized under (or described as a blend of) Pathos, Ethos, or Logos. Regardless of whether an argument is conveyed through speech or writing, these three proofs are undoubtedly the most useful means a speaker has of convincing their target audience of their desired point or belief.

In other words, if rhetoric is a battle of persuasion, these are the most powerful tools in the rhetor’s arsenal. However—there is a fourth proof that is oftentimes overlooked. While it can be assumed that the majority of readers are more than likely familiar with Aristotle’s three rhetorical proofs, very few people in my experience (besides rhetorical scholars like myself) have come across the elusive fourth proof: Kairos.

My intention in the remainder of this article is to refresh readers on the three original rhetorical proofs, and to introduce many of you to the fourth (and my personal favorite) rhetorical proof.

 

Rhetoric

The art of persuasion, and how to effectively impact people through communication. Seeing as Rhetoric is the cornerstone of this whole piece, I figured I would provide my short definition that has helped guide me in my studies.

 

Pathos

Aristotle categorized Pathos as an appeal to emotion. Essentially, Pathos refers to anytime a rhetor attempts to tug at their audience’s heart-strings, so to speak. When a speaker brings up a tragic story from their past, references morality or the distinction between “right and wrong”, or even if they were to cry at the podium: this is all Pathos.

However, there are other forms of emotional appeal besides sadness or guilt. A speaker who invigorates their audience to join a cause is engaging in pathos. A comedian who makes their audience laugh is engaging in pathos. The politician that tries to make their community angry or scared enough that they’ll follow them is engaging in pathos. Pathos holds dominion over any and all emotion.

 

Ethos

This appeal is the easiest to break down in my opinion. Ethos simply refers to a speaker’s credibility. An argument that is founded upon one’s own trustworthiness or experience is grounded in ethos.

For instance, a doctor that tries to convince me my eating habits are unhealthy is engaging in ethos, as her word is rooted in knowledge and practicality. When you were learning to drive, and your father told you “listen… I’ve been driving for 40 years, I know what I’m doing so let me teach you…”—that was ethos.

 

Logos

Logos is the Greek word for form, meaning, and structure. Logic, in other words. Logos refers to the strength of a rhetor’s argument simply based upon its logical resiliency. If a claim is so airtight from a logical standpoint that it cannot be truthfully rebutted, it is an impervious argument in regards to logos.

When you’re going through a breakup, and your significant other breaks down all of the reasons why your partnership will no longer work—that’s logos. Any logical string of thoughts with the intention to persuade others falls under this category.

 

Kairos

Finally, we reach the least talked about of all persuasive proofs, Kairos. The term refers to the art of timeliness, and the actual timing of an argument.

The technique of this proof is to deliberately choose when your line of communication will be delivered, as a means of strengthening your argument. A man about to propose to his girlfriend waits until the sun has set and the mood is completely right before popping the question—this is an act of kairos.

Waiting for the opportune moment can ensure that your point will be well received, and is thus a hugely effective move by any rhetor. While this proof is often overlooked, its rhetorical potency shouldn’t be understated.

 

The photo shows, “The Irritating Gentleman,” by Berthold Woltze, painted in 1874.

Animals And Humans In King Lear

Scattered throughout Shakespeare’s King Lear are references to animals. These references serve as points of comparison, and affinity, with the human animal. The purpose of these references is to highlight human existence on the appetitive level – that which solely feeds and nurtures the body, without concern for concepts that drive human society, such as ethics and morality.

In fact, it is for this very reason that Lear is turned out into the wild heath, very much like a feral beast, wherein he can enact his tragedy, free from all associations with the constructs of civilization.

In effect, the animal references in King Lear emphasize humankind’s affinity with all living things, in that each of us is involved in a cycle – birth, begetting offspring, death – life outside civilization, life as the instinctual drive to breed and survive.

As well, it is important to realize that human society is also a construct of superfluity in that human beings tend to accumulate wealth and power, without thinking about why they need to carry on in this way.

This is precisely the painful lesson that Lear learns on the heath. He has been turned out into the storm like some mad, unwanted animal. He, the king, is powerless before nature. All his wealth, all his influence, even his fifty companions that he kept with him at all times as a show of his might – are all stripped away. On the heath, he is no more than a lost, old man whom no one wants.

Interestingly enough, Lear the king, living in his court, was more appetitive, more driven by his own sense of power (since he could make or break the lives of his daughters, especially Cordelia) – more like an animal – than the human being that he becomes on the heath.

It is by suffering like a wretched animal, by being cast to the very lowest level of subsistence, that Lear learns about the truth of a human life, indeed the value of a human life.

It is by suffering that he undergoes purification, where all superfluity is stripped from him, and he becomes a man who finally understands the value of love and compassion. And the animals teach this lesson to him:

Poor naked wretches, wheresoe’er you are,
That bide the pelting of this pitiless storm,
How shall your houseless heads and unfed sides,
Your loop’d and window’d raggedness, defend you
From seasons such as these? O, I have ta’en
Too little care of this! Take physic, pomp,
Expose thyself to feel what wretches feel,
That thou mayst shake the superflux to them,
And show the heavens more just (III.iv.28-36).

Despite the darkness that pervades the entire play, King Lear is about the discovery of love. All too often a lifetime will go by before we understand the reality of love.

In fact, the entire play is structured around the idea of inversion – things that we assume are normal and therefore proper (such as Lear the King parceling out his kingdom to the daughter who loves him the most) – are twisted and inherently wrong, if not evil.

By his own action, by trying to see which daughter loves him the most, Lear unleashes the tragedy that shall consume him in the end. Lear the “wise, old king” is in fact a foolish old man – for he actually believes he can discern true love by initiating a game – “Let’s play who loves Dad the most.”

But Cordelia refuses to play. She knows that true love is not contained in mere words, but is in fact found in actions and deeds – something Lear himself bitterly learns:

No, no, no, no! Come let’s away to prison:
We two alone will sing like birds i’ the cage;
When thou dost ask me blessing, I’ll kneel down
And ask of thee forgiveness. So we’ll live,
And pray, and sing, and tell old tales, and laugh
At gilded butterflies, and hear poor rogues
Talk of court news; and we’ll talk with them too –
Who loses and who wins; who’s in, who’s out (V.iii.8-15).

Birds in a cage are freer than kings at court. They are completely without guile and deception. The inversion continues, for the cage is the freest place for Lear; it is there he finds truth, and it is there that he finds true love that Cordelia bears for him.

Of course, it is in the nature of Shakespearean tragedy that death comes precisely – and only – when complete realization is achieved and truth laid bare.

Thus, when Lear finds Cordelia, it is too late. Death takes away the very person that Lear sought throughout the play – someone who would love him without hope for reward.

And it is at this very juncture that we have the strongest evocation of the parallel between human existence and animals – for as living creatures we share the same fate – some will die soon, others a little later, but humans and animals – indeed all life – are bound to the cycle of life and death:

Why should a dog, a horse, a rat, have life,
And thou no breath at all? Thou’lt come no more,
Never, never, never, never, never (V.iii.307-309).

The finality of “Never” rings like a knell upon all human hopes to be greater and higher than what we really are – human animals.

It is this question that Kent asks as he sees Lear carry in the dead Cordelia: “Is this the promis’d end” (V.iii.265).

When the play ends, we must answer Kent and say, “Yes. This is the promised end – for death makes animals of us all.” And it is to Kent that we must leave the final word: “Break, heart, I prithee, break” (V.iii.314).

 

The photo shows, “King Lear, Act I, Scene I (Cordelia’s Farewell),” by Edwin Austin Abbey, painted in 1898.

Thomas More In His Utopia

Thomas More’s Utopia is a work that is a complex critique of sixteenth-century northern European society. This critique is accomplished by way of postulating various ideal conditions that exist on an imaginary island called Utopia, and then these conditions are contrasted with the conditions prevalent in the Europe of More’s day.

One of these ideal concepts that Utopia gives us is the description of how perfection has been achieved, namely, through the eradication of pride – the root of all evil in humankind.

By the beginning of the sixteenth century, the Renaissance was coming into its own in France, Germany, the Netherlands, and England (although it was waning in Italy), by way of humanist thinkers.

These northern humanists are sometimes called “Christian humanists,” in that they believed that it was a human being’s privilege to seek happiness in this life, and that this true happiness was based on reason; however, this happiness was only truly attained by divine grace.

The northern Renaissance particularly focused on a program of practical reform in a wide range of areas, including religion, education, and government. But there was an inherent tension in this position, since often these humanist reformers were also members of the political establishment – in brief, most were courtiers.

The key ideology of the Renaissance was a conscious turning away from scholasticism and the espousal of classical models. But this turn to the Classics was not a rejection of Christianity; rather it was an attempt to find material with which to reinterpret the essential message of Christianity – the destruction of the pride that leads to the estrangement of man from God and man from man.

In fact, for the Christian humanists, pride was the root of all evil; it was the grand paradigm wherein the Fall of Man and his salvation could be explained.

Thus rhetoric (the study of communication and persuasion) was associated with eloquence – and to a humanist, eloquence presupposed a nobility in the communication of one’s ideas as well as wisdom, as eloquence was the outward sign of inner wisdom. Beauty was derived from the Classics and wisdom acquired from Christianity.

Therefore, for the humanists, reason was innate in man’s soul, and through reason man could free himself from the grosser bonds of pride and become a creature not far below God himself.

Of course, the program of reform was greatly enhanced by the availability of the printing press. Thus, Desiderius Erasmus wrote continually for the printing press, and the humanists were generally able to promulgate their ideas (and propaganda) more widely than had been previously possible. They also utilized Latin, which served as an international language of Europe. It is within this context of Renaissance humanism that More’s Utopia needs to be read.

The important theme within this context is the use of pride both as an example of what is to be avoided in order to arrive at the perfected state, and as a tool to critique the idea of society itself, which is built upon the largely evil manifestations of pride. More attempts to put his humanist vision within the parameters of practical application, by way of social critique.

In Utopia three characters converse: Thomas More appears as a fictionalized version of himself; Raphael Hythlodaeus is the fictional traveler to exotic worlds; and Peter Giles, More’s young friend from Antwerp, throws in an occasional word or two. The premise of the work seeks to dispense with the entire order based on private property, which is an extension of greed and rooted in civic pride.

More also takes the liberty to suppose a commonwealth based on the pessimism that there is a real need for secular government, which keeps fallen mankind from hurtling into the vortex of perpetual violence.

Of course, the prime source of violence among mankind is pride: sinful human beings have an insatiable desire for things, and this desire translates into pride when those who have more look down upon those who have less – social pride.

Thus we have in Utopia a play on how life might develop in a state that tries to balance human depravity of pride and a communal system that aims to check the destructive individualism of corrupt human nature.

Raphael entertains us by bringing our experience in the ordinary world up against an ideal that we cannot really reach, but one that has about it a certain plausibility. Utopia is a mirror held up to nature, and we see ourselves reflected in it.

The key question that Utopia asks concerns the relationship between our possessions and our souls. Are the conspicuous illusions of wealth (pride) a type of injustice? They are, according to Utopia: “In fact, when I consider any social system that prevails in the modern world, I can’t, so help me God, see it as anything but a conspiracy of the rich to advance their own interests under the pretext of organizing society.”

If pride is measured by a sterile metal like gold, are the people who wear chains of gold not prisoners of their pride? And is it possible, in a zero sum world, where one person’s gain is another person’s loss, that the people who sport such finery are not in fact beggaring others? Thus the root of man’s injustice to man is pride, a conspiracy of those who seek to further their own egos.

If we measure worth by possession, are we not driven by a peculiar and implacable logic to put people to death for theft? More’s work raises this very fundamental question in regard to pride: what is it about possession that distorts vision and makes one person feel better than another?

The six-hour working day in Utopia also represents a perpetual check on the tendency of an acquisitive society to turn human beings into beasts of burden, to be worked as if they had no claim over themselves. For life is an end in and of itself, and not merely an instrument to be used for someone else’s gain.

Without pride, the force of such an imperative to use other people’s lives for personal gain is completely blunted. Thus for More, the root of human depravity is pride, and by eliminating private property, the root of civic and social pride is vanquished.

However, it is important to keep in mind that Utopia, from the beginning is an artificial construct. Some 1760 years earlier, Utopus had dug a channel to separate Utopia from the corrupting lands nearby. As the wise lawgiver, he imposed laws on people who could not or would not create those laws themselves.

But Utopia is afloat in a world that is not Utopia: the fear of contamination is very much prevalent. Thus even if civic and social pride within is eliminated, it can still come from without.

This is why the Utopians give great weight to military matters, for a virtuous nation unarmed is quickly swallowed by the voraciousness of the outsider. Thus, there are massive walls around their towns on their island.

Since pride of possession has been vanquished, no locks bar Utopian doors, which open at a touch. The only reason Utopians can imagine the need for privacy is if they had pride: to guard what others do not have. Therefore, conformity is the rule of every house: “When you’ve seen one of them, you’ve seen them all.”

Raphael believes societies other than Utopia are merely conspiracies of the rich. These societies are realms of greed and pride. And pride causes men to measure their welfare not by their well-being, but by having things that others lack, which is irrational and unchristian. Only in Utopia have pride and all its attendant vices been eviscerated from society.

It is because of this evisceration that Utopian polity rests upon common ownership. Through this idea, More could have it both ways: he could explore the implications of a communal way of living without necessarily proposing it, however much he may have felt emotionally or intellectually inclined towards it.

Raphael’s summation of the general advantage of the Utopian way of life betrays the reason for its attractiveness: although no man owns anything, all are rich – “for what can be richer than to live with a happy and tranquil mind, free from anxiety?”

In effect, the Utopians’ repudiation of private property is a remedy that frees them from pride and allows them to live a life that is at once religious and secular, private and public.

Consequently, their world consists of: equality of all things among citizens; love of peace and quiet; and contempt for gold and silver. In short, they have imported the ideals of the monastic life into political and social affairs.

A large part of Book 2, then, describes the happy place freed from the vices of the real world. But here we see that pride is also used to critique the Europe of More’s day. As happy as Utopia is, it is also “No place,” a land that will never be.

At one level, particularly with respect to geography, England and Utopia share a shadowy identity. Utopia is an island separated from the continent by a channel; Amaurotum, its capital city, together with the tidal river Anydrus and the magnificently arched stone bridge across it, resembles London and the Thames, and the houses reflect those in England.

Thus it is not long before the Utopian illusion dissolves into the reality of England and Europe – places where pride certainly holds sway, and governs all aspects of civil, private, political, and social life.

The importance of pride comes through strongly in Raphael’s description of the Utopians’ distrust of treaties. In fact, the Utopians never make treaties with any nation, because “in those parts of the world treaties and alliances between kings are not observed with much good faith.”

He then draws a satiric contrast with Europe, meaning the exact opposite of what he says: “In Europe, however, and especially in those parts where the faith and religion of Christ prevails, the majesty of treaties is everywhere holy and inviolable, partly through the justice and goodness of kings, partly through the reverence and fear of the Sovereign Pontiffs.”

Of course, the reality in Europe is otherwise: pride makes all treaties cheap. Thus Utopia gradually describes the polity that an optimistic humanist might envision for England in the context of the contemporary historical Renaissance, through the eradication of pride.

However, the perfected state of Utopia is not without its contradictions, and these contradictions arise from the paradox that lies at the very heart of the book: that rational action can give rise to unreasonable consequences; the Utopians’ most determined efforts to fulfill the most laudable of intentions often meet with failure.

The most striking example of this is the war they fight on behalf of the Nephelogetes against the Alaopolitans – the Utopians are being good neighbors. Thus the Utopians went to the assistance of the Nephelogetes, who claimed that they had suffered injustice at the hands of the Alaopolitans under the pretext of law.

The outcome was catastrophic: “…whether right or wrong, it was avenged by a fierce war. Into this war the neighboring nations brought their energies and resources to assist the power and to intensify the rancor of both sides. Most flourishing nations were either shaken to their foundations or grievously afflicted. The troubles upon troubles that arose were ended only by the enslavement and surrender of the Alaopolitans. Since the Utopians were not fighting in their own interest, they yielded them into the power of the Nephelogetes, a people who, when the Alaopolitans were prosperous, were not in the least comparable to them.”

Thus, what people experience is often very different from anything they intend, desire, seek, or foresee. Does the eradication of pride really lead to freedom from all evil?

How is Utopian society kept from reverting to pride? Again, we see many paradoxes. For example, the suffocating constraints on individual liberty required to effectuate the Utopians’ attempt to secure more liberty and leisure for all, or the moral injustice of the rational justice by which they regulate numbers in their families and colonies.

The cost of eradicating pride is the deprivation of some portion of an individual’s will, however rationally or virtually that person might act. Utopia thus contains an inbuilt ambiguity; it represents to a large extent what More wished for, even while he saw that if it could be, which it never could, the human condition would remain essentially unchanged in its character and function.

This point brings us to examine religious pride in Utopia. The essential feature of Utopian religion is that it is not definitive, and it resides in the responsive condition of mind rather than an elaborate and arbitrary dogma.

Its main precepts were instituted by Utopus, who allowed for a range of beliefs and provided for the possibility of wise doubting: “On religion he did not venture rashly to dogmatize. He was uncertain whether God did not desire a varied and manifold worship and therefore did not inspire different people with different views.”

The Utopians must, however, accept two fundamental tenets: that the world is governed by providence, not chance, and that the soul is immortal and will receive rewards and punishments after this life. To believe otherwise is to fall from the dignity of human life.

In practice, they let their faith instruct their reason, so that they are capable of modifying the rational rigor of their epicurean philosophy to allow for the justified existence of their ascetic religious order as well as those who wish to enjoy honest pleasures in marriage.

Thus, for the Utopians, religion is not a source of pride: they cannot say that their belief is better, truer, more righteous than any other belief – a position impossible in the Europe of the day, where to doubt the basic tenets of Christianity amounted to heresy.

This point is highlighted if we consider that the Utopians profess a willingness to contemplate the possibility that all their assumptions about God and religion may be false: “If he [a Utopian] errs in these matters or if there is anything better and more approved by God than that commonwealth or that religion, he prays that He will, of His goodness, bring him to the knowledge of it, for he is ready to follow in whatever path He may lead him. But if this form of a commonwealth be the best and his religion the truest, he prays that then He may give him steadfastness and bring all other mortals to the same way of living and the same opinion of God – unless there be something in this variety of religions which delights His inscrutable will.”

Thus we see that the Utopians’ prayers manifest immediate faith and hope, while acknowledging doubt about the verity of faith itself. It is this doubt, therefore, that eradicates pride, since one faith system is no truer than another.

Of course, just a year after Utopia was written, Martin Luther nailed his ninety-five theses to the door of Wittenberg Church and began the Reformation, which would plunge Europe into bloodshed and would ultimately cost Thomas More his life. European reality and Utopian idealism stand at opposite ends of what could be and what really is.

 

The photo shows “The Family of Sir Thomas More,” by Rowland Lockey, painted in 1592.

Roman Dictatorship: Some Observations

The relevance of the relationship between the powers granted by states of emergency and the transition to authoritarianism and dictatorship is perhaps self-evident to any student of political science aware of history and contemporary events.

We will define authoritarianism broadly as a regime maintaining obedience through fear of coercion, so that the foundation of the state is not legal authority but power exercised through informal dictate. Dictatorship will be defined as the exercise of a like dictate, except under a public, systematic and formalised power rather than a hidden one.

Both types of regimes are characterised by elites of converging interests as well as the use of crisis in order to justify the power of the regime, and the loss of individual liberties. As such, these states have been characterised as crisis states which function in a nigh perpetual state of emergency despite professing outward belief in rights and the will of the people.

Thus, the struggle to preserve the sanctity of sovereign laws over arbitrary mandates of power depends precisely on the separation between normal legal and political procedures and those of the state of emergency.

It is precisely this struggle and conundrum over the distinction between the state of emergency and the normal rule of law which is made evident by the historical case of the Roman Republic and the transformation that occurred in its state of emergency procedures.

This will allow us to demonstrate that transformation as exemplified by the early institution of the Dictatorship and the Senatus Consultum Ultimum, what they reveal of Roman politics and law in terms of the blurring of the previously strict line between the state of emergency and normal legal procedure, taking special note of the incident of the Catiline Conspiracy.

 

Roman Dictatorship

Arguably the original and thus most famous state of emergency in western political history, Roman Dictatorship presents the first ever attempt at addressing what in political philosophy ever since Aristotelian thought has been the problem of equity: the equal application of the law in all circumstances may be inherently unjust or inadequate, hence calling for a temporary suspension or alteration of laws and legal procedure.

In principle, the dictatorship in Republican Rome was a magistracy like no other, exercising for a limited period of six months the power of imperium, that is, supreme administrative and coercive power, in order to deal immediately with an external state of crisis, though it was restricted both legally and through political and religious precedent.

Nevertheless, the description alone cannot do the office justice unless it is situated in its proper historical, legal and functional contexts before one can even approach its ultimate political dimension.

The first paradox brought about by trying to put Roman Dictatorship in a proper historical context is how it could have arisen in Rome in the first place. It is no great mystery that the city of Rome, ever since the foundation of the Republic in 509 BC, was wary of monarchy, which it had violently deposed, so much so that the word ‘king’ itself was an insult and much despised.

Yet, in the office of the dictator one seemingly finds the closest approximation to monarchy that could be conceived in a republican government, as the dictator wielded the war powers of the monarchy, superseding those of the consuls and initially having command over all other magistrates.

If one takes Livy’s Histories at face value, the mere purpose of the office as a means of averting crisis and fulfilling a function which could not be carried out by the elected magistrates provides proper justification for the installment of the office in 500 BC, less than ten years after the institution of the Republic in the first place.

The dates themselves cause an issue, as it would have been a precarious action indeed for the people of Rome to have reinstated, in every way but name, the monarchy they had so desperately deposed less than ten years before.

In The Origin of Roman Dictatorship, D. Cohen seeks to explain and rationalize the origins of this extraordinary office both in terms of an interregnum, positing that it served as a transition from monarchy to republican government, and in terms of a religious function in the early Republic which required the highest authority. Similar offices were to be found in the other city states of Latium, such as Alba and Caere, though with year-long terms of office.

In particular, the act of religious purification last performed by a dictator in 363 BC, the driving of a nail ceremony (a religious rite likely of early Indo-European origins), carried out by Manlius Imperiosus in response to a pestilence, demonstrated the nigh-sanctity of the office, above even that of the Pontifex Maximus (head priest).

The sanctity of the office is further exemplified by the custom of silence which other magistrates were meant to obey before the dictator, and in conjunction with the dictator’s role as a saviour figure, one can understand how the Roman people accepted the existence of such an office in the early years of the Republic.

Beyond this period, and especially following the course of the Punic Wars in the Middle Republic, the office of the dictator lost more and more of its independence vis-à-vis the Senate and its authority to override the imperium of the consuls, and eventually the tribunes could veto the dictator’s measures just as they could those of the consuls.

Having approximated the origin of Roman Dictatorship, it is necessary to situate it in terms of Roman constitutional law. The procedure for appointing the dictator appears at first sight as a simple consultation, whereby the Senate would agree that a state of emergency existed and give leave to the consuls to appoint a dictator in order to deal with it.

The dictator’s powers were thus directed at the specific state of emergency at hand, whether an insurrection or an invasion, and the dictator was also charged with recruiting, assembling and leading the army to deal with the threat. At this point, the legal aspect of the appointment is significant, because after the consuls had chosen a candidate for dictator, the Lex curiata de imperio granting the dictator his power of imperium had to be passed by the assembly of the people of Rome.

The dictator’s legal mandate was thus a popular one, and not senatorial. Indeed, the connection of the office of dictator to the people was also represented in its original title ‘magister populi,’ master of the people, or more practically of the infantry in war, whereas the dictator’s second in command was the ‘magister equitum,’ or master of the horse.

Leaving these military vestiges aside, it must be noted that after carrying out the task demanded of him, the dictator was supposed to relinquish power, and after the Lex Repetundarum of 300 BC he could be put on trial directly if he had overstepped his mandate in purpose or time.

What this kind of legal precision shows is the ability of Roman law to adapt to states of emergency in a way that preserves the rule of law even when limits to power are temporarily suspended. For if one is to believe in the rule of law, a principle by which laws are universally applicable and public, and their power vested in the state and not in individuals, one must have precisely such limitations and controls over states of emergency.

In addition to formal and legal checks on the powers of the Roman Dictatorship, Nomi Claire Lazar points out in her essay Making Emergencies Safe for Democracy: The Roman Dictatorship and the Rule of Law in the Study of Crisis Government that informal controls over the dictatorship were just as important: Rome’s strong republican political culture, the choice of appointees (men with a long and spotless record), and the Senate’s control over the treasury.

Nevertheless, Lazar also points out that 7 of more than 90 dictators passed legislation, such as the Lex Aemilia of 434 BC, which enforced term limits on censors, and the law of 367 BC granting plebeians the right to hold the consulship, passed under Camillus (five times dictator and named second founder of Rome), showing that dictators could favour reform while in office.

Roman Dictatorship, then, as a state of emergency shows a surprising level of continuity and formality on the part of Roman law and politics in dealing adequately with states of emergency, which is why all appointed dictators relinquished power after the crisis was averted, whether they were motivated by religious and political precedent, legal restrictions or simply belief in the SPQR.

As such, the self-appointment as dictators at the head of private armies by both Sulla and Caesar in the late Republic demonstrates not the lapse of Roman dictatorship into authoritarianism due to states of emergency, but rather the abuse of that old title to legitimize the illegal seizure of the state. One can thus conclusively say that it is not in the emergency powers of the Roman Dictatorship that the pitfall of autocracy can be identified.

 

The Senatus Consultum Ultimum & Catiline Conspiracy

However, the office of dictatorship was not the only response to states of emergency which the Roman Republic employed; there was another, in the form of the Senatus Consultum Ultimum, which was not enshrined in Roman law and had, at best, a loose definition of its limits.

Before an explanation of the emergency procedure can be made, one must take a step back and look at the institutional history of the Senate. Founded under the monarchy, the Senate was originally an advisory body to the kings, until it gained greater powers and the independence to make laws after the deposition of the monarchy in 509 BC.

Membership in the Senate came by consular appointment of ex-magistrates until 318 BC, when the plebiscitum Ovinium took this power away from the consuls and gave it to the office of the censor, with the condition that once elected, a magistrate was immediately a member of the Senate.

Thus, the Senate became less exclusive and at the same time gained some elective legitimacy, yet its members enjoyed a mandate for life, an electoral mandate for life which in and of itself is characteristic of oligarchical and elitist governments.

In addition to passing laws, the Senate enjoyed control over the treasury and state finances, as well as the ability to give consultations to the consuls which were not binding but by long precedent were obeyed.

It must be noted that the bureaucratic power of the Senate grew after the end of the Punic Wars, with the beginning of the Late Republic period, in order to keep up with the overseas territories that had been gained in Iberia, Sicily and the western Balkans.

Likewise, individual Senators became immensely wealthy landowners and property speculators due to the gains of the war.

It is precisely this more numerous and powerful Senate which, after the Punic Wars, first employed the Senatus Consultum Ultimum (SCU). Following the death of Tiberius Gracchus at the hands of an angry mob of patricians accusing him of trying to make himself king while he was passing reforms against huge agrarian estates, the Senate realised both that the Republic’s military resources were great enough to defeat any external threat, and that the disaffected mob of Italian refugees left by the wars constituted the only real threat of insurrection.

In other words, a state of emergency prone to giving power to the people through a popularly mandated dictator was insufficient and potentially dangerous to state sovereignty. The fear materialised in the form of Gaius Gracchus, the younger brother of Tiberius, who worked to finish the earlier reforms as tribune of the plebeians, and who kept an armed bodyguard.

In 121 BC the Senate first used the Senatus Consultum Ultimum to declare a quasi state of emergency, similar to martial law in modern terms, which gave power to the consuls to deal with the threat after one of Gaius’ bodyguards committed a murder. The consul Opimius used the army to apprehend and execute Gaius Gracchus along with several others, leading to a total death count of over 3,000 Roman citizens.

This went against the Lex Valeria and Lex Porcia, which forbade the execution of citizens without a trial, and trials without the right of appeal. When put on trial for his crimes, Opimius agreed that he had broken the law but justified his actions on the basis of the SCU, as it gave him greater powers of imperium under a state of emergency and with senatorial decree. Opimius’ acquittal would serve to set a precedent for future use of the SCU, despite its being neither a written law nor supported by any previous precedent.

Framing the legality of the SCU was difficult for the Romans themselves, not least Cicero, who helped expand and define its powers and even used it in the course of his consulship. Indeed, the way in which Cicero defined the SCU, by the example of Opimius’ act, was that magistrates could indeed overstep the written laws under senatorial decree in a state of emergency and in the defence of the country.

Yet the authority of the decree, and thus of the act itself, lay in the hands of the Senate, not the magistrates themselves, as their power depended on the Senate’s prior approval. In other words, the Senatus Consultum Ultimum meant that a temporary state of war was declared against internal threats to the Republic, but without a temporal restriction, a clear definition of what constituted an emergency, or the religious and political precedent that the Dictatorship had possessed.

The most scathing critique that one could make, and justly so, is that the SCU served as little more than a means of carrying out extra-judicial killing, as was demonstrated by the killing of another tribune in 100 BC, whose killer Cicero defended at trial 37 years later.

It would be an understatement to say that the SCU represents a direct step toward the extra-legal and political use of emergency measures, but before that judgment can be made one must look at the most famous case of the use of the SCU, which demonstrates to the fullest its legal limitations: the case involving Cicero himself, that of the Catiline Conspiracy.

The events are described most poignantly by Livy as follows: “L. Catiline failed twice in the consular elections. He conspired with the praetor Lentulus, with Cethegus and many others, in order to assassinate the consul and the senators, to oppress the Republic and set fire to the city. An army was prepared in Etruria, but thanks to Cicero’s watchfulness the conspiracy was discovered.” (Livy 102).

Although there is much to be questioned in this pithy description of events on the part of Livy, not least that it was a description made by Catiline’s enemies after his defeat, there is some truth to it. It is indeed true that Catiline failed to win elections and pass reforms, and that he plotted to overthrow the Republic during Cicero’s consulship with an army of 10,000 to 20,000 armed men.

The prominent politicians with whom he plotted were indeed caught with incriminating written evidence, proving the necessity of the state of emergency which had already been declared.

Yet it is Cicero’s response at this point in the course of events that is puzzling. Instead of having the prisoners immediately executed, Cicero proceeded to put their fate to a vote by the Senate, thus showing that he saw the act of executing them as founded on weak legal grounds, and furthermore that their fate was the responsibility of the Senate.

In response to this, Julius Caesar gave a speech calling for the life imprisonment of the conspirators, rejecting not the evidence of their crimes but the authority of the Senate to sanction their killing without a trial, even in a time of crisis. Caesar’s response cannot be seen as a rejection of emergency powers, but rather as a rejection of Cicero’s interpretation of Roman law, and especially of the Senatus Consultum Ultimum as giving the Senate authority to determine life and death in emergency situations.

Regardless, Cicero secured a majority vote in the Senate and had the conspirators executed, but when Caesar and several other senators tried to leave the senate house in order to protest the vote, Cicero had them threatened by his armed guards, an act that was illegal but permissible under the SCU. The last SCU employed was that against Caesar in 49 BC, which started a civil war that would bring about the end of the Roman Republic.

Ultimately, the SCU demonstrates exactly what one would expect from emergency measures granting limitless power to a single branch of government and robbing the others of their sovereignty, as it took away the authority of elected magistrates and gave supreme authority to the Senate.

The problem is obvious from a legal standpoint, as the Senate had the power to declare a state of emergency, determine the scope of threat needed to justify the measures taken, and judge the legality of events. In a historical sense, the SCU served as a means of permanently silencing political opposition, and the very threat of it created such fierce factionalism that it regularly incited murder.

Yet all this was carried out on the legal basis of a mere senatorial consultation for which there was not even a written law. Such extreme laxity of political procedure demonstrates directly how the laws and functions of the state can be hijacked by political elites and lead to authoritarianism and autocracy.

 

Conclusion

While these examples may be brief, and their bearing on the modern question of the role of states of emergency in the rise of authoritarianism limited, the capacity to learn from them is undeniable.

In sum, Roman Dictatorship neither provides the precedent for, nor resembles in any way, the conception of dictatorship established at the beginning, whereas the SCU, with its loose interpretation and limits, proves a tentative step along the path toward autocracy.

The conclusion that can be drawn from this is that without a sacrosanct guarantee of the protection of rights and of the independent mandates of political offices, the rule of law can disintegrate from within the government, and a democratically elected state can transform into an authoritarian one.

 

The photo shows “Cicero Denounces Catiline,” by Cesare Maccari, painted in 1889.