To those engaged in the decades-old fight against globalism, what occurred in Washington, DC, on January 6th, 2021, comes as no surprise. Defeat and tragedy are to be expected when the individual must, with unending heroism, contend with institutional power – which in its vastness is both disconcerting and frightening.
But then, no one ever said that heroism was easy. Being heroic means being very lonely, for just when you think you are surrounded by supporters, you find that you’re all alone and must fight alone. Being heroic means never quitting; it means reaching deep down inside yourself to tap into strength with which to overcome insurmountable odds, because there is no one to help you. Being heroic means starting all over again when everything falls apart, because there are no other options. All this is clearly summed up by Winston Churchill’s observation: “Success is a series of failures.”
What happened on January 6th was certainly heroic – the people asserting their will against those who seek to rule over us. However, as often happens, the people were also betrayed by those who acted as their leaders. When push came to shove, said leaders were well-ensconced in their various safe places.
But first the defeat and the tragedy. I know several people who went to Washington. Their expectation was that the man whom they had implicitly trusted was gathering them in the capital because he had a trump card up his sleeve, which he would at last reveal – and that he would at long last bring down the hammer to finally right some of the wrongs. In other words, people who came in their hundreds of thousands to Washington – came seeking justice. It was not a show of force, but a show of unity against the tightening vise-grips of tyranny.
Instead, what awaited the people was grim tragedy. Sure, there were fiery speeches, with the right phrases shouted to elicit cheers. Demands were made of those present (“You’ll never take back our country with weakness – you have to be strong!”). Of course, no one explained to the crowd why it had been gathered, let alone what was expected from it, and what being “strong” meant.
But a lot of steam was let off. And that was it. That was the only point of the entire exercise. There was no card up any sleeve. Heck, there wasn’t even a sleeve. Once the speechifying was done, the leaders expected everyone to just go home. The cast of thousands was no longer needed; it had served its purpose as useful props in the grand theater of bravura and aggrandizement. The same people I know who attended told me afterwards that they felt used.
It was more of the usual. Politicians who talk the talk, but are MIA when it comes to doing something. It’s one thing to speechify. It’s quite another to make what you said in the speech reality. There is an old Latin saying, acta non verba (deeds not words). But then who knows Latin any more… Better to spout than to believe.
Every crowd gathers for a reason – and when that reason is missing, there is confusion, followed by frustration and then anger. That is what happened on January 6th, for there was no real purpose to the huge gathering. It was all just to “demonstrate” some vague show of “strength” to an elusive foe. At best, it was a “feel-good” moment. At worst, it was a grand betrayal of the people.
But the dynamics of what occurred next are very telling. We have all seen the videos of people storming the Capitol and being met with police who did not hesitate to use pepper spray, flashbangs, clubs, and, in one instance, a bullet. Of course, throughout the summer, when BLM and Antifa rioted and burned down cities and businesses, the police shot no one – because the rioters were the right type of human beings. In fact, the police were nowhere to be seen, and so cities burned and some 30 to 40 people died.
January 6th had to be different, because the wrong crowd had gathered. The police were prepared and ready to use all necessary force – and so four people were killed. In the rapidly shifting dynamic of a leaderless crowd, the vacuum of leadership is quickly filled by haphazard action. The violence only happened because those who had organized the rally had no interest in actually leading it – and so people did what seemed sensible, and this led to the tragedy.
When people got inside the House, they milled about in front of the Chamber, inside which police and security personnel stood behind barricaded doors, guns drawn and aimed at the crowd (who were unarmed).
Suddenly, a solitary shot was heard. A woman, with a Trump flag draped around her, crumpled to the floor. She had been shot in the neck. Here is a video, which is very graphic (discretion is advised). She likely died on the spot, murdered by a security man inside the chamber who can be seen quickly lunging forward and firing. Why did he feel it necessary to use lethal force? Was he commanded to do so? Why did he choose this woman to fire at? She was targeted, because he fired only the one shot and then vanished. Who knows if the truth will ever emerge? Regardless, the video evidence of the crime is very clear.
The victim’s name was Ashli Babbitt, a married, 14-year US Air Force veteran, who had completed four tours of duty. The poignancy is replete – here was a woman who fought for her country and who was then murdered in the halls of her Congress. There were no regrets, however – because she was the wrong type of human. And, of course, the police shot nobody, and no guns were drawn, when the right type of humans stormed the Capitol. “Justice” is always swift when meted out to the wrong kinds of humans.
It is alleged that the other three deaths were the result of police action. But that remains to be seen. A policeman also died; it is still unclear how.
If anything good can come out of all this misery, it’s this – there can be no alliance between the system of politics that currently exists in all Western democracies and populism. Why? Because the ideology that fuels the system is progressivism (which is always wrongly labeled as something other than what it really is – why that is so is an interesting question). And progressivism is innately anti-human, for it must continually overcome those who are deemed regressive. People always get in the way of progress, and so they must be steamrolled.
Society is a great big petri dish in which all kinds of social engineering must forever be implemented in order to demonstrate that progress is indeed being made. This is why the propaganda for “progress” is so relentless.
It’s a simple dynamic really – but a dynamic that is also very poorly understood, and therefore very difficult to fight, let alone defeat. In fact, most people believe in progress and cannot imagine life without it. Things always must get better, and we must use politics to that end. This is also the tragic mistake made by most populists. They do not understand that progressivism is the true enemy of populism – not “Marxism,” “communism,” or “socialism” (whatever these terms still mean). It comes as no surprise, therefore, that populists are forever fighting chimeras.
So, what is the way forward now? It is pointless describing what is wrong, while never saying what to actually do about it. Most people are lost in the playhouse of such description – it keeps everyone busy, while the world is controlled by others. Has that not been the grand theme since 2016 – endless griping about how corrupt everything is, “the swamp,” with no one stepping up to the plate and actually doing something about it? The fact is you cannot use the system to destroy the system. The sooner populists realize that – the further ahead they will be.
To make sure that the deaths of the four MAGA-martyrs are not in vain, this is what populists must do, or start to do. This isn’t easy. Nothing is ever easy – the problem is so vast that populist victories must be small, and they must be incremental.
Here is what I suggest…
Stop complaining. Yes, we all know how bad things are. No one needs more descriptions of how things are falling apart. Of course, it is always easier to criticize than to build. But make an effort to offer hope and encouragement. Don’t traffic in despair. There is a great hunger for vision. True leadership is not about uttering the right slogans and talking points. True leadership is about teaching how to build. In fact, despair is the real “swamp” that is drowning populism.
Stop feeding the beast. Politics is irreparably broken and endlessly corrupt. It cannot be fixed by electing “better” candidates. Instead, learn to create micro-communities. Find ways to grow your own food, create your own electricity, set up your own schools. Learn to control your own lives, rather than relying on the government. Government-control is always tyranny. Build shadow economies so you can stop feeding the system with your taxes, your effort, your ideas and your labor.
Stop being compliant. The system does not work for your benefit. It exists to dominate you. Find ways, no matter how small, to resist. Learn to mark your independence by becoming truly ungovernable. The easiest way is to stop funding political parties with your money. And for Heaven’s sake – do not vote for any of their candidates. Why support the elite who have no interest in you? Unite against their governments, their systems. If you must be political, then pool your talents, start a populist party, and try to win local elections with your candidates. This is the long march. Do not look for instant solutions – because there are none.
Stop supporting crony capitalism. Learn to be entrepreneurial. Understand the function and purpose of big money, and find ways and means to subvert it. The easiest way, for example, is to stop supporting mega-corporations – they are all tyrannical. This may sound like complete heresy, but cancel all your social media accounts. Stop shopping at big-box stores. You’ll be the happier for doing so. Do not let large companies define the meaning of your life.
On a positive note, get in the habit of looking for beauty, say, in music, in painting, in gardening, in woodworking. Add to the beauty of the world – no matter how small. Do not let mega-corporations hijack your time.
If, as many are predicting, January 6th is the start of a revolution – make sure it’s the right one. Do not get sucked into the rhetoric of others, who will use you for their ends.
Also, make sure you understand that true revolutions are not political; they are moral and spiritual. Good politics can only be the result of good morals. Looking for good politics first is a fool’s errand. There must be something unchanging and constant to guide human destiny. That is true populism, which clearly understands that human worth can never be defined by political agency.
It’s a tough slog ahead. We will need a lot of populism to get through it. Do not lose your way. Do not lose hope. Build your own populism. That is true liberty. That is true heroism.
C.B. Forde, a former academic, lives in a rural location, where he practices what he preaches.
The image shows “Der Sämann” (“The Sower”) by Albin Egger-Lienz, painted in 1903.
In November 2020, I helped to organise the Burlington Magazine/Public Statues and Sculpture Association (PSSA) Webinar on “Toppling Statues.” It was a great success, with speakers of many political views, representing multiple nationalities, ethnicities and professions, from curators to politicians to artists, and with papers covering everything from Confederate monuments, to Rhodes and Colston in Britain, to the contemporary Philippines. I am publishing my own paper here and am most grateful to Nirmal Dass and the Postil Magazine for making this possible.
1. The Rule Of Law
Kudos to Sir Keir Starmer, the British Labour Party’s best leader for 25 years, for saying that Black Lives Matter protestors were “completely wrong” to pull down Edward Colston’s statue in Bristol, and if they advocated this, due process should have been followed. I was forcibly reminded of W.B. Yeats’s famous quotation: “Things fall apart. The centre cannot hold… The best lack all conviction, while the worst are filled with passionate intensity.” These were the parting words of my teenage hero Kenneth Clark’s Civilisation, reflecting on its fragility.
Had I been present at the scene, I too would have remonstrated with the protesters, demanding: “Don’t you know your Locke? ‘Where there is no law, there is tyranny.’” I rest my case, even if the likely rejoinder would be a word half-rhyming with Locke. Another important Lockean precept is the sanctity of public and private property in civil society. Colston was not the crowd’s to wrench off its base and toss into the water. “The law of nature hath obliged all human beings not to harm the life, liberty, health, limb or goods of another,” here the people of Bristol and their statue.
2. Have We Got Colston Wrong?
According to an eminent British historian who must remain anonymous, as opinions are so charged and friendships can be lost – yes, we have. They say this:
“Colston is less culpable than his public reputation has made out. Commentators on both sides describe him on the news as a ‘seventeenth-century slave trader’ pure and simple. He was not: he never ran a slave trading business himself and never made major investments into the trade or drew a steady income – even a minor one – from it. Instead, he made a fortune from trading in other commodities, though twice in his life he became a lesser shareholder in slave-trading voyages launched by others. This was – for whatever reason – not an attractive experience for him because he did not continue it. Instead he became the greatest philanthropist in Bristol’s history, the merchant who did most to help his fellow humans. In particular he ploughed back his huge fortune into three enterprises. One was a school where poor children could receive a free education good enough to enable them to rise in society. Another was a hospital, where those who could not afford medical fees would be treated for no payment. The third was a set of almshouses where elderly poor people were given comfortable retirement homes, each with their own flat. All three survive to the present day. I presume that the school was initially just for boys, but it has long taken girls as well, and all three institutions have lately benefited people from all ethnic groups. The late Victorians – themselves much concerned with finding ways of attaining better social justice – gave him a statue in gratitude for them. I myself think that his contribution to human misery, by those ill-chosen investments, is balanced by his efforts to relieve it in other ways.”
So, even an offending statue demonstrably has a far more complex sub-text once we’ve done our homework. Don’t let your opinions gallop ahead of your knowledge. Be a curious and respectful “pastist,” not a judgemental “presentist” – and remember that was then, this is now. I’ll return to this shortly.
3. Do We Ignorantly Bad-Mouth The Victorians? Are We Willfully Ignorant About Statuemania?
Yes and yes. Remember that not just Rembrandt or Andy Warhol but public statuary is art too – art which excels both in quantity and often in quality. Before modernism did so much to de-skill art, if you had the standard training through a sculptor’s studio, an art school or a large firm like Farmer & Brindley, your work attained a remarkably proficient technical level. Your attitude to imperialism was immaterial. Harry Bates, a working-class, Arts and Crafts-trained sculptor, could make a number of rather fine imperialist monuments.
What mattered was whether you could literally hack it. Very few of the myriad Victorian and Edwardian public monuments could be called inept. What has this got to do with toppling statues? Lots. Scratch a toppler and you’ll find they are with few exceptions ignorant of, or hostile to, Victorian art, whatever the quality. Professor David Olusoga has many interesting things to say about the politics of imperialist statuary but reveals disappointingly little art historical knowledge of, still less aesthetic responsiveness to, the works in question. Remember we’re dealing with art here, not disembodied political texts.
Talking of great Victorian art, earlier this year, I pointedly refused to sign an open letter organised by Australian academics, curators and cultural commentators, demanding the relocation of Captain James Cook’s memorial in Hyde Park, Sydney to a museum. Perusing the signatories, almost without exception, they were modernists or contemporary buffs; the number who knew anything about Cook’s sculptor, Thomas Woolner, and Victorian statuary was perhaps two or three, and they probably cared even less.
4. Beware Of Presentism!
Historically, topplers are deeply into presentism, which is worse than the Whiggery from which it derives. Presentism involves the wholesale application of present-day values, e.g., deploring slavery and racism, to a very different and often resistant past – a foreign country. Imagine if we could travel back in time in the Tardis just 60 years to Gilbert Ledward and his immense – and rather beautiful – Africa Awakening relief for Barclays Bank and confront him with a criticism made by a South African friend who should have known better, that it was “patronising.” Ledward would not have been offended, so much as completely baffled and bewildered. We have a nerve to assume we know far better than our equivalents in 1960 or 1860. What will they be saying about us in 2060? The Ledward relief badly needs a new home, but sadly is suffering for its – and his – whiteness.
5. How About A New Empire/Colonial Museum?
A possible new home for relevant statuary could be a UK Empire Museum, a museum of Imperialism if you like. Formerly there was one in Bristol (the British Empire and Commonwealth Museum), but the director’s conduct 10 years ago led to his dismissal and the subsequent liquidation of the museum; that’s another story. I was saddened at the time that they threw out the baby with the bathwater.
William Dalrymple is a prominent advocate of such a museum and I agree with him in principle. My main reservation about both Dalrymple and the prevalent political climate is that if established today, the museum would almost certainly be instantly dominated by decolonising “woke” forces, the Edward Saids of this world rather than the Robert Irwins (or Mark Stockers!). Politics – and Britain’s dire economy – conspire to put such a putative museum on hold, but let’s not lose sight of it. The museum could indeed serve as some kind of repository for victims of statue toppling or shifting.
6. Problems With Museums
Should offending monuments go to museums, as sometimes relative moderates in this debate argue? To contradict my previous point, mostly the answer is, No. How come?
Firstly, the basics – museums worldwide are critically short of storage space and offering them a 3-metre-high statue plus pedestal would exasperate any reasonable collection manager.
Secondly, Colston aside, and even Colston before June 2020, Robert Musil’s famous dictum that there is “nothing in the world quite as invisible as a public monument” held good and perhaps should still do so. It’s not as if a monument’s offensiveness will suddenly be dispelled by its more prominent location and visibility within a museum. The arguments against it won’t miraculously stop – or still more miraculously become more intelligent.
Thirdly, having a Victorian worthy or three in your atrium would almost certainly clash aesthetically with any desired installation of art after c. 1920.
Fourthly – and this explains why any proposed relocation of Cook to a museum is crass – how can you possibly do justice to the modelling, the aspect, the halation, the everything really, of a colossal four-metre-high statue on a seven-metre columnar base? It would dwarf its new setting, whereas its original location, carefully envisaged by Woolner, is ironically too commandingly successful and dramatic. Cook pays the price in today’s fraught political climate.
Yet a museum just might be a suitable location for a work like Francis Williamson’s statue of Sir George Grey in Auckland. Despite its te reo Māori pedestal inscription translating as “The Future will be grateful for thy universal goodness,” it wasn’t. Grey was decapitated by activists in 1987, while in recent months his replacement head and fingers have been vandalised and his body daubed with paint, in obviously crude copycat actions. Marble is particularly vulnerable, Grey with his fairly recent head still more so, and in the absence of alternative measures a museum could provide an appropriate refuge when out there in Albert Park he’s too much of a risk to society.
7. Copycat Activism
I take a dim view of copycat attacks or calls to defund the police. Just as statuary needs to be appraised on a case-by-case basis, so do the historical records of respective nation states. New Zealand’s colonial past rendered deep injustices to Māori, but these should not be equated with the US’s brutal past. I said this in response to the New Zealand historian Professor Tony Ballantyne when he advocated removal to museums of figures “who propelled colonialism and whose values and actions are now fundamentally at odds with those of our contemporary communities.” I demanded to know “which statues does he mean?” and Tony didn’t answer me. The great white Empress Queen Victoria obviously upheld the Empire but was not racist, and her carving at Ohinemutu was honoured and indeed appropriated by the Ngāti Whakaue sub-tribe, placed on a splendid post and sheltered by a canopy. In Canterbury province, J.R. Godley established a colony which deliberately sought to avoid conflict with Māori and is immortalised in another outstanding statue by Woolner.
Sir George Grey’s role is highly equivocal: reviled in his lifetime by some Māori, eulogised by others; working closely with his friend Te Rangikaheke, he recorded Māori legends, traditions and customs, doing much more here than most academics today. The list goes on, and I concluded: “We should think twice before we violate our legally protected heritage.” Famous last words – but heated discussion has definitely died down locally.
8. Not Everyone Has It In For Statues
The art critic and cultural commentator Alexander Adams has noted the merciful immunity from iconoclasm on the European continent, which views woke excesses with intelligent scepticism, and where the perceived heritage value of historical monuments prevails over politics. President Emmanuel Macron has explicitly stated that France won’t indulge in tearing-down operations, while Ian Morley’s paper has just explored the refreshingly different attitude in the Philippines. Perhaps this is yet another unfortunate instance where the exceptionalist British world, as seen in Brexit, sets itself apart and tears itself apart.
An irony of the peaceful BLM demonstration in Wellington was the crowd gathering under the watchful eye of Thomas Brock’s parliamentary statue of R.J. Seddon, New Zealand premier from 1893 to 1906. While his relations with Maori were benign, Seddon’s racism towards New Zealand Chinese today appears disgusting: he denied them state pensions, imposed stiff poll taxes on them and called them racial “pollutants.”
I asked a good friend who is a Professor of Chinese if Seddon should go. She replied: “I’m probably more conservative than you on this issue. For me, we should leave the statues alone and they are only and can only be partial representations of history. Destroying statues doesn’t destroy historical injustice or biased historical narratives. Besides, historical fashions come and go. The Russians and the Chinese have destroyed enough statues but failed to rectify any historical wrongs. So, for me, debate historical figures and events as much as one likes but leave material historical remnants alone. I guess that also answers your question about Seddon. The statue can also enable a conversation about racism in NZ.”
Wise words, don’t you think?
Statues and monuments are art, they are heritage – and sorry, Professor Richard Evans, as a historian you need to realise they are also fascinating and insightful, highly charged historical documents. And unless they are Gilbert & George, statues can’t answer back when abused by the crowd. What we should do with them will be addressed by subsequent speakers, but I personally advocate additional plaques or virtual ones through QR codes and apps to spell out the case for people’s perceptions today. Conciliation not confrontation, love not war, and thank you Church Monuments Society, don’t expunge, explain. And, last but not least, heed the watchword of the PSSA, “retain and explain.”
Dr. Mark Stocker is an art historian and art curator who lives in New Zealand. His publications are on Victorian public monuments, numismatics and New Zealand art. His recent book, When Britain Went Decimal: The Coinage of 1971, will be published by the Royal Mint in 2021.
The image shows “Pulling Down the Statue of George III in New York City,” by Johannes Adam Simon Oertel, painted in 1859.
In Return of the Strong Gods, R.R. Reno makes this perceptive point: In “the second half of the twentieth century, we came to regard the first half as a world-historical eruption of the evils inherent in the Western tradition, which can be corrected only by the relentless pursuit of openness, disenchantment, and weakening. The anti-imperatives are now flesh-eating dogmas masquerading as the fulfillment of the anti-dogmatic spirit.”
Reno is entirely on the mark when he tells us that fighting the “dogmas” of the past has become integral to our political culture. In Western countries, particularly in Germany, this struggle has taken the form of a perpetual battle against fascism, which has become the hallmark of what is supposedly an almost uniformly evil Western past.
In pursuit of a fascist-free society, social engineers in government and education have trampled on freedoms and engaged in nonstop persecution of those who are regarded as standing outside the necessarily leftist political conversation. In a book in press with Cornell University Press, I undertake to show how utterly pernicious antifascism has become and how little it has to do with what was once understood as fascism or Nazism.
Antifascist bigots manufacture their own dogma to combat alleged “fascist” intolerance, which by now means disagreeing with progressive gatekeepers. “Fascist” is also attached to anyone who doesn’t march in lockstep with the antifascist elites and who therefore must be destroyed socially and economically to protect a multicultural society.
Where I depart from others who may agree with these premises is in my insistence that certain political developments promoted the antifascist anti-dogma dogma. Neither the Russian nor the Japanese government pushes this as a state ideology. Only Western countries promote this stuff; and there may be something specific to these societies that predisposes them toward antifascist crusades.
All antifascist countries have undergone similar political and educational experiences, although the Germans previewed this transformational process with the “democratic” reeducation inflicted on them by their American and British conquerors after World War II. Such conversions are not attributable simply to the “spirit of the age,” or because of moralizing on the editorial pages of the New York Times or Le Monde. The conversion to antifascism as a state-enforced doctrine, particularly in Western Europe, has happened incrementally for quintessentially political reasons.
Although we can trace this antifascist fixation back to an earlier point, the best time to start may be after World War II, when there was already an entrenched concept of antifascist education and a new stress on universal rights that were to be globally enforced. In the 1960s, first in the US and then in other Western countries, an expanding crusade against racism and its supposed twin evil of sexism took place. This enthusiasm gained in intensity, partly under state sponsorship, and brought along a government apparatus to fight “prejudice” and “discrimination.”
In the US, this undertaking was bound up with fighting racial discrimination, while in Europe it built on the struggle against Nazism and colonialism. But this crusade, once begun, just went on and on. It became ever more intrusive and obsessive and enjoyed the support of myriads of administrators, an expanding media empire and a state-subsidized educational establishment.
Throughout this conversionary enterprise, the focus has been on a constant enemy, “fascism” – but never communism; and those who have wielded power have conjured up the specter of Hitler, or his right-wing stand-ins, to underline the perpetual threat to democracy.
The point to be underlined is that identifiable turning points and coercive policies contributed to the problem that Reno mentions. There were also actors who helped advance a particular model of “liberal democracy,” which supplanted other less controlling and less radical forms of constitutional government.
Certain variables might have made this all-controlling ideology less oppressive and less pervasive, e.g., fewer young people having their brains laundered at our universities by madcap academics, a smaller electorate made up of long resident, literate property-owners, and the absence of a centralized administrative state that in the name of social policy set out to reconstruct the family.
All these factors, and other discernible ones, contributed to the rise and continuity of our antifascist state religion. Here concrete interests came together with ideology in a war against “the evils inherent in the Western tradition.” These evils supposedly culminated in fascism, understood as both Nazi tyranny and whatever obstacles the cultural Left was opposing at a particular time.
A major vehicle for this new dogma has been the managerial class ensconced in both government and corporations. Although this class could conceivably profess non-radical values, this is not the case at the present time. Its members are out to expunge such values and the culture that created them.
Admittedly the “flesh-eating dogmas” we are addressing make no sense to me as a coherent, demonstrable set of beliefs related to any reality that I can grasp. But I am interested in knowing who is benefiting from this crescendo of madness. The old question of “cui bono?” remains relevant here.
A simple song, but it contains a good thought… (Adam Mickiewicz, Dziady, Part IV)
I should warn my readers at the outset that the topic of this piece is not my area of expertise. I am not an avid fan of Queen, and my knowledge of rock and roll is no different from that of others of my generation who spent their youth enjoying this type of music. I also haven’t seen Bryan Singer’s 2018 film Bohemian Rhapsody, and I am unlikely to have time to see it anytime soon.
Although it was never more than just fun for me, two friends whom I played football with after school in the 1970s later became well-known music journalists in Poland. My friends would meet in the evening at one of the student clubs in Krakow, and listen to records together. One of the two, who later became a music specialist, received these records from an uncle in London – they would come in packages that contained clothes for the family and other items that were hard to get in a socialist country. Sending such packages was also typical of post-war immigrants.
It so happens that I also had an uncle who helped us, and who invited me to Hanover in 1979. On that first trip out from behind the Iron Curtain, I brought back three records that were not available in our country. One of them was Queen’s double live album, Live Killers, with many hits that were hugely popular at the time. Over subsequent years, Polish Radio began to broadcast this type of music in programs for young listeners. These programs were highly popular. And I can still remember the first appearance of “Bohemian Rhapsody” on Polish Radio, and even the comment of the journalist who hosted the program, who said that the “new, little-known” band Queen was “very skilled vocally.”
This truth was confirmed in the following years, when Freddie Mercury and his bandmates celebrated their greatest triumphs, and “Bohemian Rhapsody” won numerous accolades from listeners around the globe. This song, known to everyone, recently came back into my head by accident. I was preparing a lecture on Romantic ballads for my students at the Jagiellonian University, and it occurred to me that the words of “Bohemian Rhapsody,” especially the opening part of the song, correspond exactly to one of the most popular traditional folk ballad patterns. The hero of “Bohemian Rhapsody” (Freddie Mercury) complains to his mother that he has shot someone, so if he’s “not home tomorrow,” she should “carry on.” His life “had just begun,” but now he’s “gone and thrown it all away.” He then speaks of “shivers down my spine” and his “body aching all the time.” He says goodbye to his friends because he has to “face the truth,” alone. And although he “doesn’t want to die,” he sometimes wishes he’d “never been born at all.”
Even for the listener who knows that the subject of crime and punishment appears constantly in ballads of all eras and all countries (from the Polish Romantic poems of Adam Mickiewicz to the songs of the American Johnny Cash), Freddie Mercury’s lamentation sticks in the head and hits hard; the piano accompaniment sounds surprisingly serious.
Even stranger thoughts come to mind if you listen to the lyrics of the middle section of the song, a quartet sung by all the band members. This quartet breaks the continuity of the ballad story with a monumental scene of judgment over the hero’s soul in the afterlife. The operatic associations appear not only in the music, but also in the text, in which individual Italian words stand out (“Figaro,” “magnifico,” and others). But this is not just a reference to Italian as the language of opera. It is also a trace of Catholic religiosity. The “Galileo” whom Freddie asks to “let him go” is not Galileo Galilei (1564-1642), the famous physicist persecuted by the Church and a hero of the progressive education we received in the 1970s. He is the “Galilean” – Jesus Christ, whom the hero asks to free him from this monstrosity, accompanied by a choir asking the angels for his release (“Let him go, let him go, let him go…”).
Similarly, the “mama” Freddie invokes when he cries out “Mama mia” – after the chorus of Hell’s spirits declares, “We will not let you go” – is also not the mother of the protagonist from the first part of the song, but the Mother of God, whom Freddie calls upon in his hour of death, as does every Catholic. Of course, with these terms (“Galileo,” “mama mia”), the entire religious morality play is camouflaged and parodied here. Freddie pleads with his judges for pity, complaining that he is only a “poor boy,” and the backing choir adds that he is “a poor boy from a poor family” – as if hoping that “Galileo” will give him credibility points for his humble origin. However, the mixing of seriousness with irony in this part does not change the essence of the outcome: the hero’s punishment is condemnation – “Beelzebub has a devil put aside for me, for me, for me…”
And at this point, in the transition to the third and final part, the ballad convention is finally broken. In ballads, crime is always accompanied by punishment. This “law” is accepted by everyone, including the punished hero, because these are the moral foundations of traditional society and ancient popular culture. Meanwhile, in its dynamic ending, “Bohemian Rhapsody” expresses a vehement rejection of this judgment. The soloist breaks the bonds that had bound him thus far (during the performance of the song, Freddie Mercury emphasized this with appropriate behavior on stage) and throws out – against God – rebellious, well-known Promethean accusations:
So you think you can stone me and spit in my eye,
So you think you can love me and leave me to die.
Oh, baby, can’t do this to me, baby…
Addressing God as “baby” is a striking idea. I don’t know (although perhaps it should be checked) whether Shelley or Byron ever came up with something similar. By freeing himself from his guilt, from reproach, and from the Last Judgment, and by throwing his accusations back on his Judge, the hero of “Bohemian Rhapsody” becomes both the modern Prometheus and Don Juan. Since judgment no longer has any authority for him, the difference between good and evil ceases to matter. The phrase “nothing really matters” changes its traditional meaning, as expressed in the first part of the song. Now it means the state of ataraxia promoted by libertine philosophers: “Nothing really matters, anyone can see, nothing really matters… to me.”
A strange song. Sweet and bitter; simple but full of hidden allusions, mixing buffoonery with seriousness, and seriousness with irony and mockery. Cheap? Pretentious? And is this important, since the song has conquered the world? The story told in “Bohemian Rhapsody” corresponds to that of Don Juan in Mozart’s opera. Except that Molière and Mozart showed in their works the horror of sin and the justice of the punishment that befell Don Juan. The sinner condemned in our song, the self-pitying “poor boy,” in the end becomes a rebel against harsh moral law. He declaims a manifesto of self-liberation from the shackles of religious morality and gives others a model to follow.
We couldn’t understand all of this as teenagers. We swayed to the beat of the song, glad that the words were sonorous and matched the music – music that released our youthful emotions and provided a sweet purification from the fear of the life awaiting us. Now that we are older, we find in the seemingly nonsensical flow of “Bohemian Rhapsody” something of our later experiences and thoughts. Something that was already in the song from the beginning and which is probably not at odds with maturity. Undoubtedly, 40 years ago Freddie Mercury knew much more about serious matters than we could have imagined then as teenagers.
Today we are no longer “poor boys from poor families,” as we used to be. We may not be completely innocent either; but that doesn’t bother us too much, since we have rejected the religious superstition that Galileo will judge us someday for all that we have done. Anyway, even if he could judge us, he would have to show us that he has the right to do so. Isn’t that the moral history of the entire modern West, especially the West in the age of pop culture? It may not be that “nothing really matters” to us – but certainly nothing matters to us the way it used to. Unfortunately.
Andrzej Waśko is professor of Polish Literature at the Jagiellonian University, Krakow. He is the author of Romantic Sarmatism, History According to Poets, Zygmunt Krasinski, Democracy Without Roots, Outside the System, and On Literary Education. A former Vice-Minister of Education, he is currently the editor-in-chief of the conservative bimonthly magazine Arcana and Adviser to Polish President Andrzej Duda.
The image shows, “Bohemian Rhapsody,” by Dan Sproul, 2019.
Characteristic of the work of the late Greek scholar, who spent a large part of his life in Heidelberg and who wrote most of his conceptual drafts in German, was an interpretive starting point based on claims to power. In contrast to other historians of ideas and social affairs, who are inclined to moralize, Kondylis never fought for the “good.” Although Kondylis was influenced by the Enlightenment, his editor, Falk Horst, is right to speak of him as a philosopher of the Enlightenment without a mission.
Kondylis dissected successive world views, beginning with the Middle Ages; but he undertook his task as impartially as possible. He called this approach “descriptive decisionism,” which he differentiated from value-based understandings of human decisions and claims. And he called the scientific approach that he pursued in his mature works “social ontology.”
First and Foremost A Social Being
Kondylis starts from the basic assumption that the human being cannot be separated from a certain social relationship. From his point of view, man is primarily a social being, whose relationship to fellow human beings and to the world outside must be taken into account through his position in a given hierarchy.
First and foremost, one takes care of self-preservation, which requires the cooperation of others, and then of defending one’s niche against opponents. In his small volume, Macht und Entscheidung. Die Herausbildung der Weltbilder und die Wertfrage (Power and Decision: The Formation of World Views and the Question of Value), Kondylis focuses on the combative and power-striving side of interpersonal interactions.
Moralism or Nihilism
Fundamental to his lengthy books on the Enlightenment, classical conservatism, and the age of world politics is his use of a power-oriented perspective of interpretation. Even in scholarly disputes and rigorously developed theoretical work, a fighting spirit that shapes every concern can be discerned. The scholar sets his thesis against that of his opponent.
The Enlightenment thinkers set out to assert nature and sensuality against a medieval way of thinking. But these erstwhile allies went their separate ways when the question arose whether the struggle against the rejected metaphysics should result in a normative morality or in a nihilism that decomposes everything. The advocates of a rational morality, like Kant and Voltaire, and the nihilistic materialists, like Holbach and La Mettrie, split into two opposing camps of thought.
The bourgeois society, which upheld the cultural world of the Enlightenment, had to wage a two-front war: against conservatism, which wanted to reassert the ideals of a premodern social order, and against mass democracy, which advocates the equality and interchangeability of the crowd. Without considering this dialectical, militant thrust, Kondylis believes, successive ruling classes and leading ideologies can hardly be understood. Only in relation to a counterpart does the individual develop, collectively and in the abstract.
A Synthesis of Marx And Carl Schmitt?
Kondylis’ social ontology and anthropology are usually interpreted as an imaginative amalgamation of the thought of Marx and Carl Schmitt. It may seem astonishing that Kondylis acknowledged Marx, but far less so Schmitt, as a forerunner. He also generously admitted as influences on his world of ideas both Reinhart Koselleck, with whom he had a long-term correspondence, and his doctoral supervisor from Heidelberg, Werner Conze. He also mentioned Spinoza, whose theological-political treatise helped shape his concept of power.
But why did Kondylis treat Schmitt, whose friend-foe thinking he shared, so dismissively? It may be that Kondylis wanted to emphasize the originality of his own terms. Just as relevant, as Horst’s anthology makes clear, is that Kondylis was radicalized in his youth, when he protested against the junta of the colonels in his Greek homeland.
The Marxist cast of his thought can be traced back to these youthful years, even if the mature thinker could hardly be classified as a Marxist or as a leftist. The focus on the course of history and on socially determined major cultures points back to a Marxist-inspired outlook. It is clear, however, that Kondylis, like Koselleck and other leading German historians of ideas from the second half of the last century, was influenced by Schmitt.
That Kondylis operated with a single-track or overly simplistic view of the world is a common criticism of his anthropological and political perspective, which revolves around the self-preservation and power-striving of the socially situated individual. But that presupposes that Kondylis, as a social researcher, wanted to provide an overall picture of political, communal, and ideological action. Instead, his ideas can be used to shed light on human behavior and to provide insight into human motivation in individual situations.
Not An Optimist
For all his devotion to the Enlightenment and its associated insights, Kondylis by no means represented the optimistic view of the future that shaped eighteenth-century rationalism. He belonged to the group that Zeev Sternhell and Isaiah Berlin characterized as “les Contre-Lumières,” and which was supposedly up to no good. These brooders used the critical approach of the Enlightenment to question and even devalue its final vision.
In other words: Kondylis understood his teaching assignment differently than the moralists he mocked. Apart from the decision-making of socially situated and motivated individuals and groups, who act in conflict with other similarly determined beings, Kondylis cannot offer us a world-picture or a vision of the future. To his credit, he warns against those whitewashers who want to abolish our freedom and our sobriety.
In their eternal quest to remake reality, a perennial target of the Left is the family: man, woman, and children, the bedrock of all human societies. The family, by its existence and by what it brings forth, mocks the Left project, and so the Left has tried to destroy it for 250 years. But only in the twentieth century did this effort gain real traction, when our elites became converts to the fantasy that sex roles as they existed were artefacts of oppression, not organic reality. What followed was mass indoctrination in falsehoods about men and women, in which this infamous book played a key role. If you see a sad wine aunt (they are all sad), and you see them everywhere, you see a small part of the resulting social wreckage.
The Feminine Mystique was chosen in the 1960s, the decade that really began our decline, as the central pillar of the enormously destructive myth that a woman can “have it all” – both a fully-realized family in the home and a fully-realized career outside the home. Many elements of our present ruin can be traced back to this propaganda. The myth itself is duplicitous, however. For its purveyors, a woman’s career is far more important than the family – lip service is only paid to the family because women keep stubbornly insisting they want a family. To their great frustration, this is a problem our rulers have been unable to solve, causing them to resort to ever more extreme and ultimately self-defeating falsehoods about men and women. It would be funny if it had not been so catastrophic.
I could spend hours amusing myself blowing holes in this execrable book, but I have sworn off reviewing books merely to show how they are wrong. Therefore, we will instead use this book to discuss some of the defects in societal structures in America today as they relate to men and women, and how those structures should be remade. A sneak peek: men and women are very different. They always have been, and they always will be. And from a societal-structure perspective, the crucial truth is that men drive a society forward, while women bind a society together. So it will always be in any successful society, and any society that attempts to contradict this truth will only find its own obliteration.
But you will be disappointed, I am sure, if I do not at least summarize this book, and doing so is helpful to frame a discussion about recapturing our future. It’s not easy – a reader has to excavate in layers, removing all the primitive psychobabble and 1950s ephemera. Moreover, he must reconcile himself to the fact that there are no hard facts in this book with which to grapple. None. It is purely a series of cherry-picked anecdotes, presented in a pseudo-scientific manner in order to compel conclusions the author, Betty Friedan, had already reached about society.
She was born into and raised in a far-left family, and from her earliest youth to her death in 2006 worked unceasingly to impose on our society all her radical politics. Agitation was her life. In 1957 Friedan, bored with her part-time job writing for the radical press and unhappy with her marriage to an advertising executive, sent an amateurish questionnaire to her classmates from her 1942 graduating class at Smith College (an all-women’s college still extant).
The survey had thirty-eight questions, all yes-no or multiple choice. None are surprising or all that interesting, and the survey is loaded: the desired responses are indicated by the choice of questions and by using guiding adjectives (e.g., “Is your marriage truly satisfying?” – meaning that unless it is truly satisfying, the only possible answer is “no”). Friedan claims that the responses surprised her, so she then conducted interviews with eighty women. Upon the supposed results of these interviews a book claiming to show a new understanding of all of American society is built.
What, then, is the “feminine mystique?” It is the “strange discrepancy between the reality of our lives as women and the image to which we were trying to conform.” “Our” and “we” here mean a small set of women very similarly situated to Friedan, but in a neat sleight of hand, Friedan manages to pretend that “our” and “we” is all American women, or at least all educated, married, upper-middle class American women. (Working-class women receive a grand total of zero words in this book, other than a suggestion that career women hire cleaning women. LGBTQQIP2SAA people get more attention, at least – in the form of Friedan’s complaint that bored women without careers turn their sons into homosexuals).
According to Friedan’s “data,” women are “unsatisfied,” even though they objectively had gotten everything they wanted. They have “a hunger that food cannot fill.” They all say “I want something more than my husband and my children and my home.” The “mystique” is the supposedly-false belief that they don’t have a hunger, that they don’t want something more, but are instead very happy, or at least satisfied, with traditional sex roles, the “image to which we were trying to conform.”
OK, then, what do women actually want, if it’s not family and home? Well, Friedan meanders a lot, but basically she tells us women want self-fulfillment through “the life of the mind and spirit.” So, do we all, I suppose, but to Friedan, this means a job, any full-time job, outside the home – nothing more. A housewife, that is, a woman who raises children, has a sound marriage, and acts feminine, but does not work full-time outside the home, is a sad and contemptible person in Friedan’s eyes.
In an early instance of the scientism that has, during the Wuhan Plague, swallowed the world, Friedan lectures us that “In [the] new psychological thinking… it is not enough for an individual to be loved and accepted by others, to be ‘adjusted’ to his culture. He must take his existence seriously enough to make his own commitment to life, and to the future; he forfeits his existence by failing to fulfill his entire being.” This piece of infantile babbling is illustrative of the entire book.
Friedan faces a problem in selling this story, though, which she grudgingly admits – all other contemporaneous surveys showed that what women actually want is to be a housewife. This makes Friedan angry. She is greatly offended that at a time when more and more women are getting college degrees, an ever-higher percentage of women show no interest in a career.
But there is an easy answer! They are not lying; they have been tricked. They have been bamboozled by women’s magazines written by men, which exist to sell them products they will only buy if they are kept in the home, just like Adolf Hitler did, you know. If these poor, deluded women could only be objective, they would all know they suffer “terrible boredom,” which can only be cured by working outside the home.
Without a career, you see, a woman can have no identity at all; she is “barred from the freedom of human existence and a voice in human destiny.” She’s also “doomed to be castrative to her husband and sons” (a clear instance of projection by Friedan, who was nothing if not that to her own husband and sons). But good news! Friedan has uncovered the “truth” that has escaped us all.
The rest of the book, 500 sophomoric, tedious pages in all, is terrible. Repetitive anecdotes interspersed with bad history; cut-rate Freudian analysis (Friedan can’t get enough Freud) that no doubt seemed very daring at the time; praise for the ludicrous and discredited Margaret Mead’s fantastical lies about sex relations in primitive cultures; claims that colleges are failing women because women don’t choose the same subjects as men; demands for population restriction; psychological drivel about nuclear weapons; praise for the silly Dr. Spock; comparing the position of American housewives to that of inmates in Nazi death camps; endless pushing of the idea that women are kept in the home so they will buy things (ignoring that they could buy a lot more things if they worked outside the home); lecturing the reader that women forced to be housewives “offer themselves [sexually] eagerly to strangers and neighbors” because they’re so bored; and numerous variations on the claim that any woman without a career is infantile and prone to “severe pathologies, both physiological and emotional.”
All this is gloriously evidence-free; Friedan’s usual technique is to make a sweeping statement, quote from an (always anonymous) “expert” supporting her, and blare triumphant conclusions.
The author’s contempt for children permeates the book. The only thing worse than a woman who wants to stay home and make a happy home for herself and her husband is one who wants to add children to her living nightmare, which only seems like a dream to her because she can’t see as clearly as Friedan. Friedan herself threw over her family, including three children.
In an Epilogue, written in 1970, Friedan crows about how wonderful the reception to her book was. As a result, she “finally found the courage to get a divorce,” from which she concludes that “I think the next great issue for the women’s movement is basic reform of marriage and divorce” (the wreckage of which we can see all around us today). She herself has moved into “an airy, magic New York tower, with open sky and river and bridges to the future all around.” She has “started a weekend commune of grownups for whom marriage hasn’t worked – an extended family of choice, whose members are now moving into new kinds of marriages.” She does not mention that she conducted a long affair with a married man (who refused to leave his wife); it seems likely that, like John Stuart Mill, she constructed an entire philosophy around justifying her own bad behavior.
You get the idea; there is no need to continue examining the details of this book, the pages of which are only useful to line birdcages. This is all propaganda, which we have been fed so long that we believe it as history. As with other, slicker propaganda, such as the television series Mad Men, it portrays a set of falsehoods, laced with enough true background facts to pacify the reader eager to agree and comply. (It is always crucial to remember that much of what “everybody knows” now about many periods in the past is simply lies, and there is no better example of this than the 1950s and 1960s, in nearly every facet of their history, fed to us through our screens). Boring. Let’s talk instead about what a well-run society would look like.
But first, let me expand my thinking about why this book “succeeded” in its goal of massive social change. As with all major social changes, mere propaganda is not an adequate explanation. The propaganda was successful because it hit our society at precisely the right moment, when it was open to the infection. First, emancipation was in the air; as Yuval Levin discusses at considerable length in The Fractured Republic, the 1950s were a unique moment in American history, when it falsely seemed like everyone could have unlimited freedom without cost, and this belief was not confined to those on the Left, but permeated society.
Second, and tied to the first, intermediary institutions, and the thicker web in which families were set, had already evaporated. Housewives, at least the suburban housewives who are Friedan’s sole focus, were in fact very frequently alienated and atomized, because the organic social structures that had supported both men and women had declined sharply (and would disappear entirely, as Robert Putnam narrated in Bowling Alone). These women did have more free time as the result of labor-saving devices; Friedan claims work expands to fill the time available – but the real problem is that given their removal from the thick social structures of previous decades, free time had no satisfying social outlet, giving Friedan’s explanatory fantasies a surface appeal, like a poisoned apple.
Third, and perhaps most important, the Left goal of destruction of the family fit precisely, in this case, with the unbridled capitalism, the excessively free market, that has worked hand-in-glove with the Left for decades to destroy our society (aided by the government). As a result of this book, or rather the propaganda campaign built around it, we got a massive movement of women into the workforce. Did those women get fulfillment, as Friedan promised? Maybe a few did, but most of them got BS jobs of various types, and we all got a massive increase in consumerism, which we are told is wonderful, because “look how much GDP has increased as a result of women entering the workforce!”
Of course, even this “fact” is a lie, because GDP excludes work inside the home. If two women each raise their own children, their work is excluded from GDP; but if each is paid by the other to raise the other’s children, GDP expands. But then GDP is largely a fake statistic and much of our economy a fake economy; and anyway it is simply false that any expansion in GDP is a social good, especially when the resulting costs, in the form of mass social destruction, are treated as disconnected – mere happenings coincident in time but unrelated.
Regardless, with the assistance of the government and free-market enthusiasts eager to enrich a rotten ruling class, now a two-income family is required for what is regarded as a decent lifestyle, or even just to make modest ends meet, and this was independently a goal of too many in our society.
Better yet for our neoliberal overlords is a one-income family consisting of a permanently single woman. If you want to shudder, read a completely insane CNN article from 2019, titled “There are more single working women than ever, and that’s changing the US economy.”
The point is that single women spend an ever-greater proportion of the money spent on consumer goods, so we must further this trend, in particular by ensuring that those women foolish enough to have children are given a place to park their children while they work to get money for the consumer goods that should be the real focus of their lives. There is more and more advertising, if you pay attention, aimed at single women for luxury goods that in the past would have been bought as gifts for those women – who now have nobody in their lives who will buy them any gifts at all, and must purchase artificial joy. It is enough to make one cry, if one weren’t already fully occupied in flogging the cretins who brought us to this stupid pass.
So, enough abuse of the stupid. What should the social roles of women and men be in a well-run society? As you can doubtless tell, we are working our way to a call to limit women working outside the home. Let’s start by asking what women want. We are often lectured today, by the commissars of the loathsome ideology of “diversity and inclusion,” that fifty percent of all jobs should be held by women (or at least desirable jobs – men will keep all the dangerous and dirty jobs).
The usual response of “conservatives” is to point out that, empirically, most women simply don’t want the same jobs as men, so in a world of perfect choice far fewer than fifty percent of most jobs would be held by women. This fact is on actual display in countries that are most egalitarian about sex-role choice, notably the Scandinavian countries, where women choose traditional roles at very high rates. The timid “conservative” naturally begins, as demanded by the Left, with a preemptive apology. “Of course, I think women should be allowed to choose the path they want.”
Wrong. I don’t think women should be allowed to freely choose the path they want (nor should men). They should make the choice for family. To that end, society should largely take the choice of career over family off the table, and coerce women into certain occupations and modes of life – and should in like manner coerce men, among other things, into leading a life as the sole provider for a family (unmarried men beyond, say, thirty, and men who fail to provide, should also be socially penalized).
In other words, society should reflect the natural division of the sexes, regardless of whether some people in society would prefer to make some other choice, whether because of their outrider nature, excessive focus on self, or ideology. To achieve this, we should return to social compulsion, shame, and ostracism, as well as make major changes to tax and legal structures, such as absolutely barring no-fault divorce and offering (like the government of Hungary) massive payments to married couples with multiple children.
I’ll end with more thoughts on specific structural changes, but to expand on this positive vision, let’s begin with the end in mind. How should society recognize and beneficially implement the telos of both men and women? So, let’s talk about astronauts. That is, let’s discuss Space, the first of Foundationalism’s twelve pillars, and the role of women in Space.
The overriding principle of Foundationalism is reality, and restoring a realistic understanding of the roles of men and women is another of its pillars. The crucial fact about men and women in society is that they are, and must be, partners. That women cannot do everything men can do, and men cannot do everything women can do – and that even when each can do what the other can, one usually cannot do it as well – does not make either sex subordinate. But without recognizing and honoring this basic fact of different competencies, no society can operate for long.
Astronauts show how this works in practice. What is the purpose of astronauts? This is really one question in two parts. First, what is the purpose of astronauts in the present day, when astronauts are limited to short trips to, and short stays in, near-earth orbit? At most, perhaps, astronauts might visit Mars in the relatively near term, if Elon Musk has his way, although I’ll believe it when I see it. And second, what is the purpose of astronauts if humanity were to expand permanently off Earth, as often depicted in science fiction, such that astronauts are not just travelers, but off-earth inhabitants, the conquerors of a new frontier?
There are quite a few female astronauts today. If sex were ignored, would there be as many? Of course not. Far more men than women have the characteristics that make one want to be an astronaut, and make one a good astronaut. All our children are collectively assaulted from their earliest youth with massive propaganda pushing the idea of female astronauts.
Try something – go to any museum exhibit related to Space, and count the number of female astronauts depicted. It’ll be around eighty percent of the total, always with hagiographic sub-exhibits about specific women astronauts who accomplished nothing at all. Women who express any interest in being an astronaut are given an unmerited boost at every stage, beginning in kindergarten, and when the time comes to choose astronauts, are placed at the front of the line. I doubt that, if astronaut selection were sex-blind, there would ever have been a single female astronaut.
The purpose of astronauts today is to increase our knowledge and make possible future expansion outside the confines of Earth, which I think is a very important part of our society’s work. What are the costs and benefits of distorting the reality of female astronauts? Among other costs, choosing inferior candidates must mean, on average, not only that inferior work is done. It also means that the pool of outstanding candidates diminishes, because there is a strong incentive for the most talented and driven, and thus the most prideful – all men – to walk away in disgust from a rigged system.
A society that does not seek out and reward its best is a doomed society, and this is just one example of such habits of ours tied to sex roles. There are other costs to coddling female astronauts, of course – many of them very similar to the costs of allowing women in the military. What are the benefits? None, really, but I suppose the argument is that some women feel better about themselves, in the same way a child praised for crude finger painting by his parents feels better about himself. That is, unjustifiably – but in this case knowing the praise is unjustified, and thus left simultaneously humiliated, and aggressively on the lookout for anyone adding to the humiliation by pointing out the obvious.
As to permanent human expansion, an excellent depiction of this is the books and television series The Expanse. Well, it’s excellent, except for its depiction of women, which is insane. In fact, there are no women at all in The Expanse. There are many men, each of whom acts like a stereotypical high-testosterone man, who are given female names and female physical characteristics, but none of them bears any resemblance to actual women (except for one, a Margaret Thatcher type, real but extremely rare).
In real life, if our society were to expand into the solar frontier, no “female” character in the show would occupy any position she occupies in the show – even if there were no social barriers to occupying that position. Real women as characters are totally and completely absent. Children almost never appear, and never under the care of any female character (except the lesbian “wife” of one character, who abandoned her “family”). All this is extremely jarring, making the show difficult to watch, except if you are deluding yourself, or have given it no thought at all. Yet, sixty years after The Feminine Mystique, this lying propaganda is not only ubiquitous, but ever more aggressive – probably because our ruling classes feel their hold on the greased pig of reality slipping away.
If we really got the frontier world of The Expanse, as far as sex roles go, it would be like Little House on the Prairie with fusion drives and rail guns. Not only would no woman fight, and spaceships crewed only by men, both military and commercial, be the absolute rule, but women would have large families, over which they, embedded in a larger web of families and women, would exercise most of the responsibilities.
The simple reality is that men, far more than women, are interested in what’s involved in conquering Space, or conquering anything: fighting, risk-taking, adventure and glory, as well as dangerous and physically demanding jobs. Men and women would partner to achieve the near impossible tasks required to push mankind forward, but men would do the pushing and take the risks, in large part to protect the women. Such natural partnership is demanded by any harsh environment – it is only in our current softness that we can pretend otherwise. When reality is busy asserting itself in the form of hard vacuum silently waiting to kill you and your children, nobody will pretend that women and men are interchangeable.
Sadly, we must return to today, and hope our future in Space will work itself out, or that we can work our future out to make that possible. What did women, and all of us, get when women were pressured for decades to work outside the home? Let’s see – the women got BS jobs, often make-work funded by government dollars or the expansion of worthless work such as human resources, or innumerable other forms of paper pushing (many the result of pointless and destructive government regulation of one sort or another).
Friedan promises that women who listen to her siren call will be “mastering the secrets of the atoms or the stars, composing symphonies, [or] pioneering a new concept in government or society.” A wave of bitter laughter from millions of women can be heard – women who discovered too late that those types of jobs were not on offer, and who gave up children and a decent family life for a delusion. It’s not just women, though – only a tiny segment of men has a job that offers real accomplishment, “the life of mind and spirit,” either.
The job does not give them fulfillment; it is a means to their real method of fulfillment, providing for and protecting their family. And two careers maximize success for neither spouse, meaning that men, who by their nature derive far more meaning than women do from success in the outside world, are more damaged by the demand for two careers – not collateral damage, but intended damage in the Left’s age-old war on the family. The result, when the natural order of sex roles is upset, is that nobody benefits, and society circles the drain.
I keep banging on about the differences between men and women, as if they were self-evident. They are, of course, and that used to be a commonplace, but dispelling the fog of self-induced unknowing is, I suppose, necessary. There are many differences between the sexes, and I have discussed them before in other, but related, contexts, such as the insanity of allowing women into the military.
As regards the question of work within and outside the home, the key facts are as follows. First, women are far better suited to, and far more interested in, raising children than men, and the point of the family is children – a family consisting of a childless couple has a great sadness at its core (yes, I know we’re not supposed to say that out loud).
Second, men seek glory, power, and dominance. Women simply don’t. (Offering exceptions to this general rule does not prove anything; it is equivalent to pointing to hermaphrodites to argue against the unalterable truth that mankind is divided universally into male and female). True, few jobs offer the chance for glory – but providing and protecting largely satisfy, for most men, this urgent drive.
Women therefore don’t choose to do what it takes to have a successful career, meaning achievement in a hierarchy earned through competition. The vast majority of women lack the drives necessary. They may in fact be smarter, better organized, and have other traits associated with career success. But their essential drives are directed toward family.
By studying societies of the past, we can see how a non-ideological society organically develops. In Western countries, the usual structure for well over a thousand years has been a partnership between men and women, where each is supreme in one sphere of family life, contained in a larger family web, but consults the other. Women do hold up half the sky – it’s just that their role, in its nature, is inward-facing, and men’s is outward-facing.
In the West, there has never been any equivalent of the “eastern” approach, typified by purdah, the separation and seclusion of women (driven by defective religious or cultural imperatives that, just as Friedan did, mar the natural order of a society).
Muslims during the Crusades were famously scandalized by how the men of the Franks allowed their women not only to appear in public, but to scold them and order them about. To take a more recent example, one cannot do better than Matthew B. Crawford’s talk in Why We Drive about women and men in Appalachian motocross racing, where, on and off the track, men and women act in (sometimes coarse) partnership, together striving towards excellence (something Crawford heretically contrasts with the sickening inversions he sees in Portland).
As with any human society, within this broad truth, there have been many local variations. Even Friedan admits that until near her present day, American women were not oppressed or unhappy. (Friedan does not make the flatly untrue claims about historical “patriarchy” that are the norm now, such that “everybody knows” that The Handmaid’s Tale is both history and future. She doesn’t because everyone would have laughed at the obvious untruth and pitched her book into the trash; it is only now, after sixty years of propaganda, that we believe there ever was a patriarchy). “Until, and even into, the last century, strong, capable women were needed to pioneer our new land; with their husbands, they ran the farms and plantations and Western homesteads.” (She would be cancelled today for mentioning plantations).
Friedan doesn’t draw the obvious conclusion – that if the subset of women on whom she is focusing are alienated by their circumstances, the answer is returning to the thicker social web even Friedan praises, not destroying the family. But then, after all, destroying the family in the pursuit of emancipation from all unchosen bonds was her real end, not offering fulfillment within families to women.
This does not exclude women from ever working outside the home. Quite the contrary, actually. In the past, young women often worked. When rural life was the norm, women and men both worked, but neither could be said to have a career – this was division of labor, rather. As city life became the norm, young women often worked until they found a husband. Often this was work at which they excelled, tied to female talents and preferences, such as teaching and nursing.
Higher-status women, like Friedan, went to college and found a husband there (something Friedan, famously masculine and no doubt finding it hard to find a husband, bitterly complains about). Women whose children had left the home might work as well, and women with children might work part-time out of necessity. There is nothing inherently societally destructive about this. What is destructive is when the woman prioritizes that work over family, demanding it become a career – that is, a main focus of her life, and the driver of her happiness, or more likely, the lack of it.
What of a woman who does not get married, not purely by choice? That is, some women, because of their personality or physical appearance, find it difficult or impossible to marry. Or maybe failure to marry is some combination of bad luck and bad management; past a certain age, as everyone knows, a woman’s ability to get married drops precipitously (hence wine aunts). Usually, in our modern atomized society, such women have no choice but to substitute career for family – in the past, they would be woven into the structure of an extended family.
Until we can return to the latter, a career is really their only option – like my own recently-deceased aunt, who chose a career in virology, after getting an M.D. from Harvard, and with whom I was close. She loved children, but never married (though she could have – she was indoctrinated into “career first”), and as a result was desperately lonely and unhappy for decades. I blame Friedan (and my aunt’s mother, my grandmother, who pushed anti-family ideology years before this book was published).
I have to admit, though, that had you asked me twenty years ago, I would have largely bought into the myth that women having a career, and being treated as the equivalent of men in pursuit of that career, was a sound social choice. My wife and I met as big-firm M&A lawyers in Chicago; we presumed, early on, that we’d both end up with legal careers at large firms, with a nanny for our children.
We were conditioned to believe that any other system is monstrous, and that women lawyers should be viewed the same as male lawyers, even though everyone knew that women lawyers dropped out of law firms at vastly greater rates than men, either after they had a child or simply because the aggressive, high-pressure, competitive hierarchy of a large law firm is not congenial to the nature of women in general. (That it is congenial to some is irrelevant; one can always find exceptions to most general rules, and social structures are built on general rules, not exceptions).
My wife soon realized that wasn’t for her, though, and quit her law firm job some time before I quit mine to become an entrepreneur. But what followed has been an organic partnership. I was the public face of our company, but it would have been a failure without her guidance, encouragement, and support, since she balanced my disagreeable tendencies and my limited ability to judge character, among my other defects (although, contrary to questions I get sometimes, I am not in the least autistic).
On the other hand, along the way we formed a spin-off company for which I suggested, or insisted, she be CEO, and that was a grievous mistake, only corrected after some years. But it all worked out great for us. For many of our friends, who refused to change course as we did, it has not worked out so well at all.
It is true that if women are discouraged from working outside the home, there will be some price to pay. Nothing is free. First, some women will be less happy than if they had careers – few perhaps, but not zero. Second, to the extent women working outside the home are producing real value, actual economic output will dip, and people will be able to afford fewer goods and services.
This may or may not be a problem; the reason most two-parent families must have both parents work is to make ends meet, because unbridled capitalism has allowed employers to squeeze “efficiencies” out on the backs of the workers, in order to enrich executives and stockholders, and claim these steps are necessary (expertly covered by James Bloodworth in Hired). Yes, it’s also social expectations on the consumer side; if you “need” a large house, frequent new cars, and a $1,400 phone, you need more income. Changing this terrible system to make it the norm that one income adequately supports a family, by limiting the “free market,” will be essential.
Third, you will give up those relatively rare occasions when a woman working outside the home makes, through her employment, a significant contribution to advancing society. I don’t mean, say, women working as scientists at pharmaceutical companies – any discoveries made by them would also be made by men, and probably sooner and better, given the real differences in men’s and women’s capabilities and drives, and the destructive advantages bestowed on women in any male-dominated profession. I mean exceptional production.
True, the bumper sticker phrase, “Well-behaved women rarely make history” is only fully accurate if you delete the “Well-behaved.” As I say, men drive a society forward, while women bind a society together; and this necessarily means that all, or nearly all, spectacular achievements will be those of men. But this is still a potential cost.
What structural/legal changes should be made, other than the social compulsion mentioned earlier? No, not ticky-tack programs such as new family leave policies, which anyway just encourage women to work outside the home. Rather, government policies, tax and otherwise, should massively favor single-income married families where the man works.
Employment discrimination (and all other types of discrimination) on the basis of sex, and marital status, should not only be completely legal, but socially encouraged, even demanded. Not only is sex discrimination, like age discrimination, almost always entirely rational, such discrimination is affirmatively necessary to accomplish the desirable society.
Again, no-fault divorce should be banned, and modern technology that erodes healthy relationships between men and women, from Tinder to online pornography, should be rigorously suppressed. No doubt other matters will deserve similar attention, and a new propaganda campaign, especially in popular entertainment, to reverse sixty years of indoctrination will also be needed. Let’s get started!
Life being what it is, some women will always choose to work outside the home. Sometimes this is in their particular nature; sometimes they actually need the money. This should not be made illegal, but there should be a substantial social penalty for women who make work a career.
In the same way as for decades women who choose not to have a career have been held in contempt, viciously portrayed across all popular media and vilified by our ruling classes, a married woman who chooses to have a career should be looked down upon, especially if she has children, and most of all if she chooses not to have children. (One can multiply special cases – what if a woman cannot have children? Hard cases make bad law, and bad social policy; the median case is what matters). And a “career woman” should presumptively be discriminated against in favor of a man competing in the same career path, and most of all in favor of men with children.
It is doubtless true that we cannot turn a switch. If all women in the workforce today left the workforce tomorrow, much disruption would result. A lot of it, that tied to BS jobs, would be temporary. But in some jobs, such as family-practice physicians, where women are the majority, rebalancing jobs could only be done over time. And some jobs, such as elementary-school teaching and nursing, will always have women in the majority, since those jobs always appeal more to women, and it is possible to enter and leave those jobs as a woman’s life changes – most of all, before, and perhaps after, a woman marries and has children. The exact result will derive organically from general rules, not from an artificial ideology.
The goal, across all of society, is to return to a natural partnership between men and women. This is very much not a siloed partnership, where the man and woman each operate completely separately in pursuit of a unified goal. Instead, there is necessarily overlap – a woman advises her husband in his role outside the home, and the husband assists his wife in her roles inside the home, in particular with children, especially with boys as they come of age, but also simple relief of the drudgery that characterizes much household work. But human nature dictates that those spheres and roles be different, and only by a return to this can human flourishing be reborn, relegating this book to history as an unfortunate footnote.
Charles is a business owner and operator, in manufacturing, and a recovering big firm M&A lawyer. He runs the blog, The Worthy House.
The image shows “Dans le bleu (Into the Blue)” by Amélie Beaury-Saurel, painted in 1894.
Now that the disgraceful year 2020 is finally gone, with its endless stream of deaths and grievances, we can properly look at what it brought us (Covid, lockdown, unemployment, etc.) and also what it stole from us. I firmly believe that, like health and food, culture too is an essential nourishment for our lives, and any time we are deprived of it, we feel miserable and sick.
In Italy two great cultural events had been set for 2020 that were either cancelled or went unnoticed: the 500th anniversary of the death of Raphael Sanzio, and the introduction of Dante Day (Dante Dì), the official day to celebrate the immortal creator of the Divine Comedy. Yes, you read that right – until last year, in Italy, there was no official day to celebrate Italy’s greatest poet, and arguably one of the greatest poets of all time.
The official dates were the 6th of April to celebrate the anniversary of Raphael, and the 25th of March to remember Dante. Now, these dates are very interesting because both men, by a surprising coincidence, have a connection to Good Friday. According to Giorgio Vasari’s Lives, Raphael was born on the night of Good Friday, March 28, 1483, and died on Good Friday, April 6, 1520. What an amazing coincidence for a man whose family name was Santi (Saints), which was latinized into Sancti, and from that became the current Sanzio. And of course, almost all scholars agree that Dante began his fictitious travel into the Three Realms on Good Friday, March 25, of the jubilee year 1300. Luckily enough, the official Italian committee discarded the date of the bard’s death, September 14, because it did not fit properly into the school calendar. Sometimes obtuse bureaucracy helps!
As for the Raphael celebration, the best painting exhibition ever, collecting the greatest works of the master from museums all over the world, was organized in Rome at the Quirinal Palace. But, alas, it opened just a few days before the first Covid outbreak and was sadly shut down a couple of weeks later, in the midst of the first terrible wave of the pandemic in Italy. There will not be a second chance for this gorgeous Raphael show.
For Dante Day plenty of cultural events were planned, involving scholars, school students, TV actors and ordinary citizens. Readings from the Divine Comedy were to take place in the most iconic Italian piazze; schools were invited to feature exhibitions on Dante, and TV was expected to provide huge coverage of the widespread festivities.
All these events were simply obliterated by the surging pandemic. In the only downsized event permitted, individual citizens were invited to recite, from their windows or balconies, a few tercets of “Paolo and Francesca,” all together at 6:00 PM on March 25. I did that, and posted the recording to social media – and found out that the anniversary was not that popular among my connections. Never mind – next year marks the 700th anniversary of the death of Dante, and luckily enough Covid-19 will give us a break by September 14.
But it is not only the sheer coincidence of calendar dates that links these two undisputed geniuses – men of different centuries, genuine children of their time and culture, with very different characters. Both also contributed immensely to elevating our poor humanity towards the perception and appreciation of divinity.
Raphael is unanimously considered the peak of Renaissance painting, capable of shifting Leonardo’s sfumato technique into astonishingly natural and beautiful reality. If ever the Italian Renaissance has meant grace, beauty, harmony, naturalness, Raphael is its true and complete achievement. Just imagine – from the moment he died in 1520 until the Impressionist revolution in the mid-19th century, his work was the inspiration and touchstone for all painting academies in every country of the western world. His cycle of Madonnas with the child Jesus set forever the iconographic standard for this holy representation, and you can find a copy of one of them in almost any Italian Christian home. But Raphael is also the creator of the Stanze di Raffaello (the Raphael Rooms), where he brought his refined art to a theological and compositional complexity that attained unequalled heights in the history of art.
Raphael was a good Christian, and this must not be taken for granted, even in the pope-ruled Rome of the early 16th century. The story goes that on Good Friday 1520, sensing his end, Raphael asked that his last masterpiece, The Transfiguration, be brought into his room and hung on the wall in front of him. There is no doubt about the reason – looking at the beautiful radiant Christ, he was already savouring the glory of his encounter with Him. When you survey the entirety of western Christian figurative production, it is hard to find as glorious and serene an image of the defeat of death, of which The Transfiguration is both a pledge and promise.
I do not know Raphael’s biography so well as to appraise the depth of his religious feeling and belief. However, it is unquestionable that the Holy Spirit guided his hand and heart in the short span of his life.
Unveiling the presence of divinity in Dante is a much easier job, starting from the very title of his masterpiece, The Comedy, soon after labelled Divine by his great contemporary, Giovanni Boccaccio – partially because of the theme of the composition, but mostly for the unrivalled poetic heights the work accomplished. Some passages warmed our youthful reading (“Paolo and Francesca,” “the voyage of Ulysses,” “Count Ugolino,” and the “Hymn to the Virgin”); others guided and transformed our mature years through a more Christian and meditated reading of the Purgatory and Paradise canticles.
And we really do not care if, in praising the institution of Dante Day, the complete host of the Italian intelligentsia saluted “The Father of the Italian Language,” “The very first Italian,” and “The founder of European identity.” For us he will be forever the poet who amazingly translated the truths of our faith into exultations of the heart and tears of love: “l’amor che muove il sole e l’altre stelle.” The work of Dante is so divinely inspired and filled with poetical miracles that he deserves to stay on the calendar regardless of a questionable civil beatification. In these terrible pandemic times we all, we believers first, should start back from where he commenced his journey: “Miserere di me.”
God is a loving Father and an excellent Teacher; He can use many different ways to show us the path to paradise. Among them all, the human longing for beauty sublimely initiates our earthly journey to the glory of celestial infinity. Bless Him for strewing our road with so many friendly and inspiring companions!
Maurizio Mandelli is a businessman by trade and enthusiastic amateur scholar of local history and the arts. He has published two books (War of the Spanish Succession in Lombardy and The Italian Campaign of Napoleon III). He is a regular contributor to local magazines on religion, ethics, society, history and the arts.
The image shows “The Transfiguration” by Raphael, painted ca. 1518-1520.
Today’s skeptics, who seem to reject something traditional just because it’s traditional, cannot sit still during the holy season of Christmas without mocking the notion that Christ would have been born on December 25th. If it were just the unbelievers who engaged in this mockery, it would be expected, since unbelievers, by their very nature, are not expected to believe. More troubling is the fact that, like evolution and all other modern atheistic fantasies, this one has seeped through the all-too-narrow wall separating Catholics from the rest of the world. The anti-Christmas myth, which makes a myth out of Christmas, is being foisted on Catholic children as fact. To benefit these, and any Christian who respects piety, history, Scripture, and Tradition, we present our defense of Christmas.
Since there is no date for the Nativity recorded in Holy Scripture, we rely on the testimony of the Church Fathers and of history to get an answer to the question, “When did Christmas take place?”
First, let us see the essential significance of the Savior’s birth at the time usually attributed to it. The winter solstice, the astronomical event which recurs every year, is traditionally said to be the birthday of the Messias. To elucidate the meaning of this fact, we will turn to Saint Gregory of Nyssa (+ 385 or 386): “On this day, which the Lord hath made, darkness decreases, light increases, and night is driven back again. No, brethren, it is not by chance, nor by any created will, that this natural change begins on the day when He shows Himself in the brightness of His coming, which is the spiritual Life of the world. It is Nature revealing, under this symbol, a secret to them whose eye is quick enough to see it; to them, I mean, who are able to appreciate this circumstance, of our Savior’s coming. Nature seems to me to say: ‘Know, oh man! that under the things which I show thee, mysteries lie concealed. Hast thou not seen the night, that had grown so long, suddenly checked? Learn hence, that the black night of Sin, which had reached its height, by the accumulation of every guilty device, is this day, stopped in its course. Yes, from this day forward, its duration shall be shortened until at length there shall be naught but Light. Look, I pray thee, on the Sun; and see how his rays are stronger and his position higher in the heavens: Learn from that how the other Light, the Light of the Gospel, is now shedding itself over the whole earth.’” (Homily On the Nativity)
Saint Augustine, a Western Father, concurs with Gregory, the Easterner: “Let us, my brethren, rejoice, this day is sacred, not because of the visible sun, but because of the Birth of Him Who is the invisible Creator of the sun. He chose this day whereon to be born, as He chose the Mother of whom to be born, and He made both the day and the Mother. The day He chose was that on which the light begins to increase, and it typifies the work of Christ, who renews our interior man day by day. For the eternal Creator, having willed to be born in time, His birthday would necessarily be in harmony with the rest of creation.” (Sermon On the Nativity of Our Lord iii) Similar sentiments are echoed by St. Ambrose, St. Leo, St. Maximus of Turin, and St. Cyprian.
To further the beauty of this mysterious agreement between grace and nature, Catholic commentators have shown this to be a marvellous fulfilment of the utterance of St. John the Baptist, the Voice who heralded the Word: “He must increase, but I must decrease.” Literally fulfilled by the ending of the Precursor’s mission and the beginning of the Savior’s, this passage had its spiritual fulfillment in the celebration of John’s feast on the 24th of June, three days after the summer solstice. As St. Augustine put it: “John came into this world at the season of the year when the length of the day decreases; Jesus was born in the season when the length of the day increases.” (Sermon In Natali Domini xi)
Lest anyone find all this Astronomy to reek of paganism, we remind him that in Genesis, it is recorded: “And God said: Let there be lights made in the firmament of heaven, to divide the day and the night, and let them be for signs, and for seasons, and for days and years: To shine in the firmament of heaven, and to give light upon the earth. ” Further, the Magi, those holy men from the East, who came to greet the Expectation of the Nations, were led thence by a star.
“But,” you may say, “the winter solstice is on the 21st of December, not the 25th.” Correct. But if, from the time of the Council of Nicea (325) to that of Gregory XIII’s reform of the calendar (1582), there was a 10-day discrepancy between the calendar and the actual astronomical pattern governing it, then it is entirely possible that a four-day discrepancy had occurred between our Lord’s birth and the Council. We illustrate this possibility as follows: the calendar that many of the Greek schismatics still follow (the Julian calendar) is presently fourteen days off from the Gregorian. This additional four-day discrepancy since Gregory’s time has accumulated over about 400 years.
But now for the meat of the issue: when did it happen? According to St. John Chrysostom, the foundation for the Nativity occurring on the 25th of December is a strong one. In a Christmas sermon, he shows that the Western Churches had, from the very commencement of Christianity, kept the Feast on that day. This fact carries great weight with the Doctor, who adds that the Romans, having full access to the census taken by Augustus Caesar (Luke 2, 1) – which was in the public archives of the city of Rome – were well versed in their history on this point. A second argument he adduces thusly: the priest Zachary offered incense in the month of Tisri, the seventh of the Hebrew calendar, corresponding with the end of our September or the beginning of our October. (This he most likely knew from details of the temple rites which were transmitted to him by a living tradition, supported by Holy Scripture.) At that same time, St. Luke tells us, Elizabeth conceived John the Baptist. Since, according to the Bible, Our Blessed Lady conceived in the sixth month of Elizabeth’s pregnancy (the end of March, when we celebrate the Feast of the Incarnation), she gave birth nine months later: at the end of December.
Having no reason to doubt the great Chrysostom, or any of the other Fathers mentioned; in fact, seeing objections issued only by heretics and cynics, we agree with the learned Doctor and conclude that, by God’s Providence, His Church has correctly commemorated the Feast of His Nativity.
Further, as the continuity of the Old Testament with the New was preserved in two of the principal feasts of the New (Easter corresponding to the Pasch, and Pentecost to Pentecost, the same name in both dispensations), it would have been unlikely for the Birth of the Eternal God into our world not to have had a corresponding feast in the Old Testament. No such feast existed until the time of the Machabees, when the temple was re-dedicated after its desecration by the Greek king Antiochus IV Epiphanes (see 1 Machabees 4). One hundred and sixty-seven years before Jesus, the commemoration was instituted according to what was written: “And Judas, and his brethren, and all the church of Israel decreed, that the day of the dedication of the altar should be kept in its season from year to year for eight days, from the five and twentieth day of the month of Casleu, with joy and gladness.” (1 Macc. 4, 59) To this day, Jews celebrate the twenty-fifth of Casleu (or Kislev, as they say) as the first night of Hanukkah. This year (5757 in the Jewish calendar), 25 Casleu fell on December 12. Even though the two calendars are not in sync, Christmas and Hanukkah always fall in close vicinity. With the Festival of Lights instituted less than two centuries before Our Lord’s advent, the Old Testament calendar joined nature in welcoming the Light of the world on his birthday.
As for the objection, “Jesus couldn’t have been born in the winter, since the shepherds were watching their flocks, which they couldn’t have done in winter”: This is really no objection. Palestine has a very mild climate, and December 25 is early enough in winter for the flocks and the shepherds to be out. The superior of our monastery, Brother Francis Maluf, grew up 30 miles from Beirut, which has the same climate as Bethlehem, both being near the Mediterranean coast, and he has personally testified to this fact.
Jeremy Black has an international reputation for his prolific writings on the past, present and future of politics, diplomacy, warfare, strategy, empire, historiography, cartography, the press, and even the historical context of James Bond. Like other scholars in such fields, he has his roots in diplomatic history, in his case that of the early eighteenth century.
This book exhibits all his trademark qualities as a writer and historian: accessibility, wide and recondite learning, global approaches, long chronological spans, lateral thinking, striking observations, and confident exposition.
Often provocative, he is unfailingly interesting, with the ability to see familiar issues in new ways, and to leave the reader with food for thought on important subjects. This book provides an analytical survey of naval warfare from the ironclad era of the 1860s, through the two World Wars (including substantial chapters on the inter-war period) and the Cold War, to the current era and into the future. As he says at the outset, the focus is on the interplay of technological, geopolitical, and resource (for which read fiscal and economic) issues.
This approach enables avoidance of both narrowly technical force structure issues and of a too broad-brush treatment which pays them insufficient attention. In adopting an explicitly global and comparative framework, Black shows his understanding of military forces in general and navies in particular: the critical nature of their relative, rather than absolute, power and their relevance beyond the purely local or regional maritime domains.
While indicating how naval power was a key to imperial acquisitions and maintenance in the late 19th century, Black points out its limitations in both protracted and truncated continental conflicts. The American Civil War saw the effective initiation of littoral anti-access strategy with coastal-defense monitors and fortifications armed with 15-inch guns. Along with the perennial strategic problem of Canada (the British Empire’s indefensible landward frontier), this precluded British intervention on the side of the Confederacy. The Franco-Prussian War, swiftly decided on land, gave little opportunity for a French blockade to work and underlined for France its age-old need to prioritize the army over the navy and the continental over the maritime front.
Black gives attention to developments and conflicts usually ignored in the Anglosphere, such as the attempt of Korea to modernize naval power and the War of the Pacific 1879-1883 which saw Chilean victory over Bolivia and Peru. While surveying naval technological and doctrinal changes after 1880, Black places them in context by observing how the First World War would have been strategically recognizable to British admirals of past eras – a continental conflict in which Britain was successful by means of economic warfare, trade protection, alliance diplomacy, and expeditionary forces.
Unlike in 1870, the German failure to knock out France at the outset meant that the war evolved into a long and complex confrontation, largely between continental and maritime power, in which the latter was victorious. Time, as usual, worked in favor of maritime power, which then excelled in creating strategic options. Railways, as Black points out, could mobilize the resources of a continent, but ships could deploy those of the world. The great irony was that sea power had over-promised before the war and was seen as under-delivering.
Black draws parallels between the inter-war period and the current era, while distinguishing between the rise of Japan, an insular state, as a naval power in the early 20th century and China today, a continental state whose maritime ambitions have been enabled by the absence of a landward threat since the end of the Cold War. Interestingly, when considering the force structure issues of the 1930s, Black argues that rumors of the death of the battleship were greatly exaggerated and that its expected vulnerability to air attack was not fully borne out in the 1940s. Like other surface warships, it had enduring value, including as a means of naval night fighting.
The longest chapter deals with the Second World War which was marked, as Black points out, by the ‘world ocean’ becoming a unified theatre involving every type of naval conflict. He therefore resists the temptation to divide the analytical narrative into European and Pacific stories but treats them as an integrated whole. This enables consideration of British grand strategy in having to tackle an ultimately impossible task of war with three enemies in two hemispheres (the strategic nightmare of the British Empire triggered by the rise of a hostile extra-European naval power in Japan – a problem which had never arisen during the age of sail).
The German blitzkrieg of 1940 repeated the outcome of 1870 with the rapid defeat of France in a land war. But the pivotal difference was British naval-maritime power which enabled prolonging of the conflict. Black argues correctly that Britain was not in real danger of invasion, given the strength of the Royal Navy and the inadequacy of German amphibious capability. The critical struggle was of course in the Atlantic, where air cover against U-boats had to be progressively developed: a need insufficiently anticipated by both the RAF and the RN.
Striking at Pearl Harbor, Black argues, was not in Japan’s interest when the Pacific naval balance of forces was in the IJN’s favor in terms of a campaign in South East Asia. He also explores the interesting counter-factual of what he sees as the lost Japanese opportunity to project greater naval power in the Indian Ocean, threatening the British position in India, rather than fighting wholesale against the US Navy in the Pacific.
Black divides his treatment of the Cold War into two chapters dealing with the period of American dominance up to the late 1960s and that of the stepped-up Soviet challenge which followed. He points out the importance of naval power even in a period of confrontation rather than conflict, including its role in globalising great power rivalry and conducting regional wars as in Korea and Vietnam.
The US alliance system, in both the Atlantic and Pacific areas, was very largely a construction of sea power, and created issues of planning, procurement, and interoperability greater than those which had faced Britain and the Dominions in an earlier era. The Falklands War, as a successful episode in sustained maritime warfare in the missile age, encouraged both US political commitment to a maritime strategy and Chinese interest in naval power projection.
Black’s discussion of the post-Cold War period recognizes the complexities of naval power in an era of state and non-state actors, network-centric and asymmetrical capabilities, new awareness of the operational level of warfare at sea, and the high financial cost of cutting-edge naval technology. He is good on the politics of US naval power in a democratic age, on force structure issues for the major and medium powers, and on the psychology of Chinese maritime strategy.
That navies do not carry the baggage of association with the interventionist wars following 9/11 is a political asset for them, especially in a period of growing maritime rivalry. Black argues that despite the political charisma of air power, and given the largely unfulfilled promise of trans-Eurasian land transport, navies are of enduring relevance when population growth and economic activity are concentrated in littoral cities, and when global maritime trade continues to increase in capacity and cost-efficiency.
While appreciating how warships are increasingly threatened by land-based weaponry, he recognizes the sheer strategic value of naval surface forces. Perhaps the fundamental choice facing advanced navies today is between fewer, more costly, highly capable units and more numerous units which are less capable but cheaper and more readily risked. One should add that the latter option probably puts more lives at stake.
If there is a criticism to be made of the book, it is that somewhat more emphasis could have been placed on the human element of naval capability. But there are interesting observations, for example on the superiority of US naval leadership education between the Wars which fostered the higher operational and strategic problem-solving skills evident in the defeat of Japan.
The book takes advantage of the explosion of modern naval historical writing over the last generation, and as usual Black’s footnotes are worth a read. Australian developments, from the creation of the RAN to the Defence White Paper of 2016, are given attention and placed in global context.
While each reader will find opinions with which to differ, the book has something informed, perceptive, and sensible to say about virtually everything within its scope, and is highly recommended for its rich and intriguing detail as well as thematic imagination.
Dr. John Reeve is Honorary Senior Lecturer in History at the University of New South Wales, Canberra, Australia. This review originally appeared in the Australian Naval Institute.
The image shows “The Battle of Jutland,” by Montague Dawson, painted ca. 1949.
Celebrated with great pomp at the end of the last century, the bicentenary of the French Revolution (1789-1989) did not fail to rekindle debates and controversies over the interpretation of the event. Many French intellectuals and academics were still dreaming of the “blessed” era when Clemenceau invited them to take the “Revolution as a bloc.” After all, it was then justifiable, or at least excusable, to pass over in silence the Terror, the Vendée “genocide,” and the terrible treatment inflicted by the Republic and its leaders on the “monsters,” the “sub-humans,” the “execrable race,” which it was deemed appropriate to “exterminate” or of which the nation had to be “purged.”
Under the blows of foreign authors, in particular English-speaking ones, it had to be admitted that the “heroes” of the revolutionary epic could not escape historical research. The corruption of Danton, the intrigues of Mirabeau, the paranoid delirium of Robespierre, the fanaticism of Saint-Just, the violence of Marat, the deceit of Hébert, and the villainy of Barras were very troublesome. These men hardly corresponded to the idealized image that Republican education (primary and secondary school) had long given of this period to legitimize the foundations of a regime that had become uncertain. Many works already seemed old and outdated: the Robespierrolatry of a Laponneraye (1842), or the hagiographies of Danton by Quinet (1865), of Saint-Just by Hamel (1859), or of Hébert by Tridon (1864). But on the whole, the myth of the Revolution as a veritable monolithic bloc still held firm. No one imagined the magnitude of the earthquake that a group of researchers and academic historians would cause in the late 1980s.
For nearly two centuries, the theories of interpretation of the Revolution have opposed and clashed with one another. But justification, advocacy, and respect for the vulgate remained the rule in research and higher education – a strict, imperative prescription that no reasonable researcher could break without risking his career.
Diverse and contradictory, the theses and interpretative theories of the French Revolution can be grouped into three categories. Of course, the historiography of the subject cannot be reduced to these three antagonistic schools, but this classification at least has the merit of clarity and convenience.
The first school of thought sees the Revolution as a mythical phenomenon, as a revelation of absolute values pushed onto the stage of history under the pressure of Justice, Liberty and the People. Suddenly enlightened and responding to the call of revolutionary divinities, the People spontaneously revolted against tyranny. The archetype of this dogmatic literature is the Histoire de la Révolution française by Jules Michelet, published from 1847 to 1853. It is perpetuated, to varying degrees, in the spirit of primary and secondary education textbooks and in cultural news delivered by the mainstream media. We find it sometimes in the liberal-Jacobin form, sometimes in the socialist form, the latter mainly deriving from L’histoire socialiste de la Révolution française, by Jean Jaurès (1901-1904).
Finally, a third school, that of the “traditionalist” or “counter-revolutionary” current (Edmund Burke, Joseph de Maistre, Louis de Bonald), considered the Terror the fruit of the principles of 1789 and, more generally, held that revolutionary logic inevitably leads to terror. One part of this approach, the supporters of a conspiracy theory, refers primarily to the works of the Jesuits Augustin Barruel and Nicolas Deschamps, or to those of Crétineau-Joly and Monseigneur Delassus. According to them, the French Revolution was the fruit of a triple conspiracy hatched by Jansenism, Masonry and other sects such as the Illuminati of Bavaria. This conspiracy theory has been the subject of fierce criticism from pro-revolutionary historiography, which deemed it fanciful and untrustworthy. However, it received unexpected reinforcement from Marxist or socialist historians, like Albert Mathiez, and Freemasons, like Albert Lantoine and Louis Blanc, who, without using the term “conspiracy,” insisted heavily on the “project” and the “plan” of the Jacobin group and of the Masons, which could not be fully realized solely because of the lack of maturity of the masses and their ignorance.
The proponents of this third school point out that, for the most conscious protagonists of the Revolution, the revolutionary movement was imagined and executed against Christianity, against the Church, and in the last analysis against God. This is the thesis set out in the works of Louis Daménie, La Révolution (1970), and Jean Dumont, La Révolution française, ou, Les prodiges du sacrilège (1984), for whom the Revolution persecuted and oppressed the Church, the people, and God, because it was anti-Christian, capitalist and bourgeois.
Since the 19th century, the various currents dominating French political life have not ceased to oppose and tear each other apart on the subject. On the right, for the Orleanists, the Bonapartists and soon the nationalists, 1789 was sacred, 1793 hated. For legitimists and traditionalists, the distinction was not appropriate: 1789 announces 1793. Alongside them, Maurras’s Action Française placed a heavy responsibility on an Old Regime that had been contaminated for too long. The left chose 1793, saying “No” to so-called human rights, which it stigmatized as individual, formal and bourgeois rights. The fascists of the 20th century followed suit: Drieu la Rochelle explained that Hitlerites and Mussolinians wanted to break with the legacy of 1789, which was liberal, but not with that of 1793, which was Jacobin and totalitarian.
For more than a century and a half, the battles of the Revolution, like its internal struggles, were an inexhaustible fuel for the political battles and ideological quarrels of the time. Under Louis Philippe (1830-1848), after the adventures of the Revolution and the Empire, French liberalism drew lessons from the double experience. It refocused on the right. The golden mean, eclecticism, compromise, seeking the middle ground were now the watchwords. A moderate historiography was forged by Thiers (Histoire de la Révolution, 1827) and Lamartine (Histoire des Girondins, 1847). But gradually the official discourse was radicalized on the left.
At the turn of the 20th century, outside of the usual minority, questioning the “Revolution” was taboo. Its excesses were sometimes seen as regrettable, but the Revolution itself was seen as the necessary step toward universal equality, freedom and prosperity. The consensus rested on a few words: “Let us forget, and not question what has been achieved.”
The first specialized chair in the history of the Revolution was created at the Sorbonne in 1885, on the initiative of the Paris Municipal Council. It was occupied by Alphonse Aulard. The act was clearly political: Aulard had until then taught only literature and philology. On the other hand, he was an ardent republican, appreciated by the authorities and by Clemenceau. Radical and aggressive, he was soon outflanked on his left by his pupil and rival, Albert Mathiez. The master was radical and anticlerical, the disciple radical-socialist and Robespierrist. The two antagonists imposed their truth on the Sorbonne.
Rallying to communism in 1917, Mathiez imposed himself as Aulard’s successor in 1926. To his posterity belonged primarily Georges Lefebvre, Albert Soboul, Michel Vovelle and Claude Mazauric – and later, in the 2000s, the pure Jacobins, Jean-Pierre Jessenne and Michel Biard. All were or had been militants, sympathizers or “fellow travelers” of the Communist Party. They reigned almost unchallenged over the French university world for more than forty years. “The Revolution,” historian Pierre Chaunu would say, “was the privileged place of ideological manipulation padlocked by a Sorbonicole nomenklatura from Mathiez to Soboul… Masters who knew the way, ensured the scholastic self-functioning in a vacuum closed by the monopoly of recruitment.” They have thus built “one of the most beautiful monuments of institutionalized stupidity” (Le Figaro, December 17, 1984).
After the Second World War, the ideological and cultural hegemony of Marxism oriented and directed official historiography. In the 1960s, Albert Soboul still appeared as “the great specialist whose work is essential.” Intellectual terrorism marginalized or condemned to silence the independent, non-conformist researcher. The revolutionary catechism mechanically identified revolutionaries with the capitalist bourgeoisie.
This catechism made 1789 the first step in a process of which the Russian Revolution of 1917 was the final stage, thereby legitimizing the Jacobin = Bolshevik equation.
But times and fashions change. The 1970s and the beginning of the 1980s were marked by a major break. English-speaking historians, little suspected of espousing the quarrels of French academics, were the first to open the breach. Let us take two titles among others. The first, The Debate on the French Revolution, 1789–1800 by Alfred Cobban, was published in 1950 and translated into French in 1984, after the author’s death. This incisive book helped shake the commonplaces and conventions of the Marxist vulgate, destroying the simplistic thesis of a bourgeois and capitalist revolution that supposedly replaced an old feudal regime.
Not everything in Cobban’s work is beyond criticism. Thus, he is wrong when he insists on a population explosion. Pierre Chaunu demonstrated, with Jacques Dupâquier and Jean-Pierre Bardet, that France was the country in Europe where the population had increased the least (from 22 to 28 million in a century), and whose demographic dynamism had been broken, everywhere, twenty years before 1789. But we must nevertheless salute in Cobban the first truly effective iconoclastic approach.
A second English work, quite remarkable, should be cited: The French Revolution and the Poor by Alan Forrest (1981; translated into French in 1986), in which the author masterfully dismantles the mechanism of the evils of revolutionary ideology, the deadly refusal of realities on the part of revolutionary leaders.
François Furet and Denis Richet took up where these two English-language authors left off. In La Révolution française (1965), they revived the already old thesis of the slippage from a first, liberal revolution of the elites to a second, Jacobin revolution. Their position was still on the left, since they refused to take the plunge and admit that 1793 was, to a certain extent, already contained in 1789; in other words, they refused to accept that the logic of the Revolution carried massacre, extermination and genocide within it. Nevertheless, they did undermine Marxist dogmas.
A former Communist (1947-1959), François Furet did not yet distinguish clearly enough between Jacobin liberalism (Latin, essentially egalitarian) and English liberalism (essentially elitist, even aristocratic). But in 1978, in Penser la Révolution française, he rehabilitated the forgotten and proscribed analyses of Tocqueville, of Taine, even of Augustin Cochin. Notably absent from his book is Edmund Burke, the brilliant Irishman who already discerned 1793 in 1790. Furet’s work was later continued by Patrice Gueniffey (see La politique de la Terreur: essai sur la violence révolutionnaire, 1789-1794). But it may also be useful to recall the words that Pierre Chaunu confided to me: “When Furet and I discuss in private the origins, causes and consequences of the Revolution, know that we are 90% in agreement.”
An important point must be stressed: the reflection initiated by French academic historians on revolutionary terror came at the very moment when Marxist ideology was experiencing its first major cultural setbacks. At the top of the state (François Mitterrand was then president), reactions were quick to come. Max Gallo, spokesman for the socialist government, sounded the alarm.
Gallo, historian, novelist and essayist, a former Communist who joined the PS in 1981, then reacted as a guardian of the temple. (He would leave the Socialist Party and support Sarkozy’s UMP in 2007, but in the 1980s he was at the forefront of the political and cultural struggle of the Mitterrandist left.) Censuring the new Muscadins in an Open Letter to Maximilien Robespierre, he churned out virulent articles and statements against them in the media. The politically condemned academics were accused of nothing less than Vichyism, even Nazi nostalgia. They were “guilty,” he said, of spreading a “right-wing” vision of the Great Revolution. Behind him stood the ex-fellow travelers of a communist movement responsible for more than 100 million deaths around the world, all shamelessly setting themselves up as masters of republican morality. Ridicule, it seems, is not fatal!
It did not matter to Gallo and his political friends at the time that non-university historians, such as Jean-François Chiappe or Jean Dumont, published anti-revolutionary works. What was unbearable and unacceptable to them was “the betrayal of the University.”
One of the main targets of the socialist authorities was Pierre Chaunu, professor at the Sorbonne and member of the Institut de France (Académie des sciences morales et politiques). The prestige of this Protestant historian was considerable. A renowned Hispanist (author, with his wife Huguette Chaunu, of Séville et l’Atlantique, 1504-1650, in 11 volumes), a specialist in classical European civilization (La civilisation de l’Europe classique) and in the Europe of the Enlightenment (La civilisation de l’Europe des Lumières), and a founder of “quantitative history,” he was one of the outstanding figures of French academia.
The genocide thesis was supported by Reynald Sécher, first in 1986 and then, 25 years later, in Du génocide au mémoricide. The Robespierrist point of view, which denies the genocide, is still developed, in particular by Jean-Clément Martin. The losses are estimated at a minimum of 100,000 souls out of a total population of 800,000 inhabitants.
Clearly, the death of Leninist-Soviet eschatology has done immense damage to the Marxist and crypto-Marxist historiography of the French Revolution. The simplistic idea, popularized by vulgar Marxism, that the French Revolution was a bourgeois revolution which destroyed feudalism and replaced it with a new, essentially capitalist regime, has been thoroughly called into question. The objections are significant.
Professor Emmanuel Le Roy Ladurie, himself a former Communist, sums them up in these terms: “The first is that the bourgeoisie which made the revolution is not a capitalist class of financiers, traders or industrialists, who were then ‘apolitical’ or ‘aristocrats.’” The bourgeoisie was thus juridical; it was composed of officers, civil servants, lawyers, doctors, intellectuals, whose role and action could not consist in giving birth to an industrial revolution. Second objection: the example of England shows that in a rural society, like 18th-century France, the evolution towards agricultural capitalism passed through the great seigniorial domain. On the contrary, the Revolution tended towards the fragmentation of farms and further retarded their technological progress. Finally, the third objection: “it put a definite halt to big capitalism, that is to say, colonial capitalism, foreign trade and big industries.” Foreign trade did not regain its high level of 1789 until 1825. The Revolution “represented in a sense the triumph of the landed strata of society, conservatives, large and small, including many former nobles… a landed bourgeoisie… and finally small peasant owners” (Le Figaro, December 17, 1984).
The revolutionary explosion of the summer of 1789 appears to be the culmination of the contradictions of the Ancien Régime, which was unable to reform in time. At the origin of the Revolution there was the financial crisis – the debt had become too heavy a burden for the finances of the kingdom. The expenses of the American War were too great. After forty years of economic expansion and prosperity, the situation deteriorated in the 1780s. A succession of bad harvests, the great drought of 1785, and a particularly harsh winter increased the difficulties.
At the same time, an “aristocratic reaction” from the traditional nobility and the nobility of the robe challenged absolute monarchy and demanded parliamentary control, which would allow it to better retain its privileges and prerogatives in the face of the rise of a bourgeoisie that desired its share of power. To this can be added the evolution of ideas, the social critiques of Jean-Jacques Rousseau and the encyclopedists, the determining role of “Sociétés de pensée.” Finally, we must also take into account the errors of Louis XVI, who was undoubtedly a good man, but who was not on top of things.
Scientific history has shown that the Revolution ruined France; that it broke her economic momentum; and that it downgraded her among the nations. This is the conclusion of the most serious studies. Le livre noir de la Révolution française, published in 2008, leaves little room for doubt. But while academic research has shed all kinds of light on the horrific gray areas of the French Revolution, its results still needed to be made accessible to the general public. Pierre Chaunu achieved this objective, thanks to a comprehensive, rigorous and attractive work, Le grand déclassement (1989), about which it is appropriate to say a few words.
Republican, Protestant and Gaullist, hardly suspected of sympathy for the “complex of counterrevolutionary sensibilities,” Pierre Chaunu (1923-2009) has undermined the sacrosanct myth of the two revolutions, one liberal, the other authoritarian, centralizing, liberticide – striking head-on one of the pillars of official historiography. And he got it right.
Let us sum up his argument. In 1789, France ranked roughly fifth in size in Europe; but in power, it commanded nearly a third of available resources. Overall, it was pretty much first. An example: the literacy rate was higher in England, Scotland and a few provinces of Prussia, but France had many more literate people than England, Scotland and Prussia, and practically as many as the rest of Europe. Between 1710 and 1780, the number of those who reached the stage of independent reading and fluent writing tripled, even quadrupled. The Revolution brought about a great ebb in this area; progress did not resume until 1830.
In 1789, France had at least 28 million inhabitants. Its population growth rate of 0.5% per year was one of the lowest, if not the lowest, in Europe. Sixteen percent of the population was urban. The distribution of property was relatively more equitable than in the rest of Europe. Two million households owned 40% of the land (with some 5% of communal property). Of the rest, 25% belonged to the nobility, 10% was Church property, and 25% bourgeois property. The Third Estate therefore owned 65% of the land, and the so-called “clergy” property was, in large part, social property, which funded schools and hospitals. Peasant property was encumbered with seigneurial rights, but these were more irritating and vexatious than limiting. Commoner land was on average more expensive than noble land.
Overall, seigneurial rights were less heavy in France than anywhere else on the continent, except in England. Nowhere else was peasant ownership so widespread.
England broke the record in the West for the concentration of land in a few hands. But in France, the Revolution would change little. Since taxes were generally heavier afterwards, the levy on the peasant mass was roughly the same in 1815 as in 1789. Social upheavals affected at most one tenth of the population. The Revolution redistributed only a tenth to a fifth of the land by value, and therefore brought wealth and prestige to just a minority of its apparatchiks and associates. It all really came down to a few permutations at the top.
In 1758, the per capita tax burden in England was double that of France; by 1789, taking France as an index of 100, England stood at around 190. At equal wealth, from the second half of the 18th century, the English paid at least one and a half times as much as the French. France, a comparative tax haven, would nevertheless engage in a rolling tax strike. Almost 15% of GNP and 3.5% of the population were in the service of the state. The King of France had ten times fewer men on hand to control his capital than the King of Prussia or the British Parliament.
The entire 18th century was driven by the halt that the Parlements, recklessly reestablished in their prerogatives by Louis XVI, brought to ministerial reform initiatives. Here was the drama of the monarchy. From 1774 to 1789, the Parlements, where only the privileged strata of society were to be found, won across the board. The Ancien Régime was paralyzed by the encroachment of the law. The Parlements represented society neither in their composition nor in their thinking. Never had the court been less costly, and never had its usefulness been less evident. The system was jammed, unable to match its resources to its needs; it was considered tyrannical when it was only powerless.
In 1789, most French people were Catholics and most were devout – 97 to 98% of the French believed in God, and more than 80% were attached to their Church. On the intellectual and moral quality of the clergy, and on their generosity, which redistributed a good half of their income to the poor and a share to hospital and school assistance, there was no real criticism, no bad marks. Better still, the almost unanimous demand of the registers of grievances was that the priests, who were well-loved and whose worth was widely felt, should be given more.
Finally, no one had died of hunger since 1709. It was not until 1794-1795 and runaway inflation that the specter of famine loomed again and people died of it as before. In the 1780s, against a population growing at 0.5% annually (a rate lower than England’s), production increased at a rate of 1.9%.
Paradoxically, in general, it is prosperity, not misery, that carries the risk of revolution. French society remained sufficiently open. France lived, changed, evolved. The state was jammed, motionless, paralyzed. Between the two, tensions continued to grow.
The Revolution began with plunder, the easiest kind – that of the property of the Church. Monastic France was soon sold off. The finest jewels of Romanesque and Gothic art were removed, disassembled, sawed apart, smashed, and looted. The artistic rampage was immense. No modern war has destroyed so much wealth.
The Revolution was not the mass phenomenon they would have us believe. There were 50,000 Parisian sans-culottes, 80,000 profiteers of national property, and 200,000 onlookers. The number of angry, hateful, guillotine-happy dechristianizers hardly exceeded 40,000. But one wins or loses by convincing the small active minority.
When, on July 12, 1790, the civil constitution of the clergy was adopted, only 4 bishops out of 136 agreed to swear to it. 44% of the clergy swore (40% after withdrawals). This is not much, because not to swear meant the loss of employment and of all resources, misery, threats to freedom and life, banishment from the community. Since dozens of episcopal seats had to be filled all at once, Talleyrand, “a pile of shit in a silk stocking” as Napoleon called him, devoted himself to the task; he was the only one of the four bishops who agreed to perform the consecrations. All those poor juring priests, some of whom claimed to have rediscovered the simplicity and rigor of the primitive Church, would know by the end of the winter of 1791 what the promises of the constituent deputies were worth – a mere reprieve before the guillotine.
Thanks to the assignat, famine, and the ruin of the economy, as many people and more died during the winter of 1794-1795 as from the guillotine. The famous assignat paper money was criminal folly. To pay off its promises, fuel its fantasies, and finance its war of aggression against a peaceful Europe, the Revolution had only one means – inflation, the most unjust of taxes.
The mortal sin of the Revolution was, after religious persecution, gratuitous war. The war allowed murder to be legalized, any internal opponent being equated with foreign enemies.
For the period 1792 to 1797, losses amounted to at least 500,000 men. Disease killed three to four times more than bullets. If civilian losses are added – men, women, and children, mainly in the Vendée – the revolutionary period came to cost nearly 1 million human lives. The Empire would add a second million to the first. In total, 4.5 to 5 million dead across Europe, in a Europe of fewer than 150 million souls. The responsibility, all the responsibility, for the outbreak of war rests with the revolutionary power. It deliberately chose war; it provoked, attacked, invaded.
The war broke France’s growth; it slowed growth everywhere else too, even in Great Britain, though there the slowdown affected only consumption. France had, per capita, caught up with England by 1789; by 1799 the ratio stood at 100 to 60-65 in England’s favor. Ten years of assignats and the great massacres definitively downgraded France. The gap would never be made up.
Let Pierre Chaunu conclude concisely: “While all the work of history freed from myth establishes that the chaotic process which created the revolutionary vortex was the effect of chance, everything that followed – September 1792, Fouquier-Tinville (public prosecutor of the Revolutionary Tribunal), ruin by the assignat and by the war, the destruction of the artistic, moral and religious heritage, the depopulation and the devastation of the demographic impetus, the genocide-populicide of the Vendée and the populicides of Lyon, Toulon and elsewhere – all this follows implacably from the most implacable revolutionary logic. Once the Revolution is born, it kills. Death is its profession, annihilation its end.”
Whether a product of chance, the execution of a deliberate project, or the direct or indirect consequence of one or more social factors, the Revolution remains an open question – the debate on its interpretation is not about to end. But one point is clear: for rigorous and serious history, the Revolution led France to a terrible moral, social, economic and political collapse.
Arnaud Imatz, a Basque-French political scientist and historian, holds a State Doctorate (DrE) in political science, is a corresponding member of the Royal Academy of History (Spain), and is a former international civil servant at the OECD. He is a specialist in the Spanish Civil War, European populism, and the political struggles of the Right and the Left – all subjects on which he has written several books. He has also published numerous articles on the political thought of the founder and theoretician of the Falange, José Antonio Primo de Rivera, as well as the Liberal philosopher, José Ortega y Gasset, and the Catholic traditionalist, Juan Donoso Cortés.
The image shows James Gillray’s “The Zenith of French Glory: The Pinnacle of Liberty. Religion, Justice, Loyalty & all the Bugbears of Unenlightend Minds, Farewell!”, a satire of the radicalism of the French Revolution, February 1793.