The Great Global Warming Fiction

There is absolutely no way that so-called Greenhouse Gas emissions can cause warming or generate heat. It is a fiction spread by such august bodies as the IPCC, promulgated by news channels like the BBC and CNN, and even taught to unsuspecting children in schools.

Far from greenhouse gas emissions generating heat, precisely the opposite is true.

What is the principal Greenhouse Gas?

It is Water Vapour, which constitutes 90% of all Greenhouse Gases. How is it generated? Answer: As the infrared radiation from the Sun strikes the surface of the oceans, liquid salt water is turned into a gas, Water Vapour, by evaporation. Evaporation is cooling, not warming – every simpleton knows that.

This gas, this water vapour, then rises up by convection and condenses into clouds. Clouds are also cooling as they scatter the incoming solar infrared radiation. Then rain, snow or sleet falls from these clouds. What is a common observation, most remarkably in summer?

The temperature drops, as the atmosphere cools rapidly. So here we see that the principal Greenhouse Gas leads to cooling all round. It is hardly surprising that we do not hear calls for emissions of Water Vapour to be culled.

What is even more remarkable is that the salt water of the oceans is turned into fresh water to fill our reservoirs, lakes, rivers and streams, which in turn find their way out to the sea. This is the miracle of the Water Cycle – truly the miracle of water into wine, of salt water into fresh.

The transport of perishable foodstuffs depends upon refrigeration, whether by lorry, by aircraft or most importantly by container ships at sea. What is the principal refrigerant?

It is Carbon Dioxide, that most maligned of all the greenhouse gases.

Far from warming the Planet, as we are supposed to believe, that clear colourless gas is not only a coolant, but also a fire retardant and a refrigerant.

Ah! say some Physicists, sagely nodding their heads, but Carbon Dioxide absorbs infrared. In layman’s language that means it ‘warms up’. But then everything under the Sun absorbs infrared and warms up except three things.

The two principal gases of the atmosphere, Nitrogen and Oxygen, are transparent to infrared, whether incoming or outgoing, so they do not warm from the infrared. What is the third item, if one can call it an item?

It is vacuum; it is nothing. One cannot warm ‘nothing’, since there is nothing to get warm.

It is absolute folly to dismiss the Water Cycle; it is even greater folly to forget or misunderstand the Carbon Cycle. Carbon Dioxide is food for green plants on land and sea. We cannot live without Oxygen, and green plants and ocean plankton need Carbon Dioxide as food, from which Oxygen is produced as a by-product through Photosynthesis.

To treat Carbon Dioxide as a pollutant is one of the biggest mistakes mankind could make and has made these past 30 or more years. Indeed, all the bad air in cities could be remedied by encouraging green spaces, by the planting of more trees. We all need to be green, truly green, not hysterical and political green, which is another animal altogether.

Great Nature already has the systems in place to produce fresh water from seawater, the Water Cycle. Great Nature already has the systems in place to produce fresh air from foul air – the Carbon Cycle through Photosynthesis. So we do not need to Save the Planet, since the planet knows better than any climate scientist how to save itself.

This article is courtesy of Principia Scientific International.

The photo shows, “Cloud Study,” by Alexandre Desgoffe, ca. 19th-century.

Bad Theology And Atheism

I have been slowly reading my way through John Gray’s book, Seven Types of Atheism. It is not an argument with Atheism so much as a study of its underpinnings, strengths and weaknesses (Gray himself is an atheist). Apparently, what someone does not believe in is just as important as what someone does believe in. Not all atheisms are equal.

I was particularly struck by this note regarding the non-belief of John Stuart Mill: “Mill never claimed to have formulated a unified view of the human world. Even so, he founded an orthodoxy – the belief in improvement that is the unthinking faith of people who think they have no religion.”

It will sound quite odd to many readers, but good Christian theology has much in common with good atheist thought. Indeed, some feeble attempts at atheism are often the first genuine efforts of theological discipline taken up by many people.

An inadequate God inherited either from poor teaching or mere cultural assimilation is fertile ground for unbelief. The nakedness of such a thing (“it’s just me and the universe”) can also provide a fertile ground for serious thought.

The general dividing line within atheism seems to be between versions that represent little more than godless examples of bad Judaeo-Christian thought and versions that take seriously the absence of meaning implied by the rejection of a God-story.

These latter accounts seem to fall into either a semi-oriental mysticism in which the universe itself plays the role of God (think Carl Sagan in his last years) or a true nihilism whose world is perhaps the least comprehensible of all (and the rarest).

Human beings seem to be created with a longing for meaning. We not only experience the world, but want to make sense of it as well. That sense-making is a thread of continuity that joins every religious tradition in history.

The scope of the story may vary from place to place, but the existence of “story” is ubiquitous. Meaninglessness is not a condition that is easily embraced – indeed, it could be viewed as a form of mental suicide, with or without the dying.

Stanley Hauerwas famously defines modernity in terms of story: “The modern project is the attempt to produce a people who believe that they should have no story except the story that they choose when they had no story.”

This is another way of describing the “unthinking faith of people who think they have no religion.”

The 19th century was marked by a number of key figures whom the philosopher Paul Ricoeur dubbed the “Masters of Suspicion.” He specifically named Freud, Marx and Nietzsche. I would add a number of other 19th century names as well.

Their critique (suspicion) was turned towards various aspects of life that were often taken for granted. Freud “unmasked” the commonly accepted figure of God (at least as found among 19th-century Austrian Jews) and saw it as nothing more than a projection of the “super-ego.” Marx unmasked the dark drive of history as a story of exploitation. Nietzsche reduced the world to the story of the raw will to power.

These “giants” have had many followers, both intentional and unintentional. What they created can best be seen in their “suspicion.” Is what we perceive in fact the case, or do we live in a world of self-deception? When we suspect the actual process of thought and understanding itself, every answer has a way of being unsatisfying.

The latest iteration of modern suspicion has been directed towards traditional perceptions or understandings of gender/sex (I’m never sure which is the right word anymore). A traditional “binary” approach is now deconstructed as a false world-view, imposed from above. In truth, the “technique” of questioning and replacement has been going on for nearly two centuries now, and shows no sign of abating.

The Masters of Suspicion were not entirely wrong. Much of what passed for Christian belief deserved questioning. An inadequate account of God should be as problematic for Christians as it was for Freud.

The glib certitude of the wealthy deserved to be unmasked. A proper Christian understanding of justice had been set aside and left the world “upside-down.” Marx forced a conversation. Nietzsche is another matter, one that I am not well able to articulate. For me, he seems to unmask the naked forces of dominance that claim to be otherwise.

There is, however, a conundrum within these suspicions. When everything is suspect, even the suspicion is suspect. We can be left with a paralyzing agnosticism that doubts even its own agnosticism.

Nevertheless, many Christians themselves continue with an “inadequate” God. Gray points out that the notion of improvement, held by the “people who think they have no religion,” is a belief learned from Christians who had begun to secularize their faith. Improvement (“better world,” “progress”) is bad Christian theology that serves little purpose other than to underwrite the modern nation-state and consumer capitalism. It is certainly not the story of Jesus Christ.

A very deep strain of Orthodox theology is described as “apophaticism,” or the “via negativa.” It describes a knowing by “not knowing.” It is, if you will, a denial of everything God is not, in order to know who God is. As such, it is constantly deconstructing the many efforts of humanity that seek to create false gods.

Conversations that seek to unravel the circular reasoning of suspicion often end in frustration. Everything can be questioned, including the questioning. However, the Christian faith rises and falls with the death and resurrection of Jesus.

Everything flows from that moment. All Christian thought is, properly, a commentary on Pascha. It is not, at its heart, an argument from reason. Reason and its various theories, whether of meaning, or human nature, or social existence, science, etc., did not create the resurrection of Christ. It comes like an event that inserts itself into the futility of our existence.

I take comfort in the thought that the Scriptures bear witness to the lack of understanding that greeted Christ’s resurrection. The disciples did not get it. His resurrection is not an answer to a question they were asking. It seems, at first, to have confounded them.

Nevertheless, He rose from the dead. The life of repentance is a constant embracing of Christ’s Pascha. It is a giving of ourselves to what has been given to us. It is the rejection of every pretense that would erect life on some other basis (as though there were another basis).

That single event of Pascha is the beginning of Christian thought. The best of Christian thought (in my estimation) continues to allow the resurrection of Christ to unmask its every attempt to build a world (or a faith) on any other basis.

The resurrection of Christ is the judgment of this world. Its judgment is truly kind. It is truth demolishing the falsehoods that imprison us, freeing us even from ourselves.

Christ is the Master of Truth, the Master of Life and Death, the Master of Love and Forgiveness. He even forgives our suspicions.

Father Stephen Freeman is a priest of the Orthodox Church in America, serving as Rector of St. Anne Orthodox Church in Oak Ridge, Tennessee. He is also author of Everywhere Present and the Glory to God podcast series.

The photo shows, “Christ’s Entry Into Brussels in 1889,” by James Ensor, painted in 1888.

The Death Of True Manliness

Just as it is difficult to gain a true perspective of the size of a mountain when one is actually on the mountain, so it is difficult to understand how revolutionary a change is when in the midst of the revolution.

And we are today in the midst of a great revolution, a dramatic shift in the way we understand human nature. That is, our culture in the West is changing the way it understands gender. This change is all-encompassing, and expresses itself in such large movements as feminism, gay rights, and now transgender rights.

The change is not a matter of refining or tinkering with past approaches. Past approaches are not so much moderately altered as completely overthrown. The revolution regarding gender is radical and vociferous, and like all devout revolutionaries, its advocates are taking no prisoners, which accounts for much of the rhetoric and verbal violence in America’s culture wars.

If the Lord tarries, historians hundreds of years hence will look back on the late twentieth and early twenty-first centuries as the time when the West waged war on the way its ancestors understood gender differences from time immemorial. Those reading sociology will speak of a fundamental paradigm shift. Those reading Screwtape will wonder if the revolution was not the result of far-reaching decisions taken by “our father below”.

The ancient approach saw gender as a divine gift. Judeo-Christian texts spoke of our gendered existence with its resultant differing roles as ordained by God at creation: “So God created man in His own image; in the image of God He created him; male and female He created them” (Gen. 1:27).

Islam inherited this understanding of gender, and even the pagans who did not read Scripture of any kind understood maleness and femaleness as basic and stable categories. That is why they privileged legal marriage above unregulated sexuality. Certain pagans (Greeks for example; the Romans were slower to follow) had no problems with pederasty, but they still insisted on heterosexual marriage as the foundation for a stable society.

As far as everyone until the mid to late twentieth century was concerned, you were born either male or female, and certain rare anatomical or other medical anomalies aside, that set you on the path of life and provided you with specific roles and responsibilities.

Men were to behave in a certain way, as were women. To be sure, the prescribed behaviours contained a fair degree of latitude—“tom boy” behaviour, for example, was still acceptable for girls, and men could knit if they wanted to—but the basic path was fairly clear, even if it was wide. And this was not confined to the Judeo-Christian or the Islamic traditions. As C.S. Lewis illustrated in his book The Abolition of Man, these norms could be found in all cultures. He termed this “the Tao”, and recognized it as the universal practice of mankind.

The revolution in the West began in the 1960s, with what was then called “Women’s Lib”. Women’s Lib found cultural acceptance because much of it seemed to be simple common sense, and because the Suffragette movement demanding for women the right to vote had partly prepared the way for it.

Though not introducing radical or harmful change in the basic understanding of gender roles, Women’s Lib prepared people to regard change as essentially a good and much-needed thing, and this openness to change would continue to govern basic attitudes when more far-reaching changes were proposed.

Women’s Lib also drew heavily upon the language of racial civil rights, and presented itself in terms of an analogous struggle. The emphasis here is upon the word “struggle”, since the movement used the tactics of protest (famously with its symbolic bra-burning and its marches), and its labelling of its opponents as enemies of enlightenment and progress. The seeds of a future culture war may thus be traced to this early predilection for protest.

Despite the use of angry denunciation of perceived oppression and inflammatory rhetoric that increasingly characterized the diverse feminist movement, the radical changes first appeared with the gay rights movement. Here too we observe a progression. What began with a simple act of decriminalization continued with a demand for social acceptance of an alternative lifestyle as if it were as valid as traditional marriage.

Thus, first came demands for social acceptance and non-discrimination, then came a demand for the provision of legal civil unions between homosexuals, and then a demand for providing legal marriage for them. Inherent in these demands was the assertion that maleness and femaleness were not all-encompassing roles, but simply anatomical realities which did not bring with them any societal roles or norms.

Thus, one could be born anatomically male but still seek sexual union (socially legitimized through marriage) with another male—or, with both males and females. Anatomy had been definitively sundered from gender role and its accompanying sexual “preference”. Indeed, the very language used—“sexual preference”—presupposes that one gender could be preferred as easily as another.

Formerly, men did not just “prefer” women, but were ordained to this choice, if not by internal sexual desire for women over men, then at least by divine law. Now one could “prefer” male to female as easily and legitimately as one could prefer chocolate to vanilla.

The next step was to sunder anatomy not just from gender role but from gender identity. In this move to legitimize transgenderism, it was asserted that one could be born anatomically male and yet still “be” a woman. There was no objective way to tell if a person “was” a male or a female.

All now depended upon a person’s subjective feelings and which gender one “identified with”. And throughout this long progression of change, its advocates continued to employ the rhetoric of civil rights, indignantly denouncing their opponents as bigots and cultural Neanderthals. The culture wars were now raging loudly. In the din, the voice of the historic Christian Faith, replete with both inviolable standards and subtle nuanced distinctions, was usually shouted down.

Thus those who identify as gay or transgender now occupy the role of noble victim in constant danger of harm, while those who oppose the new revolution occupy the role of dangerous cultural criminals, whose bigoted opposition to the new revolution threatens the very lives of those in the LGBTQ community. Those assigning these roles are often driven by a self-righteousness that takes no prisoners and justifies any amount of hatred, anger, and bullying.

The revolution is poised to continue, driven as it is by its own interior logic. If physical anatomy counts for nothing, then it counts for nothing. If the will (or preference) is sovereign, then it is sovereign. That includes not just the gender of the sexual partner, but also the number of partners. Or the age of the partners.

Paedophilia (or “minor attraction” as it calls itself) is currently beyond the pale of general acceptability, but the landscape of the debate and its borders are shifting quickly. No one living in 1950 could have foreseen the current situation. It is therefore possible that the presently radical call for the acceptance of “minor attraction” will one day become mainstream. Where the revolution will end is anyone’s guess. I myself believe that the end is not yet in sight.

The question remains: what is the problem with the revolution? Who is it hurting? Granted that the gender revolution (or “gender confusion”, depending upon point of view) overturns the way humanity has regarded itself since the beginning, why is that wrong? Much could be said, but a single reply will have to suffice. In the new paradigm offered us, what was once regarded as “true manhood” is labelled toxic in some places, and is fast becoming extinct.

What does it mean to be a “real” man? True manhood involves more than simple sexual “preferences” or the question of who takes out the garbage. It involves primordial self-defining symbolism and emotions springing from the deepest hidden levels.

To be a real man is to relate to those weaker—notably women and children—with gallantry, protection, and self-sacrifice. (Christians will note that this is how Christ, as a real Man, related to His bride, the Church.) We note this in a thousand ways: the man proposes to the woman on bended knee (not vice-versa), and in situations of danger, the man defends the woman even at the cost of his life. And this last example applies not just to the man’s own wife, but to any woman, precisely because she is a woman. Womanhood was considered sacred per se.

This could be observed in the investigations following the sinking of the Titanic: witnesses were emphatic that some lifeboats contained only women and children, the men sacrificing themselves to save them. Doing anything less—taking a space in a lifeboat that could have been taken by a woman or a child—would have violated their manhood. Manhood and masculinity, increasingly derided as toxic by definition, included both the symbolism and actions of gallantry. A true man was a knight.

It is true of course that acts of bravery and self-sacrifice can be and are done by women and children, and of course by homosexuals and transgenders. Anyone can become brave. But that is just the point: since bravery and self-sacrifice are no longer part of what it means to be a man, one does such heroic acts only if one is a hero.

But heroism is not common (which is why it is applauded when found). One may or may not feel oneself called to heroism and bravery. But in the old paradigm a man sacrificed himself not because he felt called to extraordinary heroism, but simply because he was a man. The gender role he inherited by virtue of his anatomy contained within it the moral imperative of sacrificing himself, if need be, for women and children.

It is just this protection that real men once offered that is so desperately needed now. We now rely upon “public education” (i.e. propaganda) and the stigma attached to being politically incorrect to motivate people to gallantry, self-sacrifice, and bravery.

We can see how well this is working (or not working), by how dangerous the nights remain for women and other vulnerable people. The cry of those trying to educate the public is to “take back the night”. More helpful perhaps would be sustained reflection upon how the night was lost in the first place.

Father Lawrence serves as pastor of St. Herman’s Orthodox Church in Langley, BC. He is also author of the Orthodox Bible Companion Series along with a number of other publications.

The photo shows, “Les batteurs de pieux,” by Maximilien Luce, painted ca. 1902 to 1905.

Crucifixion, Part 2

Blood loss from the scourging helped determine the time the victim survived. In any case, victims suffered a long time (at most, days) before falling into prolonged unconsciousness and death. Soldiers typically did not hasten things along because a long and painful death was the point of the execution method. Usually the victim was left on the cross until birds and wild beasts consumed the body.

Death could result from a variety of causes, including blood loss and hypovolemic shock, or infection and sepsis, caused by the scourging that preceded the crucifixion or by the nailing itself, and eventual dehydration. A theory attributed to French surgeon Dr. Pierre Barbet (author of A Doctor at Calvary: The Passion of Our Lord Jesus Christ As Described by a Surgeon) holds that, when the whole body weight was supported by the stretched arms, the typical cause of death was asphyxiation. He conjectured that the condemned would have severe difficulty inhaling, due to hyper-expansion of the chest muscles and lungs.

The condemned would therefore have to draw himself up by his arms, leading to exhaustion, or have his feet supported by tying or by a wood block. Indeed, the executioners were sometimes asked to break or shatter the legs of the victim, an act called crurifragium, which was also frequently applied without crucifixion to slaves.

This act speeded up the death of the person but was also meant to deter those who observed the crucifixion from committing offenses. Once deprived of support and unable to lift himself, the victim would die within a few minutes.

Experiments by Dr. Frederick Zugibe, former chief medical examiner of Rockland County, New York have revealed that, when suspended with arms at 60° to 70° from the vertical, test subjects had no difficulty breathing, only rapidly-increasing discomfort and pain. This would correspond to the Roman use of crucifixion as a prolonged, agonizing, humiliating death.

Zugibe claims that the breaking of the crucified condemned’s legs to hasten death was administered as a coup de grâce, causing severe traumatic shock or hastening death by fat embolism. Crucifixion on a single pole with no transom, with hands affixed over one’s head, would precipitate rapid asphyxiation if no block was provided to stand on, or once the legs were broken.

It is possible to survive crucifixion, if not prolonged, and there are records of people who did. The historian Josephus, a Judean who defected to the Roman side during the Jewish uprising of 66-72 AD, describes finding two of his friends crucified. He begged for and was granted their reprieve; one died, while the other recovered. Josephus gives no details of the method or duration of their crucifixion before their reprieve.

It is still a matter of debate whether victims were crucified in the nude or with their loincloths left on. There is no doubt that many (if not most) crucifixion victims were stripped, with or without even a loincloth, as it would have humiliated the victim further. This is one of the elements which made crucifixion notorious, owing to the physical, mental and emotional pain it caused.

While traditionally Jesus and the two criminals are depicted as having a sort of loincloth for modesty (in a few depictions, Jesus even wears a full-length robe, called a colobium), a few very early depictions show the victim either stark naked on the cross or with some loincloth on (also see illustration at left and below right, one of which is a graffito found in Pozzuoli, with the other being a gem found in Syria, dating from the late 2nd-3rd century). As a general rule of thumb, most of these early representations are not depictions made by Christians, who still did not depict the Crucifixion overtly during this time period, but were usually created by non-Christians and/or Gnostics.

While some take the position that Jesus was not spared even a loincloth when He was crucified, some believe that due to Jewish sensibilities, loincloths were left on or provided (it would be fitting to remind here that many people in ancient times did not even wear loincloths; for them, their tunics served as their undergarment). So, until we have any conclusive evidence, the best answer here for the moment would seem to be that it depended on the situation and the location.

The gibbet on which crucifixion was carried out could be of many shapes. Josephus records multiple tortures and positions of crucifixion during the Siege of Jerusalem as Titus crucified the rebels; and the Roman Stoic philosopher Seneca the Younger recounts (To Marcia, On Consolation, 6.20.3): “Video istic cruces non unius quidem generis sed aliter ab aliis fabricatas: capite quidam conversos in terram suspendere, alii per obscena stipitem egerunt, alii brachia patibulo explicuerunt. Video fidiculas, video verbera, et membris singulis articulis singula docuerunt machinamenta: sed video et mortem…” [I see there crosses, not merely of one kind but made in different ways by different people: some hang their victims with head down towards the ground, others drive stakes through their private parts; others stretch the arms out on the gibbet; I see cords, I see whips, and contraptions designed to torture every joint and limb, but I see death as well…]

At times the gibbet was only one vertical stake, called in Latin crux simplex or palus. This was the simplest available construction for torturing and killing the criminals. Frequently, however, there was a cross-piece attached either at the top to give the shape of a T (crux commissa) or just below the top, as in the form most familiar in Christian symbolism (crux immissa). Other forms were in the shape of the letters X and Y.

While the view that Jesus died on a stake has thus been propounded by writers of the nineteenth and twentieth centuries (and is still popular among Jehovah’s Witnesses), second-century writers, such as Justin Martyr and Irenaeus, who were much closer to the event, speak of Him only as dying on a two-beam cross.

In the same century, the author of the Epistle of Barnabas and Clement of Alexandria saw a two-beam shape of the cross of Jesus as foreshadowed in a numerological interpretation of Genesis 14:14, and the first of these, as well as Justin Martyr, saw the same shape prefigured in Moses keeping his arms stretched out in prayer in the battle against Amalek. At the end of the same century, Tertullian speaks of Christians as accustomed to mark themselves repeatedly with the sign of the cross, and the phrase “the Lord’s sign” (τὸ κυριακὸν σημεῖον, to kyriakon sēmeion) was used with reference to a cross composed of an upright and a crossbeam. Crosses of † or Τ shape were in use, even in Palestine, at the time of Jesus.

See here for more in-depth discussion on the shape of Jesus’ cross.

In popular depictions of crucifixion, the condemned is shown with nails in the palms of their hands. Although historical documents refer to the nails being in the hands, the word usually translated as hand, “χείρ” (cheir) in Greek, referred to arm and hand together, so that other words were added when the hand was to be denoted as distinct from the arm, as in “ἄκρην οὔτασε χεῖρα” (akrēn outase cheira, “he wounded the end of the ‘cheir’”, i.e. he wounded her hand).

A possibility that does not require tying is that the nails were inserted just above the wrist, between the two bones of the forearm (the radius and the ulna). The nails could also be driven through the wrist, in a space between four carpal bones. The word χείρ, translated as “hand”, can include everything below the mid-forearm: Acts 12:7 uses this word to report chains falling off from Peter’s ‘hands’, although the chains would be around what we would call wrists. This shows that the semantic range of χείρ is wider than the English hand, and can be used of nails through the wrist.

An experiment that was the subject of National Geographic Channel’s documentary entitled, Quest For Truth: The Crucifixion, showed that a person can be suspended by the palm of their hand. Nailing the feet (or the ankles) to the side of the cross relieves strain on the wrists by placing most of the weight on the lower body.

Another possibility, suggested by Frederick Zugibe, is that the nails may have been driven in at an angle, entering in the palm in the crease that delineates the bulky region at the base of the thumb, and exiting in the wrist, passing through the carpal tunnel.

A footrest attached to the cross, perhaps for the purpose of taking the man’s weight off the wrists, is sometimes included in representations of the crucifixion of Jesus, but is not mentioned in ancient sources. These, however, do mention the sedile (a small piece or block of wood attached to the front of the cross, about halfway down, where the victim could rest) which could have served that purpose.

The question has long been debated whether Jesus was crucified with three or with four nails.

The treatment of the Crucifixion in art during the earlier Middle Ages strongly supports the tradition of four nails, and the language of certain historical writers (none, however, earlier than Gregory of Tours, “De Gloria Martyrum”, vi) favors the same view. The earliest depictions of the subject might also favor this view, as they generally depict the feet of the victim as being separate from each other.

On the other hand, in the thirteenth century, most of Western art (with a few exceptions; see the image to the right, painted by Diego Velázquez in 1632) began to represent the feet of Jesus as placed one over the other and pierced with a single nail. This accords with the language of Nonnus and Socrates and with the poem “Christus Patiens” attributed to St. Gregory Nazianzus, which speaks of three nails.

This depiction of three nails had actually caused some controversy when it was first introduced. For example, in the latter part of the 13th century the bishop of Tuy in Iberia wrote in horror about the ‘heretics’ who carve ‘ill-shapen’ images of the crucified Jesus ‘with one foot laid over the other, so that both are pierced by a single nail, thus striving to annul or render doubtful men’s faith in the Holy Cross and the traditions of the sainted Fathers.’

Archaeological criticism has pointed out, however, not only that two of the earliest representations of the Crucifixion (the Palatine graffito does not here come into account), viz., the carved door of the Santa Sabina in Rome, and the ivory panel of the British Museum, show no signs of nails in the feet, but that St. Ambrose (“De obitu Theodosii” in P.L., XVI, 1402) and other early writers distinctly imply that there were only two nails. However, this does not answer why in Luke 24:39-40 Jesus is said to have shown ‘his hands and his feet’ to his disciples, unless there was some distinguishing mark located there.

St. Ambrose informs us that Empress Helena had one nail converted into a bridle for Constantine’s horse (early commentators quote Zechariah 14:20, in this connection), and that an imperial diadem was made out of the other nail. Gregory of Tours speaks of a nail being thrown (deponi), or possibly dipped into the Adriatic Sea to calm a storm. It is impossible to discuss these problems adequately in brief space, but the information derivable from the general archaeology of the punishment of crucifixion as known to the Romans does not in any way contradict the early Christian tradition of four nails.

Patrick lives in Japan. He supports the Extraordinary Form of the Roman Rite according to the Missal of Bl. Pope John XXIII.

The photo shows the “Crucifixion Fresco” from the fifth century Ancient Church of Saint Mary (the Santa Maria Antiqua). The fresco dates from ca. 741 to 752 AD.

Population And Its Decline

Anybody who has been paying attention has long grasped the truth: under-population, not overpopulation, is our problem. This will soon be true on a global scale; it is already true in most of the developed world. Empty Planet explains why this is undeniably so.

Unfortunately, the explanation is shrouded in confusion and ideological distortion, so the authors are never able to provide a clear message. Instead, they offer rambling, contradictory bromides combined with dumb “solutions” until the reader throws his hands up in despair, as I did. But then I got a stiff drink, finished the book, and now am ready to tell you about it.

The authors, two Canadians, Darrell Bricker and John Ibbitson, offer an apparently complete story. Every part of the world is becoming more urbanized. Urbanization causes a drop in the fertility rate, for three reasons.

First, when off the farm, children are a cost center, rather than a profit center. Second, urbanized women choose to have fewer children. Third, urbanization means atomization of social life, such that the networks in which people were embedded, most of which exercised pressure to have children, disappear, and if replaced, are replaced by friends or co-workers who do not exercise the same pressure. “Family members encourage each other to have children, whereas non-kin don’t.”

These causes of population decline are exacerbated by two other factors not tied to urbanization—the worldwide decline of religious belief, and lower infant and child mortality, which means people don’t have children as insurance. And the end of the story is that when the fertility rate drops far enough, it is, in the modern world, permanent. It is the “fertility trap,” analogous to the well-known “Malthusian trap.”

Why do urbanized women choose to have fewer children (aside from the other two stated reasons, expense and less family pressure)? The authors cite the desire for a career; the desire for autonomy and empowerment; the desire to escape the control of men; and the desire for “crafting a personal narrative.”

All of these things the authors tie to “education,” or, in their unguarded moments and more accurately, “being socialized to have an education and a career.” That is, modernity leads to women choosing to have fewer children, often no children at all, and far fewer children than are necessary to replace the people we have now.

Why the fertility trap? It’s due to two totally separate causes. One is mechanical—if a society has fewer children, obviously there will then be fewer women to bear new children. But the other is social. When there are fewer children, “Employment patterns change, childcare and schools are reduced, and there is a shift from a family/child oriented society to an individualistic society, with children part of individual fulfilment and well-being.”

In other words, it’s not a trap, it’s a societal choice. Interestingly, according to the authors, drops in the fertility rate, and therefore the fertility trap, are not the result of legalized abortion and easy contraception, as can be seen from examples of fertility problems prior to the 1960s.

For example, the birth rate was briefly at less than replacement in much of the West prior to World War II, when contraception was much less common, and abortion very much rarer (it is a total myth that illegal abortion was widespread prior to the modern era, at least in the West).

But abortion and contraception certainly contribute to the fertility trap. That is, it is societal factors that cause the fertility rate to drop, but all else being equal, the easier it is to prevent (or kill) children, the harder it is to climb back up. In any case, the result is the same—fewer people, getting fewer.
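
To make the mechanical half of the “trap” concrete, here is a toy illustration of my own (not from the book): if each generation bears children at a sub-replacement rate, the pool of future mothers shrinks, so births fall generation after generation even if the rate itself never drops again. A minimal sketch in Python, with purely illustrative numbers:

```python
# Toy illustration (mine, not the authors') of the mechanical side of the
# "fertility trap": at a total fertility rate (TFR) below the replacement
# level of roughly 2.1, each generation of potential mothers is smaller than
# the last, so births keep shrinking even if the TFR holds steady.

def project_births(initial_mothers: float, tfr: float, generations: int) -> list:
    """Return the number of children born in each successive generation."""
    births = []
    mothers = initial_mothers
    for _ in range(generations):
        children = mothers * tfr      # children born to this cohort of mothers
        births.append(children)
        mothers = children / 2.0      # roughly half of the children are future mothers
    return births

if __name__ == "__main__":
    # Start from 100 (arbitrary units) of child-bearing women at a TFR of 1.4,
    # a sub-replacement rate of the kind the book describes.
    for generation, born in enumerate(project_births(100.0, 1.4, 4), start=1):
        print(f"generation {generation}: {born:.0f} children born")
```

At a TFR of 1.4, births fall by roughly a third with every generation, before any of the social feedback the authors describe is even added.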

Empty Planet then sequentially examines Europe, Asia, Africa, and South America. There is a great deal of annoying repetition. Nonetheless, there is also much interesting data, all in support of the basic point—population everywhere is going to go down, soon and fast. True, the United Nations predicts that global population will top out at eleven billion around 2100, and then decline.

The authors instead think, and make a compelling case, that the United Nations overstates fertility in the twenty-first century. The authors say, and do a good job of demonstrating why, that population will top out at nine billion by around 2050 (it is seven billion now) and then decline. Some declines will be precipitous and startling—China, currently at 1.4 billion but deep into the fertility trap, will have 560 million people by the end of the century.

Strangely, the authors do not calculate global population estimates around, say, 2150, but eyeballing the numbers, it appears they will be around two or three billion, maybe less—and heading downward, fast.

Bricker and Ibbitson are not kind to overpopulation doomsayers. They note how completely wrong those of the 1960s and 1970s, such as the infamous Paul Ehrlich, have been proven. (Charles Mann does it better in his excellent The Wizard and the Prophet).

Bizarrely, Ehrlich is unrepentant, to a degree that suggests he is unhinged; the authors quote him as saying in 2015, without any reasoning, “My language would be even more apocalyptic today,” and analogizing children to garbage.

They don’t believe modern doomsayers are any more correct. Most just have no factual basis for their claims, which are basically just anti-human claims of a religious nature, and the authors even dare to note the obvious fact that the United Nations, a device primarily used to extract money from the successful economies of the world and give it to the unsuccessful, has a vested interest in exaggerating the problems of the backward parts of the world.

So what problems result from an aging and then declining global population? Economic stagnation is what the authors focus on. This is driven by less consumer demand, but also, less visibly but more importantly, by less dynamism.

Old people are takers, not makers. Moreover, they don’t do anything useful for driving society forward, let’s be frank. Not that the authors are frank; they skip by the dynamism problem without much comment, though at least they acknowledge it. But the reality is that for human flourishing, the dynamism of the young is everything, and far more important than consumer demand.

One just has to think of any positive accomplishment that has changed the world, in science, art, exploration, or anything else. In excess of ninety percent of such accomplishments have been made by people under thirty-five. (Actually, by men under thirty-five, for reasons which are probably mostly biological, but that is another discussion).

The simple reality is that it is the young who accomplish and the old who do not. And when you have no young people, you have no accomplishments. Our future, on the current arc, is being the Eloi; hopefully there will be no Morlocks.

Governments from Germany to Iran recognize this problem. The authors give numerous examples, all failures, of trying to resolve the problem by, in effect, begging and paying women to have children. Even here, the authors feel obliged to tell us “The idea of governments telling women they should have more babies for the sake of the nation seems to us repugnant.”

We are not told why that should be so, probably because it is obviously false, but regardless, it is clear that a modern government merely instructing or propagandizing women isn’t going to do the trick.

What is the authors’ solution, then? They don’t have one. Well, they have a short-term one, or claim to. Much of the back half of the book is taken up with endless variations on demanding that the West admit massive amounts of Third World immigrants.

The claimed reason for this is necessity—without immigration, Europe and North America will not have enough taxpayers to support the old in the style they desire. They realize the disaster that’s befallen Europe by admitting alien immigrants with nothing but their two hands. (They claim to reject the Swedish “humanitarian” model. But all their soaring language of untethered and unexplained moral duty implicitly endorses the humanitarian model).

Instead, they recommend the Canadian system to America, where only the cream of the crop, educated and with job skills, is admitted—but we must, must, must immediately admit no fewer than 3.5 million such immigrants every year.

And, of course, they fail to point out that the cream of the crop is by definition a tiny percentage of the overall number of immigrants, so how exactly we are going to welcome only these worthwhile immigrants is not clear, especially if other countries are competing for them.

Nor do the authors point out that at best, this is a short-term solution—if every country in the world will soon have a less-than-replacement birth rate, emigration will soon enough become rare, so no amount of competition will attract enough people.

Therefore, their “solution” is no solution at all, and beyond this, Bricker and Ibbitson have nothing to offer, except muttering about how it’ll be nice to have a cleaner planet when there are no people to enjoy the clean planet.

I note that the authors do not tell us how many children they have, which seems highly relevant. If you are going to be a prophet, best inspect your own house, or acknowledge that others will find it relevant. If you dig, Bricker has one child, a daughter. Ibbitson appears to have no children. I cannot say why, of course, and it would be unfair to assume a selfish choice.

But whatever the reason, it is undeniably true that as a result they have less investment in the future than people with children. (Since you ask, I have five children. I am part of the solution, not part of the problem.) Maybe this is why finding a solution isn’t very important to them.

The book has many annoying inaccuracies that seem to be endemic among this type of popular writing, where editors appear to be permanently out to lunch.

It is not true that the nursery rhyme “Ring Around the Rosie” refers to the Black Death. The authors offer a half-page or so parsing the rhyme, but that’s an urban legend—the rhyme first appeared around 1800. (Even Snopes, the left-wing political hack site notorious for lying propaganda, is correct on this, probably because there is no political element).

The word “dowry” only refers to payments made to the groom’s family; similar payments made to the bride’s family are “bride price.” The G.I. Bill did not create the American interstate highway system. The term is “cleft palate,” not “cleft palette.”

India’s economic stagnation for decades after independence was not due to “protective tariffs;” it was, as everybody who is not a Marxist admits, due to socialism, exacerbated by refusal of outside capital, along with the Permit Raj. (Tariffs make perfect sense for many developing countries that rely on import substitution to grow their economies; both Britain and the United States used them extremely successfully.)

The fifteenth-century Portuguese caravel was not based on Muslim technology. The wave of migrants into Europe that peaked (maybe) around 2016 was economic, not driven by war, and not a single person in Europe believes what the authors repeatedly claim, that most of those people will return to their countries of origin soon. Or ever.

Sloppiness of this type makes the reader wonder about the other, more critical, factual claims in the book.

So that’s Empty Planet. All of it could have been said in twenty or thirty pages. On the surface it’s a pat story, though one without a happy ending. That’s not for the authors’ lack of trying to be happy. Normative judgments abound, all of them oddly in tension with the gloomy top-level attitude of the book toward the problem of under-population.

Thus, the authors assume that large populations are necessarily terrible for those who live in them; adjectives such as “miserable” abound for any people born in a high birth-rate country. Not for them any acknowledgement of Angus Deaton’s point in The Great Escape that people in poor countries are generally very happy.

All population control is referred to with adjectives such as “beneficent.” We are didactically instructed that “Sex education and birth control [are] good things in and of themselves.” And in what may be the single most clueless paragraph in a book chock full of them, the authors offer this:

“Small families are, in all sorts of ways, wonderful things. Parents can devote more time and resources to raising—indeed, cossetting—the child. Children are likely to be raised with the positive role models of a working father and working mother. Such families reflect a society in which women stand equally, or at least near equally, with men in the home and the workplace. Women workers also help to mitigate the labor shortages produced by smaller workforces that result from too few babies. It isn’t going too far to say that small families are synonymous with enlightened, advanced societies.”

Given that the entire point of the book is that small families are a disaster for humanity, even though they try to deflect this obvious conclusion by unpersuasive and unsupported claims such as, “Population decline isn’t a good or a bad thing,” this type of thing suggests, to be charitable, cognitive dissonance.

Not to mention that cosseting children is not a good goal, although it’s not surprising that two people with one child between them think so, and that sending more women to work outside the home when sending women to such work is part of the problem seems, um, counter-intuitive. But as we will see, this paragraph gives us a clue to what is really driving human population collapse.

Let’s try to figure out what’s really going on, because despite seeming to be so, the authors’ story is not complete. If you look at the story from another angle, not the one of received wisdom, strange unexplained lacunae appear within the text.

The fertility rate in the United States and Britain began to drop in the early 1800s, but only at the end of the 1800s on the Continent, even though urbanization came sooner in the latter, and the United States was almost all agricultural in the early 1800s. “In France, oddly, fertility declines were already underway by the late 1700s. No one is sure why. . . .” “Fertility rates appear to have increased in France and Belgium during the Second World War, even though both countries were under German occupation or control and supplies such as food and coal were increasingly scarce.”

Some countries that are largely poor, uneducated, and not urbanized (Brazil, Mexico, Uruguay) have extremely low fertility rates, while other, very similar-seeming countries still have high rates (Paraguay, Honduras, Guatemala). Uneducated Brazilian favela dwellers, normally the type of people who have lots of children, have experienced a big drop in fertility.

And on, and on, strange tidbits that jut out from the authors’ narrative, not fitting into the just-so story of urbanization followed by an inevitable and necessary choice to stop having children.

What could explain all these facts? The authors certainly don’t know. But I do. What brings together all these seeming outrider facts, and in the darkness binds them, is the inevitable human tendency toward selfish self-interest. Once this was universally recognized as vice, but it has always been recognized as a large part of what drives human beings unless we struggle against it.

The creation of virtue, through self-discipline, self-control, and, in Christian thinking, caring for others at our own expense, aiming at true freedom and the common good, was once the ideal.

Virtue helped control our baser impulses, and was the goal toward which a good and well-formed person was expected to strive and to lead others. It was, and is, the opposite of “living as one likes,” of the quest for supposed emancipation.

Having children is among the least selfish and most self-sacrificing things a woman, and to a lesser extent a man, can do; thus, when being selfish and self-centered both become exalted, we have fewer children. It is not a mystery.

How did we get here? As the result of two late-eighteenth-century developments.

The first, the fruit of the Scientific Revolution and the Industrial Revolution, is wealth. I have pondered whether a rich society can ever stay a virtuous society, and population decline is merely a subset of this question.

The second, the fruit of the Enlightenment (which had nothing to do with the Scientific Revolution or the Industrial Revolution), is the exaltation of individual autonomy, of self-actualization as the goal of human existence.

The problem with urbanization and its impact on birth rates, especially in the West, is not something inherent to urbanization, but that city dwellers are more wealthy (or at least exposed to wealth) and have, in practice, fallen prey more easily to Enlightenment ideas.

Either of these anti-virtue developments can crash fertility by itself. Combined, they are lethal to human progress. For example, a rich society, such as Venice in the 1600s, may never undergo the Enlightenment, but wealth alone will lead to depopulation, as virtue fades and pursuit of self becomes exalted.

And a poor and not urbanized society, such as late 1700s France or early 1800s America, can experience an ideological erosion of virtue solely through embracing Enlightenment principles. Or, to take a more modern example, the South American countries with high rates of fertility are those that are still strongly Christian, and hew to the Christian virtues.

The authors themselves note this correlation, but gloss over the implications. Similarly, poor Brazilians are not converted to the gospel of self directly by Rousseau and Locke, or by wealth, both of which they totally lack, but indirectly by both—by obsessive watching of telenovelas, the plots of which, as the authors note, “involve smaller families, empowered women, rampant consumerism, and complicated romantic and family relationships.”

For a final set of proofs, it is obvious from Empty Planet’s own statistics, though apparently not obvious to the authors themselves, that as the material blessings of the West finally spread around the world, fertility rates drop in tandem with adoption of the West’s techniques for acquiring wealth, further exacerbated when countries adopt Enlightenment values.

And to the extent the country’s elite push back against Enlightenment values, such as in Hungary and Russia, some progress can be made in increasing birth rates. Similarly, when a country’s people experiences shared challenges, social pressure against atomized Enlightenment individual autonomy can increase greatly, resulting in more children.

Such was apparently the case in wartime Belgium and France. It is also why Jews in Israel, alone among advanced economies, have a birthrate far in excess of replacement, even if you exclude the Orthodox. They value something beyond their own immediate, short-term desires, which counterbalances the natural human tendency towards vice.

We can now explain what the authors could not. The real, core reason for population decline is that children reduce autonomy and limit the worship of self. Children reduce autonomy even more for women than men, as a biological reality, so as women are culturally indoctrinated that they must have autonomy, they choose to have fewer children. (Men also want more autonomy, of course; that is why men support legal abortion more than women).

True, women don’t really get freedom as a result; for the most part, they get the opportunity to join the rat race for more consumer goods, and as is easy to demonstrate, they are no happier as a result. Probably most are far less happy, and very often, if not nearly always, regret having not had children, or more children.

Modern societal structures make this worse. To take a bitter, if funny, example, eating dinner with a group of young couples in Brussels, who between the twelve of them have two children, the authors note, “Most of the men are students or artists, while the women work and pay the rent.”

When men won’t fulfill their proper role as breadwinner and protector, it’s no wonder that women find bearing and raising children less attractive, totally aside from their own personal desire for autonomy.

And, finally, back to consumerism, the belief among both men and women that both they and their children must have the latest and mostest consumer goods, and that if something has to give to make that possible, it should be bearing children, is yet another manifestation of the cult of self.

The problem of declining population is fatal for any progress for the human race, so, naturally, given my desire to organically remake human society to flourish, expand, and accomplish, it’s necessary to solve this problem. (Not just for me, of course—any political program must deal with the underpopulation bomb).

I don’t think this is a narrowly resolvable problem—that is, there is no technical solution that does not also involve remolding human society, or at least some human societies. Certainly, some structural measures can and should immediately be taken in any well-run society.

Economic incentives are part of it, including cash payments to mothers of children, increasing by number of children, and increasing to the extent they stay home to take care of the children. Societies where women are expected both to do all the work of raising children and to earn money, notably Japan, Korea, and Italy, have among the lowest birth rates. Cash isn’t an adequate substitute for family frameworks, but it can help at the margin. Perhaps more, if enough cash is devoted to it.

Hungary, for example, yesterday announced a massive package of such incentives, including that women who have borne and raised four or more children are permanently exempt from all income tax. There should also be an enforced absolute ban on abortion in all circumstances, as well as on no-fault divorce (and the party at fault in a divorce should face severe financial penalties).

Other structural incentives for women to bear and raise children should similarly be put into place. Those are not only cash-based—for example, the Hungarian initiative also raises the social credit, as it were, of child-bearing and child-rearing. A woman who is called “breeder” by her friends when she says she wants a second or third child is less likely to do so than one who knows she will instead be admired and envied by both friends and strangers.

But all technical structural measures are completely inadequate without genuine societal change. You have to create a feedback loop. That’s how we got here, after all—more atomization leads to more atomization. Under the right circumstances, more virtue can lead to more virtue. It seems to me that the only hope for this is a societal rework, which, not coincidentally, is precisely what I am pushing.

The problem is that my end-state doesn’t comport with inherently selfish human desires. Thus, a feedback loop is harder to create and maintain. It probably requires some external goal for a society, combined with an outward-looking optimism that cannot be artificially created or maintained, but must be a groundswell within society, beginning with a virtuous and self-sacrificing ruling class (no points for guessing if that’s what we have now).

I suspect the only way forward is to provide such a societal goal, one that supersedes selfishness, while permanently ending the failed Enlightenment experiment on every level, and creating a new program that, in many ways, resembles earlier Western structures.

Even so, I am not certain it is possible to create an advanced, wealthy, urban society, not dedicated to extreme personal autonomy, with a high birth rate. But let’s say it is, and we can get there, and global population continues to expand, or rebounds, to more than current projections.

Considerable increases in current human population, maybe to fifteen or twenty billion, probably would be good for humanity overall. True, large populations can be challenging, and can, in certain circumstances, result in massive problems. Some of those circumstances are physical—it would be very difficult to have 100 million people live within 50 miles of the Arctic Circle.

But most of those circumstances are cultural—an inferior culture makes it much harder to provide for everyone. The converse, though, is that if you change your culture, your opportunities expand. (Nor should we forget that England created the modern world when her population, at the time of Malthus, was nine million in a world population of a billion, so small numbers can do great things, and culture is everything).

I am a big believer in, to use Charles Mann’s words, the ability of Wizardry to provide solutions to challenges such as increasing population. If that is true, an increasing population with many young people is a dynamic population, and as long as global culture is not deficient, but rather contains much excellence, then having not an empty planet, but a filled planet, is highly desirable.

Therefore, I am not as pessimistic as Bricker and Ibbitson. But we will all be long dead before we find out who is right, so all we can do is try to lay the groundwork for our children, and their children—and to make sure all those people exist.

Charles is a business owner and operator, in manufacturing, and a recovering big firm M&A lawyer. He runs the blog, The Worthy House.

The photo shows, “The School Walk,” by Albert Anker, painted in 1872.

The Soviet Search For Immortality

Given the rumors, Russians often wish all those theories about our super-soldiers and X-Men skeletons were true. Alas, the Soviet Union only went as far as trying to make immortal politicians (not as cool – but still cool, right?)

Not long before the death of Vladimir Lenin in 1924, a clandestine society emerged in Russia. Its members would conspire to meet in safe houses where they summoned volunteers to take part in blood transfusions. Creepy, right? You may be forgiven for thinking this was a sect or a religious cult, but in fact, the organization was run by a very sane Bolshevik higher-up, Alexander Bogdanov (real name Malinovsky), close Lenin ally, co-founder of the party and noted scientist behind the Socialist Institute.

“The great visionary”, as he was called by followers, was trying to unlock the secret to immortality.

Bram Stoker’s ‘Dracula’ had found great favor with readers in the Russian Empire, including Nicholas II himself. This fascination carried over into Socialist times. The meanings of blood and sacrifice enjoyed mystical fervor in a country that had just lost two million people in a war the likes of which the world had never seen in scale or efficiency of brutality.

“Why couldn’t they just resurrect him?”, wrote many in army circles about the 1924 demise of Vladimir Lenin. The idea that a figure of such colossal stature could die was unfathomable.

Lenin appeared to have been worn down by stress, exhaustion and malnutrition – all leading to a whole bouquet of symptoms afflicting nearly every old-school ruling-class Bolshevik barely in his mid-thirties. They hadn’t even had time to properly start ‘emancipating the world from capitalist tyranny’. Something had to be done.

It is no secret that Russia at the dawn of the Bolsheviks was a highly experimental country. No stone was left unturned in the search for the perfect Russian – including the famous sex reforms.

Given blood’s mystical allure, some scientists of the time also theorized that a person’s entire personality, soul and immune system were contained in their blood.

Bogdanov was such a scientist. Not only that – he was a polymath and an avid stargazer with a deep fascination for Mars, which he envisioned as a sort of socialist utopian society of blood brothers. These ideas laid the foundations for his novel, ‘The Red Star’, about a scientist who travels to the Red Planet, and finds out that the Communists there had almost attained immortality, all thanks to this culture of blood.

Lenin was disappointed with Bogdanov’s preoccupation with fantasy and sci-fi, which led to a rift between the two: Lenin believed that Bogdanov was making people chase foolish dreams instead of focusing on the work of forging the Revolution. But Bogdanov was too useful at the time, being the second figure in the party – the man had directed the Bolsheviks during Lenin’s exile.

Even so, their camaraderie could not survive their differences: Lenin advocated dialogue and cooperation, including participation in the Duma – Russia’s legislative body. Bogdanov wanted no part in it, leaning even further left than Lenin himself.

Together with his friend, Leonid Krasin, Bogdanov set up a military wing under the RSDLP’s  Central Committee. Money from its expropriations would be distributed to the various organizations controlled by Lenin and Bogdanov. The latter was furious that more money seemed to be going to Lenin’s cause.

Bogdanov would soon be expelled from the Workers’ Party. The two were split on their interpretation of Marxism, and Lenin’s works had begun to reflect that, calling out Bogdanov for his “bourgeois” outlook. At that point, even Lenin’s family thought he could’ve taken it down a notch. But the Bolshevik was having none of it – even banning Bogdanov’s novels from being read in the household.

Bogdanov, on the other hand, thought of Lenin’s ideals as those of ‘absolute Marxism’ – “the bloodsucker of the Old World,” turning its followers into vampires, chief among them Lenin. Bogdanov had lost his party, his job and his credibility while exchanging literary jabs with people he considered his comrades.

After the devastation of WWI, however, a glimmer of light had appeared: “science can do anything” was to be the mantra of the 1920s-30s.

Mikhail Bulgakov had then just written his brilliant piece of sci-fi satire, ‘A Dog’s Heart’, which told of a stray dog surgically transformed into a man – another telltale sign of the times. It became obvious that science was beginning to take inspiration from fiction, with Bogdanov as the main proponent.

Bogdanov worked without the benefit of what we know about blood today – from blood groups and the Rh system to a whole host of other factors. His science was fraught with danger, and he himself was its most frequent guinea pig.

The blood would be taken from patients, poured into a sterile container and mixed with an anti-clotting agent, before the transfusions took place. They would have to be fast as well, to prevent bacteria forming.

Bogdanov’s fan base grew as this borderline-mad experimentation began to show signs of progress: Bogdanov himself was said to have begun looking 5-10 years younger, while his wife’s gout also began showing signs of improvement. People couldn’t believe their eyes!

It wouldn’t take long before Stalin himself was bitten by the science bug, leading him to call upon Bogdanov and his experimentation, even suggesting he rejoin the party from which Lenin had expelled him.

Stalin was certainly no Lenin, and believed he needed every edge if (when) the next World War was going to take place. No money was spared to find a military application for the transfusions.

The Institute for Blood Transfusion was set up in 1926 on the leader’s orders, with Bogdanov as its director. The fascination with the idea of blood brotherhood expressed in his Martian sci-fi novel would finally begin to bear fruit.

Tragically, the mad scientist and sci-fi Bolshevik had not had enough time to properly study the effects of his rejuvenation procedures. Nothing was then known about erythrocytes or plasma, or about the checks and practices in place today for a successful transfusion.

Bogdanov was very interested in whether a person’s entire immune defenses were also transferred through blood. It seemed that a young man suffering from tuberculosis was the perfect candidate to test that theory.

A liter of blood was exchanged between the patient and the ‘doctor’.

It didn’t help that Bogdanov had been comparing his own blood to that of Dracula – immune to human afflictions. That twelfth transfusion would become his last. In the space of three hours, both started to suffer a steady deterioration: fever, nausea, vomiting – all signs of a serious poisoning.

However, Bogdanov decided to keep the transfusion under wraps. On that excruciatingly painful day, he felt even worse than the ailing Kaldomasov – the tuberculosis sufferer. He nonetheless refused treatment, in a vain attempt to understand what had happened.

Bogdanov’s kidneys gave out in 48 hours, resulting in death from a hemolytic reaction. His last words, according to Channel 1’s interview with his descendant, the economist Vladimir Klebaner, were: “Do what must be done. We must fight to the end.” He died on April 7, 1928, aged 54.

But what of the student? The 21-year-old had lived. The doctors couldn’t tell why, even after another last-minute transfusion had failed to save Bogdanov from death. It would later become apparent that this final procedure wasn’t the culprit (both he and Kaldomasov were type O) – but the 11 preceding ones had been, creating antibodies in Bogdanov to the degree that even the correct blood would have been rejected. That’s all we know.

Stalin was very angry. Having pledged tens of thousands of rubles toward Bogdanov’s blood institute, the Soviet leader now began to think that all scientists were charlatans and extortionists.

In the end, however, it was thanks to Bogdanov’s work that Soviet hematology got a much needed push forward.

The photo shows, “Ivan the Terrible and his son,” by Ilya Repin, painted in 1885.

Lenin: The Giant Mushroom

In 1991, just months before the collapse of the USSR, Soviet audiences witnessed a shocking scene on the television program Pyatoe Koleso (The Fifth Wheel). Two serious-looking men – Sergey Sholokhov, the host, and his guest Sergey Kurekhin, an underground musician and writer introduced as a “politician and actor” – were sitting in a studio discussing the October Revolution of 1917.

Suddenly, Kurekhin offered a very interesting hypothesis – that Vladimir Lenin, the Bolshevik leader, was not a human being but a mushroom.

Kurekhin started with a rambling discourse on the nature of revolutions and his trip to Mexico where, in ancient temples, he had seen frescoes closely resembling the events of 1917. From there, he moved on to the author Carlos Castaneda, who described the practice among Central American Indians of using psychotropic drinks prepared from certain types of cacti.

“Apart from cacti, Castaneda describes mushrooms as special products with a hallucinogenic effect,” Kurekhin continued and then quoted Lenin’s letter to leading Marxist Georgi Plekhanov: “Yesterday I ate many mushrooms and felt marvelously well”. Noting that Russia’s fly-agaric mushroom has hallucinogenic effects, Kurekhin assumed that Lenin was consuming these kinds of mushrooms and had some kind of psychedelic, mind-altering experience.

It was not only Lenin who dabbled in such fungi, but other Bolsheviks as well, Kurekhin claimed. “The October revolution was made by people who had been consuming hallucinogenic mushrooms for years,” he said with a poker face. “And Lenin’s personality was replaced with that of a mushroom because fly-agaric identity is far stronger than a human one.” Therefore, he concluded, Lenin became a mushroom himself.

After that sensational statement, the program went on for another 20 minutes, with Kurekhin and Sholokhov citing endless “evidence” of Lenin’s affinity for mushrooms, starting from his passion for collecting fungi and going so far as to compare a photo of an armored vehicle Lenin once posed on to fungal mycelium.

At some point, both couldn’t help but laugh after stating that the Soviet hammer and sickle symbol was, in fact, a combination of a mushroom and a mushroom picker’s knife. But even the laughter didn’t prevent thousands of people from taking the program seriously.

“Had Kurekhin been speaking of anyone else, his words would easily have been dismissed as a joke. But Lenin! How could one joke about Lenin? Especially on Soviet television,” Russian anthropologist Alexei Yurchak said, explaining the gullibility of many Soviet viewers. He emphasized that viewers didn’t necessarily believe that Lenin was a mushroom – but they treated Kurekhin as a serious researcher, calling the television station and writing letters demanding that it confirm or refute the idea of the Bolshevik leader being a fungus.

Sergei Sholokhov, who made the program together with Kurekhin, later said: “The day after the show aired, a delegation of old Bolsheviks went to our local Communist party boss who was in charge of ideology and demanded an answer – was Lenin a mushroom or not. She answered with a fierce ‘No!’ claiming that ‘a mammal cannot be a plant’.”

Both he and Kurekhin were quite shocked by such an answer, Sholokhov notes. On the other hand, Sholokhov may have made the story up – just as he and Kurekhin (who died in 1996) did with the TV show itself.

It was Kurekhin, a humorous hoaxer, who came up with the idea. In the late 1980s and early 1990s, the world of Soviet media was changing, and as journalists enjoyed more freedom, some of them took to talking nonsense.

As Kurekhin’s widow Anastasia recalled, “Once we saw a TV show on the death of Sergey Yesenin (the Russian poet who committed suicide in 1925). The host built his ‘proof’ that Yesenin had actually been killed on absolutely absurd arguments. They showed photos of the poet’s funeral and said: ‘Look, this man is looking this way and that man is looking the other way, so it means that Yesenin was killed.’” Kurekhin saw it and said to Anastasia: “You know, you can prove anything using such ‘evidence’.” And so he did.

Alexei Yurchak explains that the hoax and people’s reactions to it were a good illustration of how people, no matter where they live, tend to trust the media without checking facts. “If there’s something in the media, there must be something to it,” Yurchak wrote. Kurekhin’s provocation was a hilarious way to prove how easy it is to feed people the most bizarre nonsense if you sound confident enough.

 

Oleg Yegerov writes for Russia Beyond, through whose courtesy this article is provided.

The Very Idea Of Technology

Whenever people are trying to define the modern age, there’s an inevitable phrase that gets tossed around. We hear it all the time – “We are an age of technology.”

And when people are asked what this phrase means, they invariably generate a list – cars, televisions, space probes, computers, the microchip – all things that were mostly science fiction just a hundred years ago. How did we come so far, so quickly?

But are we technological because we have more gadgets than, say, the ancient Egyptians, who, after all, did build the pyramids? Yet our culture is different from that of the ancient Egyptians. How so?

Our age is technological not because of gadgets, but because of the idea of technology. The gadgets are a mere by-product. The way we think is profoundly different from all previous human civilizations.

We perceive things in a systematic way. We like to build conceptual structures. We like to investigate and get at the root causes of things. We like to figure out how things work. We see nature, the earth, the universe, as a series of intersecting systems. And this difference is the result of technology.

Essentially, we are dealing with two Greek words: techne and logia. Techne means “art,” “craft,” or “handiwork.” But logia is more interesting. It means “account,” “word,” “description,” and even “story.”

It is the root of other important words in English, such as “logistics” and “logical.” And it even reaches into the spiritual realm, where “Logos” is intimately connected with the mystery of God in Christianity, where God (Logos) is made flesh in Jesus Christ.

Therefore, technology is not really about gadgets. The word actually means “a description of art,” or “a story of craft, handiwork.” Anything we create is technology. Be it the microchip, a film, a novel, an airplane, or a poem.

But this is only the first layer. We need to dig further. Why do we use a Greek word in the first place? This question lets us dig right down to the foundations.

The word is Greek because the idea is Greek. This is not to say that other cultures did not have technology; they certainly did; the Pyramids are certain proof of that, as are the Nazca lines in the desert.

However, we have already established that technology is not about gadgets, or objects that we create. It is a particular mind-set.

Technology is visualizing the result, or perhaps uncovering that which lies hidden within our imagination. It really is still about giving an account of art, about what we can do with our minds.

But how is all this Greek?

The idea of technology was given to us by one specific person – the Greek philosopher, Aristotle (384-322 BC).

At the age of twenty, Aristotle found himself in Athens, listening to the already famous Plato (428-348 BC).

But the pupil would become greater than the master. Interestingly enough, Aristotle too had a famous pupil – Alexander the Great. Aristotle certainly had the ability to transform the way people thought – down to the present.

It was Aristotle who stressed the need not only for science, but for a conceptual understanding of science. It was not enough just to be able to do things – the sort of craftsmanship passed down from father to son in his own day, as it still is in many parts of the world today.

It was important to understand how things were – why they functioned the way they did.

It was Aristotle who taught us to break down an object into its smallest parts so we can understand how it is built and how it operates. Where would science be today without this insight – which we now take as common sense?

But before Aristotle, it was not common sense. The common sense before his time was to accept things the way they were, because the gods had made them that way, and who were we to question the will of the gods? This was the pre-technological mindset.

Aristotle, like Plato before him, taught that nature and human beings behave according to systems that can be recorded and then classified, and understood and then applied. These categories provided mental frameworks within which we could house our ideas.

Therefore, if nature is a system (and not mysterious and unknowable), then it can be understood. And if it can be understood, it can be controlled. And if it can be controlled, then we can avoid being its victims.

Our ability to classify, categorize, and explain – in short, our technology – is the invention of Aristotle. Before he came along, we were only groping in the dark – if we dared grope, that is.

 

The photo shows, “Cyclist Through the City” (“Ciclista attraverso la città”), by Fortunato Depero, painted in 1945.

Bertrand Russell: Preliminary Remarks

Bertrand Arthur William Russell was born on May 18, 1872 into a privileged family. His grandfather was Lord John Russell, who was the liberal Prime Minister of Great Britain and the first Earl Russell. Young Bertrand’s early life was traumatic. His mother died when he was two years old and he lost his father before the age of four.

He was then sent to live with his grandparents, Lord and Lady John Russell, but by the time he was six years old, his grandfather also died. Thereafter, his grandmother, who was a strict authoritarian and a very religious woman, raised him.

These early years were filled with prohibitions and rules, and his earliest desires were to free himself from such constraints. His lifelong denial of religion no doubt stems from this early experience. His initial education was at home, which was customary for children of his social class, and later he went to Trinity College, Cambridge, where he achieved first-class honors in mathematics and philosophy.

He graduated in 1894, and briefly took the position of attaché at the British Embassy in Paris. But he was soon back in England and became a fellow of Trinity College in 1895, just after his first marriage to Alys Pearsall Smith. A year later, in 1896, he published his first book, entitled German Social Democracy, which he wrote after a visit to Berlin.

Russell was interested in all aspects of the human condition, as is apparent from his wide-ranging contributions, and when the First World War broke out, he found himself voicing increasingly controversial political views. He became an active pacifist, which resulted in his dismissal from Trinity College in 1916, and two years later his views even landed him in prison. But he put his imprisonment to good use and wrote the Introduction to Mathematical Philosophy, which was published in 1919.

Since he no longer had a teaching job, he began to make his living by giving lectures and by writing. His controversial views soon made him famous. In 1920, he visited newly formed Soviet Russia, where he met many of the famous personalities of the Russian Revolution, which he had initially supported.

But the visit soured his view of the Socialist movement in Russia and he wrote a scathing attack that very year, entitled The Practice and Theory of Bolshevism. By 1921, he had married his second wife, Dora Black, and began to be interested in education. With Dora he created and ran a progressive school and wrote On Education (1926) and, a few years later, Education and the Social Order (1932).

In 1931, he became the third Earl Russell; five years later, in 1936, he divorced and married his third wife, Patricia Spence. By this time, he had become extremely interested in morality, having written about the subject in his controversial book Marriage and Morals (1929).

He had moved to New York to teach at City College, but he was dismissed from this position because of his views on sexuality (he advocated a version of free love, where sex was not bound up with questions of morality). When Adolf Hitler came to power in Germany, Russell began to question his own pacifism and by 1939 had firmly rejected it, and campaigned hard for the overthrow of Nazism right to the end of the Second World War.

By 1944, he was back in England from the United States; his teaching position at Trinity College was restored to him, and he was granted the Order of Merit. He won the Nobel Prize for Literature in 1950. During this time, he wrote several important books, such as An Enquiry into Meaning and Truth (1940) and Human Knowledge: Its Scope and Limits (1948).

His best-known work from this time is History of Western Philosophy (1945). As well, he continued writing controversial pieces on social, moral and religious issues. Most of these were collected and published in 1957 as Why I Am Not A Christian.

From 1949 onwards, he was actively involved in advocating nuclear disarmament. In 1961, along with his fourth and final wife, Edith Finch, he was again put into prison for inciting civil disobedience to oppose nuclear warfare. He spent his final years in North Wales, actively writing to the very last. He died on February 2, 1970.

His range of interests took in the various spheres of human endeavor and thought, for not only was he engaged with mathematics, philosophy, science, logic and the theory of meaning, but he was deeply interested in political activism, feminism, education, nuclear disarmament, and he was a ceaseless opponent of communism. His ideas have greatly influenced the world we live in.

So pervasive is his influence that contemporary culture has seamlessly subsumed the ideas he introduced so that we no longer recognize his impact.

For example, his ideas have forever changed, on a fundamental level, the way philosophy is done, the way logic is dealt with, the way mathematics and science are understood, the view we hold of morality, marriage, the nuclear family, and even the various attempts to stop the spread of nuclear arms – all these concepts owe their beginnings to Russell.

At the very heart of Russell’s thought lies the concept, first elucidated in The Principles of Mathematics, that analysis can lead to truth. By analysis he means the breaking up of a complex expression or thought in order to get at its simpler components, which in turn will reveal the meaning or truth.

Thus, the method involves moving from the larger to the more specific, from the macro to the micro. Russell arrived at this process by suggesting that mathematics and natural languages derived from logic. He extended his approach and stated that the structure of logic could be a useful tool in helping us understand the human experience, which in turn would lead to the working out of disputes.

Thus, in A History of Western Philosophy he shows how the structure of logic is consistent with the way the world works, namely that reality itself is paralleled in logic.

Therefore, this blending of logic and the need to arrive at the truth of reality highlights the second important concern for Russell, namely, metaphysics. In fact, both logic and metaphysics unite and give philosophy its unique approach to uncover truth, which for Russell leads to the understanding of the universe and us. It is this concept that he explores fully in Our Knowledge of the External World.

Although logic is essential to Russell’s philosophy, it is not synonymous with it. Rather, philosophy is to be seen as a larger construct, which certainly begins with logic, but ends with mysticism. It is certainly true that Russell denied the authority of organized religion all his life and preferred to live a life outside prescribed dogmas.

Nevertheless, he recognized the essential mystery that surrounds life, both in its particular representation in the life of humankind and in the larger sphere, namely, in the life of the universe. It is precisely this mysticism that disallowed him an ultimate denial of God’s existence, and therefore Russell never called himself an atheist; rather, he labeled himself an avowed agnostic, or someone who does not know, and cannot know, whether God exists or not.

Thus, in philosophy he found a quest far greater than that embodied by religion or science, and he described this process in Mysticism and Logic.

 

The photo shows, “New York Movie,” By Edward Hopper, painted in 1939.

Milk And The Milking Industry

I hate milk. I find many of the recipes in this book frankly loathsome, were I to try them, which I won’t. On the other hand, I like science and history (and ice cream). So despite my stomach churning at some of the recipes and descriptions, I actually enjoyed reading this book.

Milk begins with history—the history of milk and milk animals around the globe. Americans, of course, focus nearly exclusively on cows and cows’ milk, but Mendelson points out that on a global scale cows are a relatively recent and relatively uncommon source of milk and milk products.

She mixes this history with science—the very different composition of different types of milk, along with the differences in products that result both from different types of milk and from how that milk is treated, whether by culturing with microorganisms or by mechanical alteration. The result, of course, is a huge range of milk products, ranging from the simple (naturally cultured yogurt; simple cheeses) to the complex (modern milk as sold in the supermarket; aged cheeses; butter).

Milk then moves to recipes, grouped into those based on fresh milk (and cream); yogurt; cultured milk (and cream); butter and true buttermilk; and fresh cheeses (aged cheeses are beyond the scope of the book).

Mendelson offers various recipes in each grouping, interspersed with more history and science, typically woven around the recipe immediately at hand. This is a successful approach for engaging and educating the reader (even if, as I say, I find most of these somewhere between not-appealing and nasty, with the exception of some sweetened items).

All of this is well written. Milk is an excellent book and I will be sure to use my additional knowledge to be even more of a bore and chore at cocktail parties. But for me Milk was primarily a thought-provoking book, and not really about milk, or food. Initially, my thought was sparked by Mendelson’s measured and even-handed approach to controversies such as “raw” (i.e., unpasteurized) milk, which is largely forbidden by regulation in the United States.

Mendelson notes that raw milk probably isn’t the wonder food that some think, but neither is it impossible to safely produce and sell raw milk, despite what government functionaries and their allies in the food and health establishment, the “experts,” are always telling us.

Mendelson also covers the analogous controversy over fat in milk and butter—that is, “experts” told us that milkfat was to be avoided on peril of our health and our lives, and now we are told that is false.

We are told, instead, that those “experts” wholly misunderstood and grossly simplified the actual chemistry of milk and that they knew nothing at all, despite their claims to the contrary, about how it actually affects the human body. We are now told that milkfat is good for cardiovascular health and keeps us thin, after literally decades of being told the opposite, with anyone who disagreed considered some combination of demon and fool. Again, the “experts” keep cropping up.

What drives their wholly incorrect conclusions, and the demand for universal submission to them?

We all have personal familiarity with the costs of these wrongheaded directives. Some costs are merely reductions in personal utility, which seem unimportant, but are not nothing, even if they are not easily captured in statistics. For example, my grandfather spent decades being forbidden by his wife, for his own good, to eat both butter and eggs, which he loved, and instead being required to eat “healthy” margarine, which he hated.

As Mendelson points out (and as has become even more clear since this book was published in 2008), it turns out that all this, also, is entirely false. But my grandfather died before the supposedly certain science of experts was discredited, so his utility remained lowered.

These examples, taken from the relatively narrow area of milk products, are just one set of many examples in all areas of life of how we are constantly told that we must do something because “experts” say to do it.

But as Milk shows, “experts” have a miserable track record in their attempts to direct the lives of Americans, whenever they go beyond common sense (e.g., don’t drink clearly contaminated milk) and presume to tell us what we must do, usually despite basing their Moses-from-the-mountaintop recommendations on contradictory, minimal or zero evidence.

As a result, millions of people have died or suffered—solely because of what “experts” told us, frequently with the cooperation of officious ministers of the state, who adopt these recommendations and penalize or criminalize failure to comply. But why does all this happen, over and over again? Why don’t the “experts” learn to advocate public policy with humility and caution?

Examples beyond milk are legion. Sticking with food examples, the “experts” told us all that a low-fat diet was the way to go, for good health and long life. Now that’s considered false, and the obesity epidemic is blamed largely on the carbohydrates we were urged to eat while avoiding fat. And last week the “experts” performed a 180-degree about-face on the topic of feeding peanuts to infants.

I’ve had five children in the past nine years, and we were cautioned with the direst of warnings to never, ever feed them peanuts until the age of three. It was presented as the Gospel truth that we must do this, or we would be terrible parents endangering the lives of our children. During the twenty years of this recommendation, peanut allergies increased by 500%, and peanut allergies are now the leading cause of food-related anaphylaxis and death in the United States.

Now we are told to immediately do the opposite, and feed small infants peanuts, in order to avoid the very thing created by the thing we were told to do earlier.

Why, you may ask, do “experts” continually issue edicts that direct Americans what they must do, or face penalties, and why do they never show any shame, much less face any consequences, when they are proven wrong? It seems to me that to answer that question we have to ask why people, in any walk of life, whether “experts” or not, advocate any particular public policy.

(By “public policy” I mean a course of action that is either strongly recommended, in that failure to follow it is said to be certain to have material deleterious consequences to a specific individual or to some larger segment of society, or a policy that is enforced by state coercion).

Five possible non-exclusive reasons occur to me. I think that every person advocating a public policy is driven by one or more of these reasons, and by nothing else (unless they are insane or using a Magic 8-Ball to choose advocacy positions). Experts are merely people who supposedly have more knowledge; they are subject to the same analysis of their reasons. Those reasons are, in no particular rank:

1) A detached, purely objective analysis of alternatives has led to a conclusion the advocate has concluded is best for society. Let’s call this the “philosopher-king” reason for public policy advocacy.

(We can ignore for current purposes whether one can accurately determine what is “best for society,” as well as distortions to and failures of objectivity such as confirmation bias and tribalism, together with logical fallacies such as appeal to authority, to which “experts” are particularly prone, but which don’t change that the reason for choosing a position is objective analysis).

2) Money. This can mean direct payments, in the sense of corruption. But it more typically means that the advocate will economically benefit if a particular public policy position is adopted. What I mean here is not public policy effects that lift everyone; that falls under #1. Rather, I mean individualized benefit—for example, job promotions, grant money from the government to the advocate, or even things like luxury travel to conferences relating to a public policy.

This also includes simple economic security, such as job security—ensuring continued employment that might otherwise be at risk. It also includes third-party benefit, such as that resulting from nepotism.

If you asked a random person on the street, this is the only one of the drivers here that would likely be named. But it is probably the least important, despite what economic determinists and Marxists tell us. Sure, everyone wants money, but I think it’s rarely the most important driver of why someone desires a particular public policy.

3) The desire to feel superior to other people. This is a mostly overlooked driver of a huge amount of human action. Human nature being what it is, we all want to feel superior to others, and even better, to be recognized by others as superior, and even better, to be publicly so recognized. (See, for example, C.S. Lewis’s famous metaphor of the “Inner Ring”).

One way to achieve feeling superior is to advocate a public policy and attribute a moral component to it, which necessarily implies that the advocate is superior and those who oppose him are morally deficient and therefore inferior. (Fame is part of the feeling of superiority—technically, it’s not the exact same thing, but for these purposes I think the desire for fame and the desire to feel superior can be lumped together.)

The desire for superiority can be narrow – Professor X may want to feel superior to Professor Y in his same small department. Or it can be broad—Person X may want to feel superior to vast swathes of the deplorables in society as a whole. The refrain “we’re doing this for the children” is perhaps the best indicator that the real reason behind a policy position is the desire to feel superior.

4) The desire to control and have power over other people. Again, this is a mostly overlooked driver of human action. It is highly pleasurable to most people to push others around, whether they admit it or not.

Bullying is the most commonly remarked upon manifestation of this tendency, but it occurs everywhere in human relations, and in political systems—see, e.g., Orwell’s depiction of Communism in Animal Farm. Pushing others around is often justified by the pusher as doing something “for their own good,” when it is really the psychological good of the advocate that is being advanced.

5) The desire for transcendence—for meaning in one’s life. This is often the most important reason anyone does anything, and public policy advocacy is no exception. The advocacy itself may provide the meaning—“I am doing something.” But the advocacy itself may be a second-order effect. That is, the advocacy itself does not provide transcendence, but a particular person may find transcendence through a larger frame, of which the advocacy is merely a manifestation.

For example, religious belief may dictate a specific public policy, such that advocating the policy is implementing the framework that gives the advocate’s life its meaning. A pro-life activist is not given transcendence simply by fighting against abortion, but because that is part of a larger framework giving his life meaning.

Religious transcendence is easy to understand and identify; the two things necessarily go together. Thus, the innate nature of the human desire for transcendence is best seen not in religion, but in religion substitutes—notably Communism, but that was (and is) only the progenitor of a wide range of mostly left-wing religion substitutes, including environmental extremism and certain brands of feminism.

As Chesterton did not say, but should have, “When man ceases to believe in God, he does not believe in nothing, he believes in anything.”

As can be seen from this, there is very rarely any such thing as purely disinterested advocacy of a public policy. If you listen to those who publicly and loudly advocate public policies, they would have you believe that #1 is the only possible reason they advocate any particular public policy.

In fact, numerous people in this media-centric age have made a living out of casting themselves as impartial philosopher-kings, advocating public policies for supposedly purely rational, disinterested reasons. So, any time a Neil deGrasse Tyson or Bill Nye pushes a public policy (usually left-wing, although that’s not germane to this discussion, but may be indicative of something, as I discuss below), they claim to be driven by pure objective reason, but they are in fact driven by some combination of these factors.

The trick is finding out which factors are dominant, and using that to determine whether the advocacy has any merit for society at large, since factors #2 through #5 are in essence inapplicable to or antithetical to society at large, such that if any combination of those dominate, the advocacy is necessarily defective and should be ignored (and the advocate held in public contempt and, preferably, punished by society).

Let’s take Bill Nye’s position on global warming. He likes to call himself the “Science Guy,” and he got his start teaching children scientific facts through clever demonstrations of science experiments in educational programs. More recently, though, he’s taken aggressive public stands on public policy issues, of which global warming is only one (others include pushing for abortion rights and endorsing Barack Obama for political office). Why has he done this?

One possibility is that he has analyzed these policies and decided they’re objectively correct, and the world can benefit from his thoughts, without any benefit to him. Maybe.

He refuses to state his public policy advocacy rationales with any specificity, other than the usual vacuous and false “all the experts say global warming is an existential threat and we must pay any cost, immediately, to address that threat,” and he maintains the usual refusal to debate or even acknowledge competing viewpoints. So it’s hard to tell if he has done an objective, internally consistent analysis at all, though there is no indication he has.

But even if he has done so and that’s a reason for his advocacy of a global warming alarmist position, it’s only one reason. With respect to the other four possible reasons,

(a) Nye may or may not get more money as a result of his advocacy, but he definitely risks no financial penalty, since all the platforms on which he appears are controlled by those on the Left, who agree with him, and he gets job security because he can cry “persecution” if he is denied any job;

(b) he most definitely gets to feel superior, and to be repeatedly lauded as such on numerous public platforms, while making and being applauded for denigrating comments about those who disagree with him;

(c) he most definitely gets to control and have power over other people, by the nature of being a recognized Important Person whose advocacy is relevant, and by the declared intent of his preferred policies being massive direct control over billions, including by direct mandate and by limiting their life choices by making energy more expensive;

and (d) he probably achieves meaning in his life by his advocacy, although this is hard to tell without more evidence from Nye himself, being largely internal. But it is common for the successful (especially atheists like Nye) to, in the twilight of their careers, seek for larger meaning and a way to feel like they “made a difference,” and so transcendence is likely a reason for advocacy in his case—perhaps the overriding reason.

Therefore, based on this analysis, we can conclude that Bill Nye’s advocacy demanding public policy changes in response to global warming is largely or wholly worthless, since it is largely or wholly based on rationales that do not apply to society as a whole, but merely advance Bill Nye’s personal interests.

The same analysis applies, actually, to nearly all global warming alarmists, but even more strongly so. One frequently hears global warming alarmists jeer nervously at those who oppose their analysis and prescriptions, with some variation of “why would the experts claim it’s a problem if it’s not?”

These four reasons are why. Massive amounts of money all around the globe flow only to those pushing global warming alarmism; penury and obloquy are the lot of any scientist who dares to suggest not merely that global warming is a myth, but who makes any suggestion that cost-benefit analysis should apply or that it is possible we don’t actually understand climate at all (see, e.g., Roger Pielke).

(This is exacerbated by climate science being the short bus of science; the truly gifted go into areas like physics and have more options for making a living). The superiority that oozes off alarmists is so thick it nearly assumes physical form. All the solutions of global warming alarmists involve massively increasing power over ordinary citizens, both by government and by the advocates of political action based on global warming alarmism (see, e.g., the common demand that people who disagree with global warming alarmism be put in prison, or, in some cases, the public demand that they be killed).

And, most of all, global warming alarmism is very clearly a substitute religion, providing transcendence to its advocates, together with all the indicia of a religion, from sins to redemption to priests to indulgences.

So, while it appears plausible to a neutral observer (say, me) that modifying the atmosphere could have deleterious effects, and an objective analysis with that as a starting point would be nice, we can conclude that the alarmist industry as it exists is not primarily, or even to a significant degree, driven by objective analysis, and almost wholly, or wholly, driven by motives personal to the advocates, who should be held in contempt.

A very few advocates for public policies to address global warming escape this analysis, notably Bjørn Lomborg, but they are few indeed (and the treatment of them by the alarmist industry merely reinforces the above analysis).

Now, not all examples of “experts” pushing public policy are as baldly self-interested as global warming alarmists; they are probably at the extreme range of scientific unreliability due to the accrual of several factors other than rational objectivity. For a less extreme case, let’s take proponents of not feeding children peanuts before the age of three. Probably, the advocates of that public policy were mostly driven by factor #1, objective analysis.

They were just wrong, and most likely fell into various forms of bias and distorted thinking that made their conclusions false. Money was probably not overly important (unlike in the drive for fat-free foods, which was corrupted by money from the sweetener lobby). The other factors may have been important, overall or in certain cases; it is hard to tell.

Certainly, none of the advocates who were so wrong, and killed children with their erroneous advocacy, felt any need to express sorrow or shame, much less face any kind of punishment. This suggests that the desire to feel superior to others and to control them is relevant, because a normal person would feel compelled to abase himself for his error and the harm he caused—but that would undercut the feeling of superiority and control, so it is absent in practice, unless compelled, which it never is for “experts.”

Similarly, this is not to deny that it is possible to go too far the other way. Sometimes it is possible to base public policy on objective analysis. Cranks who reject all scientific evidence, from those who link vaccines to autism to those who think crystals have healing power, are just as subject to factors other than objectivity.

For example, someone who won’t vaccinate his children is subject to failures in #1 (in that the costs to children from not getting vaccinated are greater than even the claimed benefit), and is driven largely by #3 (superiority) and #5 (transcendence).

And there are probably quite a few public policy positions that don’t attract lots of public attention, and are therefore more likely to be based on objective analysis and less biased by other factors (though one can feel superior to, and desire to control, a small group as well as a large one).

Finally, this overall problem, of defective reasons being the real driver behind public policy advocacy, is less of a problem with the reality-based community, that is, with conservatives.

Liberals are more prone to derive their personal sense of meaning from politics, which is one of the reasons they try to politicize all areas of life. If you don’t advocate any public policy, or are neutral on what public policy will be chosen, you do not receive the positive reinforcement yielded by these drivers.

You have to get your personal utility, and your meaning, somewhere else. Conservatives are more likely to not focus on advocating public policies, and when they do are philosophically generally less subject to the temptations of control and transcendence (though, perhaps, not less subject to superiority).

Nonetheless, all people should be subject to the same analysis whenever they advocate for any public policy. And I conclude that trusting “experts,” unless a clear-eyed evaluation of their actual reasons for their positions is first made and the result is totally clear, is a fool’s errand.

Charles is a business owner and operator, in manufacturing, and a recovering big firm M&A lawyer. He runs the blog, The Worthy House.
The photo shows, “The Cigar” by Peter Baumgartner, painted in the latter half of the 19th-century.