Population And Its Decline

Anybody who has been paying attention has long grasped the truth: under-population, not overpopulation, is our problem. This will soon be true on a global scale; it is already true in most of the developed world. Empty Planet explains why this is undeniably so.

Unfortunately, the explanation is shrouded in confusion and ideological distortion, so the authors are never able to provide a clear message. Instead, they offer rambling, contradictory bromides combined with dumb “solutions” until the reader throws his hands up in despair, as I did. But then I got a stiff drink, finished the book, and now am ready to tell you about it.

The authors, two Canadians, Darrell Bricker and John Ibbitson, offer an apparently complete story. Every part of the world is becoming more urbanized. Urbanization causes a drop in the fertility rate, for three reasons.

First, when off the farm, children are a cost center, rather than a profit center. Second, urbanized women choose to have fewer children. Third, urbanization means atomization of social life, such that the networks in which people were embedded, most of which exercised pressure to have children, disappear, and if replaced, are replaced by friends or co-workers who do not exercise the same pressure. “Family members encourage each other to have children, whereas non-kin don’t.”

These causes of population decline are exacerbated by two other factors not tied to urbanization—the worldwide decline of religious belief, and lower infant and child mortality, which means people don’t have children as insurance. And the end of the story is that when the fertility rate drops far enough, it is, in the modern world, permanent. It is the “fertility trap,” analogous to the well-known “Malthusian trap.”

Why do urbanized women choose to have fewer children (aside from the other two stated reasons, expense and less family pressure)? The authors cite the desire for a career; the desire for autonomy and empowerment; the desire to escape the control of men; and the desire for “crafting a personal narrative.”

All of these things the authors tie to “education,” or, in their unguarded moments and more accurately, “being socialized to have an education and a career.” That is, modernity leads to women choosing to have fewer children, often no children at all, and far fewer children than are necessary to replace the people we have now.

Why the fertility trap? It’s due to two totally separate causes. One is mechanical—if a society has fewer children, obviously there will then be fewer women to bear new children. But the other is social. When there are fewer children, “Employment patterns change, childcare and schools are reduced, and there is a shift from a family/child oriented society to an individualistic society, with children part of individual fulfilment and well-being.”

In other words, it’s not a trap, it’s a societal choice. Interestingly, according to the authors, drops in the fertility rate, and therefore the fertility trap, are not the result of legalized abortion and easy contraception, as can be seen from examples of fertility problems prior to the 1960s.

For example, the birth rate was briefly at less than replacement in much of the West prior to World War II, when contraception was much less common, and abortion very much rarer (it is a total myth that illegal abortion was widespread prior to the modern era, at least in the West).

But abortion and contraception certainly contribute to the fertility trap. That is, it is societal factors that cause the fertility rate to drop, but all else being equal, the easier it is to prevent (or kill) children, the harder it is to climb back up. In any case, the result is the same—fewer people, getting fewer.
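
To make the mechanical half of the fertility trap concrete, here is a minimal back-of-envelope sketch. It is my illustration, not the authors’ calculation: the replacement rate of roughly 2.1 children per woman is the standard demographic figure, the rate of 1.4 simply echoes the number cited for China below, and the starting index of 100 is arbitrary.

# Illustrative sketch only; hypothetical inputs, not figures from Empty Planet.
# At a total fertility rate (TFR) below replacement (about 2.1), each generation
# of potential mothers is a fixed fraction of the one before it, so the decline
# compounds even if the TFR never falls any further.

REPLACEMENT_TFR = 2.1  # children per woman needed to hold a population steady

def cohort_after(generations: int, tfr: float, start: float = 100.0) -> float:
    """Relative size of the child-bearing cohort after N generations."""
    ratio = tfr / REPLACEMENT_TFR
    return start * ratio ** generations

for g in range(5):
    print(f"generation {g}: cohort index {cohort_after(g, 1.4):.0f}")
# prints 100, 67, 44, 30, 20

Even if the women of that fifth generation suddenly wanted three children apiece, the cohort bearing them would be a fifth of its original size; that is the mechanical half of the trap, before the social half even enters.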

Empty Planet then sequentially examines Europe, Asia, Africa, and South America. There is a great deal of annoying repetition. Nonetheless, there is also much interesting data, all in support of the basic point—population everywhere is going to go down, soon and fast. True, the United Nations predicts that global population will top out at eleven billion around 2100, and then decline.

The authors instead think, and make a compelling case, that the United Nations overstates fertility in the twenty-first century. They argue, and do a good job of demonstrating why, that population will top out at nine billion around 2050 (it is seven billion now) and then decline. Some declines will be precipitous and startling—China, currently at 1.4 billion but deep into the fertility trap, will have 560 million people by the end of the century.

Strangely, the authors do not calculate global population estimates around, say, 2150, but eyeballing the numbers, it appears the total will be around two or three billion, maybe less—and heading downward, fast.

Bricker and Ibbitson are not kind to overpopulation doomsayers. They note how completely wrong those of the 1960s and 1970s, such as the infamous Paul Ehrlich, have been proven. (Charles Mann does it better in his excellent The Wizard and the Prophet).

Bizarrely, Ehrlich is unrepentant, to a degree that suggests he is unhinged; the authors quote him as saying in 2015, without any reasoning, “My language would be even more apocalyptic today,” and analogizing children to garbage.

They don’t believe modern doomsayers are any more correct. Most simply have no factual basis for their claims, which are at bottom anti-human claims of a religious nature. The authors even dare to note the obvious fact that the United Nations, a device primarily used to extract money from the successful economies of the world and give it to the unsuccessful, has a vested interest in exaggerating the problems of the backward parts of the world.

So what problems result from an aging and then declining global population? Economic stagnation is what the authors focus on. This is driven by less consumer demand, but also, less visibly but more importantly, by less dynamism.

Old people are takers, not makers. Moreover, they don’t do anything useful for driving society forward, let’s be frank. Not that the authors are frank; they skip by the dynamism problem without much comment, though at least they acknowledge it. But the reality is that for human flourishing, the dynamism of the young is everything, and far more important than consumer demand.

One just has to think of any positive accomplishment that has changed the world, in science, art, exploration, or anything else. In excess of ninety percent of such accomplishments have been made by people under thirty-five. (Actually, by men under thirty-five, for reasons which are probably mostly biological, but that is another discussion).

The simple reality is that it is the young who accomplish and the old who do not. And when you have no young people, you have no accomplishments. Our future, on the current arc, is being the Eloi; hopefully there will be no Morlocks.

Governments from Germany to Iran recognize this problem. The authors give numerous examples, all failures, of trying to resolve the problem by, in effect, begging and paying women to have children. Even here, the authors feel obliged to tell us “The idea of governments telling women they should have more babies for the sake of the nation seems to us repugnant.”

We are not told why that should be so, probably because it is obviously false, but regardless, it is clear that a modern government merely instructing or propagandizing women isn’t going to do the trick.

What is the authors’ solution, then? They don’t have one. Well, they have a short-term one, or claim to. Much of the back half of the book is taken up with endless variations on demanding that the West admit massive amounts of Third World immigrants.

The claimed reason for this is necessity—without immigration, Europe and North America will not have enough taxpayers to support the old in the style they desire. The authors realize the disaster that has befallen Europe from admitting alien immigrants with nothing but their two hands. (They claim to reject the Swedish “humanitarian” model, but all their soaring language of untethered and unexplained moral duty implicitly endorses it.)

Instead, they recommend that America adopt the Canadian system, under which only the cream of the crop, educated and with job skills, is admitted—but we must, must, must immediately admit no fewer than 3.5 million such immigrants every year.

And, of course, they fail to point out that the cream of the crop is by definition a tiny percentage of the overall number of immigrants, so how exactly we are going to welcome only these worthwhile immigrants is not clear, especially if other countries are competing for them.

Nor do the authors point out that at best, this is a short-term solution—if every country in the world will soon have a less-than-replacement birth rate, emigration will soon enough become rare, so no amount of competition will attract enough people.

Therefore, their “solution” is no solution at all, and beyond this, Bricker and Ibbitson have nothing to offer, except muttering about how it’ll be nice to have a cleaner planet when there are no people to enjoy the clean planet.

I note that the authors do not tell us how many children they have, which seems highly relevant. If you are going to be a prophet, best inspect your own house, or acknowledge that others will find it relevant. If you dig, Bricker has one child, a daughter. Ibbitson appears to have no children. I cannot say why, of course, and it would be unfair to assume a selfish choice.

But whatever the reason, it is undeniably true that as a result they have less investment in the future than people with children. (Since you ask, I have five children. I am part of the solution, not part of the problem.) Maybe this is why finding a solution isn’t very important to them.

The book has many annoying inaccuracies that seem to be endemic among this type of popular writing, where editors appear to be permanently out to lunch.

It is not true that the nursery rhyme “Ring Around the Rosie” refers to the Black Death. The authors offer a half-page or so parsing the rhyme, but that’s an urban legend—the rhyme first appeared around 1800. (Even Snopes, the left-wing political hack site notorious for lying propaganda, is correct on this, probably because there is no political element.)

The word “dowry” only refers to payments made to the groom’s family; similar payments made to the bride’s family are “bride price.” The G.I. Bill did not create the American interstate highway system. The term is “cleft palate,” not “cleft palette.”

India’s economic stagnation for decades after independence was not due to “protective tariffs”; it was, as everybody who is not a Marxist admits, due to socialism, exacerbated by the refusal of outside capital, along with the Permit Raj. (Tariffs make perfect sense for many developing countries that rely on import substitution to grow their economies; both Britain and the United States used them extremely successfully.)

The fifteenth-century Portuguese caravel was not based on Muslim technology. The wave of migrants into Europe that peaked (maybe) around 2016 was driven by economics, not war, and not a single person in Europe believes what the authors repeatedly claim, that most of those people will return to their countries of origin soon. Or ever.

Sloppiness of this type makes the reader wonder about the other, more critical, factual claims in the book.

So that’s Empty Planet. All of it could have been said in twenty or thirty pages. On the surface it’s a pat story, though one without a happy ending. That’s not for the authors’ lack of trying to be happy. Normative judgments abound, all of them oddly in tension with the gloomy top-level attitude of the book toward the problem of under-population.

Thus, the authors assume that large populations are necessarily terrible for those who live among them; adjectives such as “miserable” abound for any people born in a high birth-rate country. Not for them any acknowledgement of Angus Deaton’s point in The Great Escape that people in poor countries are generally very happy.

All population control is referred to with adjectives such as “beneficent.” We are didactically instructed that “Sex education and birth control [are] good things in and of themselves.” And in what may be the single most clueless paragraph in a book chock full of them, the authors offer this:

“Small families are, in all sorts of ways, wonderful things. Parents can devote more time and resources to raising—indeed, cossetting—the child. Children are likely to be raised with the positive role models of a working father and working mother. Such families reflect a society in which women stand equally, or at least near equally, with men in the home and the workplace. Women workers also help to mitigate the labor shortages produced by smaller workforces that result from too few babies. It isn’t going too far to say that small families are synonymous with enlightened, advanced societies.”

Given that the entire point of the book is that small families are a disaster for humanity (however much the authors try to deflect that obvious conclusion with unpersuasive and unsupported claims such as “Population decline isn’t a good or a bad thing”), this type of thing suggests, to be charitable, cognitive dissonance.

Not to mention that cosseting children is not a good goal, although it’s not surprising that two people with one child between them think so, and that prescribing more work outside the home for women, when such work is part of the problem, seems, um, counter-intuitive. But as we will see, this paragraph gives us a clue to what is really driving human population collapse.

Let’s try to figure out what’s really going on, because despite seeming to be so, the authors’ story is not complete. If you look at the story from another angle, not the one of received wisdom, strange unexplained lacunae appear within the text.

The fertility rate in the United States and Britain began to drop in the early 1800s, but only at the end of the 1800s on the Continent, even though urbanization came sooner in the latter, and the United States was almost all agricultural in the early 1800s. “In France, oddly, fertility declines were already underway by the late 1700s. No one is sure why. . . .” “Fertility rates appear to have increased in France and Belgium during the Second World War, even though both countries were under German occupation or control and supplies such as food and coal were increasingly scarce.”

Some countries that are largely poor, uneducated, and not urbanized (Brazil, Mexico, Uruguay) have extremely low fertility rates, while other, very similar-seeming countries still have high rates (Paraguay, Honduras, Guatemala). Uneducated Brazilian favela dwellers, normally the type of people who have lots of children, have experienced a big drop in fertility.

And on, and on, strange tidbits that jut out from the authors’ narrative, not fitting into the just-so story of urbanization followed by an inevitable and necessary choice to stop having children.

What could explain all these facts? The authors certainly don’t know. But I do. What brings together all these seeming outlier facts, and in the darkness binds them, is the inevitable human tendency toward selfish self-interest. Once this was universally recognized as vice, but it has always been recognized as a large part of what drives human beings unless we struggle against it.

The creation of virtue, through self-discipline, self-control, and, in Christian thinking, caring for others at our own expense, aiming at true freedom and the common good, was once the ideal.

Virtue helped control our baser impulses, and was the goal toward which a good and well-formed person was expected to strive and to lead others. It was, and is, the opposite of “living as one likes,” of the quest for supposed emancipation.

Having children is among the least selfish and most self-sacrificing things a woman, and to a lesser extent a man, can do; thus, when being selfish and self-centered both become exalted, we have fewer children. It is not a mystery.

How did we get here? As the result of two late-eighteenth-century developments.

The first, the fruit of the Scientific Revolution and the Industrial Revolution, is wealth. I have pondered whether a rich society can ever stay a virtuous society, and population decline is merely a subset of this question.

The second, the fruit of the Enlightenment (which had nothing to do with the Scientific Revolution or the Industrial Revolution), is the exaltation of individual autonomy, of self-actualization as the goal of human existence.

The problem with urbanization and its impact on birth rates, especially in the West, is not something inherent to urbanization, but that city dwellers are more wealthy (or at least exposed to wealth) and have, in practice, fallen prey more easily to Enlightenment ideas.

Either of these anti-virtue developments can crash fertility by itself. Combined, they are lethal to human progress. For example, a rich society, such as Venice in the 1600s, need never undergo the Enlightenment, yet wealth alone will lead to depopulation, as virtue fades and pursuit of self becomes exalted.

And a poor and not urbanized society, such as late 1700s France or early 1800s America, can experience an ideological erosion of virtue solely through embracing Enlightenment principles. Or, to take a more modern example, the South American countries with high rates of fertility are those that are still strongly Christian, and hew to the Christian virtues.

The authors themselves note this correlation, but gloss over the implications. Similarly, poor Brazilians are not converted to the gospel of self directly by Rousseau and Locke, or by wealth, both of which they totally lack, but indirectly by both—by obsessive watching of telenovelas, the plots of which, as the authors note, “involve smaller families, empowered women, rampant consumerism, and complicated romantic and family relationships.”

For a final set of proofs, it is obvious from Empty Planet’s own statistics, though apparently not obvious to the authors themselves, that as the material blessings of the West finally spread around the world, fertility rates drop in tandem with adoption of the West’s techniques for acquiring wealth, further exacerbated when countries adopt Enlightenment values.

And to the extent a country’s elite pushes back against Enlightenment values, as in Hungary and Russia, some progress can be made in increasing birth rates. Similarly, when a country’s people experience shared challenges, social pressure against atomized Enlightenment individual autonomy can increase greatly, resulting in more children.

Such was apparently the case in wartime Belgium and France. It is also why Jews in Israel, alone among advanced economies, have a birthrate far in excess of replacement, even if you exclude the Orthodox. They value something beyond their own immediate, short-term desires, which counterbalances the natural human tendency towards vice.

We can now explain what the authors could not. The real, core reason for population decline is that children reduce autonomy and limit the worship of self. Children reduce autonomy even more for women than men, as a biological reality, so as women are culturally indoctrinated that they must have autonomy, they choose to have fewer children. (Men also want more autonomy, of course; that is why men support legal abortion more than women).

True, women don’t really get freedom as a result; for the most part, they get the opportunity to join the rat race for more consumer goods, and as is easy to demonstrate, they are no happier as a result. Probably most are far less happy, and very often, if not nearly always, regret having not had children, or more children.

Modern societal structures make this worse. To take a bitter, if funny, example: dining with a group of young couples in Brussels, who among the twelve of them have two children, the authors note, “Most of the men are students or artists, while the women work and pay the rent.”

When men won’t fulfill their proper role as breadwinner and protector, it’s no wonder that women find bearing and raising children less attractive, totally aside from their own personal desire for autonomy.

And, finally, back to consumerism, the belief among both men and women that both they and their children must have the latest and mostest consumer goods, and that if something has to give to make that possible, it should be bearing children, is yet another manifestation of the cult of self.

The problem of declining population is fatal for any progress for the human race, so, naturally, given my desire to organically remake human society to flourish, expand, and accomplish, it’s necessary to solve this problem. (Not just for me, of course—any political program must deal with the underpopulation bomb).

I don’t think this is a narrowly resolvable problem—that is, there is no technical solution that does not also involve remolding human society, or at least some human societies. Certainly, some structural measures can and should immediately be taken in any well-run society.

Economic incentives are part of it, including cash payments to mothers, increasing with the number of children, and increasing further to the extent the mother stays home to take care of the children. Societies where women are expected both to do all the work of raising children and to earn money, notably Japan, Korea, and Italy, have among the lowest birth rates. Cash isn’t an adequate substitute for family frameworks, but it can help at the margin. Perhaps more, if enough cash is devoted to it.

Hungary, for example, yesterday announced a massive package of such incentives, including that women who have borne and raised four or more children are permanently exempt from all income tax. There should also be an enforced absolute ban on abortion in all circumstances, as well as on no-fault divorce (and the party at fault in a divorce should face severe financial penalties).

Other structural incentives for women to bear and raise children should similarly be put into place. Those are not only cash-based—for example, the Hungarian initiative also raises the social credit, as it were, of child-bearing and child-rearing. A woman who is called “breeder” by her friends when she says she wants a second or third child is less likely to do so than one who knows she will instead be admired and envied by both friends and strangers.

But all technical structural measures are completely inadequate without genuine societal change. You have to create a feedback loop. That’s how we got here, after all—more atomization leads to more atomization. Under the right circumstances, more virtue can lead to more virtue. It seems to me that the only hope for this is a societal rework, which, not coincidentally, is precisely what I am pushing.

The problem is that my end-state doesn’t comport with inherently selfish human desires. Thus, a feedback loop is harder to create and maintain. It probably requires some external goal for a society, combined with an outward-looking optimism that cannot be artificially created or maintained, but must be a groundswell within society, beginning with a virtuous and self-sacrificing ruling class (no points for guessing if that’s what we have now).

I suspect the only way forward is to provide such a societal goal that supersedes selfishness, while permanently ending the failed Enlightenment experiment on every level, and creating a new program that, in many ways, resembles earlier Western structures.

Even so, I am not certain it is possible to create an advanced, wealthy, urban society, not dedicated to extreme personal autonomy, with a high birth rate. But let’s say it is, and we can get there, and global population continues to expand, or rebounds, to more than current projections.

Considerable increases in current human population, maybe to fifteen or twenty billion, probably would be good for humanity overall. True, large populations can be challenging, and can, in certain circumstances, result in massive problems. Some of those circumstances are physical—it would be very difficult to have 100 million people live within 50 miles of the Arctic Circle.

But most of those circumstances are cultural—when you have an inferior culture, it makes it much harder to provide for everyone. The converse, though, is that if you change your culture, your opportunities expand. (Nor should we forget that England created the modern world when her population, at the time of Malthus, was nine million in a world population of a billion, so small numbers can do great things, and culture is everything.)

I am a big believer in, to use Charles Mann’s words, the ability of Wizardry to provide solutions to challenges such as increasing population. If that is true, an increasing population with many young people is a dynamic population, and as long as global culture is not deficient, but rather contains much excellence, then having not an empty planet, but a filled planet, is highly desirable.

Therefore, I am not as pessimistic as Bricker and Ibbitson. But we will all be long dead before we find out who is right, so all we can do is try to lay the groundwork for our children, and their children—and to make sure all those people exist.

Charles is a business owner and operator, in manufacturing, and a recovering big firm M&A lawyer. He runs the blog, The Worthy House.

The photo shows, “The School Walk,” by Albert Anker, painted in 1872.

The Soviet Search For Immortality

Given the rumors, Russians often wish all those theories about our super-soldiers and X-Men skeletons were true. Alas, the Soviet Union only went as far as trying to make immortal politicians (not as cool – but still cool, right?).

Not long before the death of Vladimir Lenin in 1924, a clandestine society emerged in Russia. Its members would conspire to meet in safe houses where they summoned volunteers to take part in blood transfusions. Creepy, right? You may be forgiven for thinking this was a sect or a religious cult, but in fact, the organization was run by a very sane Bolshevik higher-up, Alexander Bogdanov (real name Malinovsky), a close ally of Lenin, co-founder of the party and the noted scientist behind the Socialist Institute.

“The great visionary”, as he was called by followers, was trying to unlock the secret to immortality.

Bram Stoker’s ‘Dracula’ had found great favor with readers in the Russian Empire, including Nicholas II himself. This fascination carried over into Socialist times. The meanings of blood and sacrifice took on a mystical fervor in a country that had just lost two million people in a war the likes of which the world had never seen in scale or efficiency of brutality.

“Why couldn’t they just resurrect him?” wrote many in army circles about the 1924 demise of Vladimir Lenin. The idea that a figure of such colossal stature could die was unfathomable.

Lenin appeared to have been worn down by stress, exhaustion and malnutrition – all leading to a whole bouquet of symptoms afflicting nearly every old-school ruling class Bolshevik barely in his mid-thirties. They hadn’t even had time to properly start ‘emancipating the world from capitalist tyranny’. Something had to be done.

It is no secret that Russia at the dawn of the Bolsheviks was a highly experimental country. No stone was left unturned in the search for the perfect Russian – including the famous sex reforms.

Given blood’s mystical allure, some scientists of the time also theorized that a person’s entire personality, soul and immune system were contained in their blood.

Bogdanov was such a scientist. Not only that – he was a polymath and an avid stargazer with a deep fascination for Mars, which he envisioned as a sort of socialist utopian society of blood brothers. These ideas laid the foundations for his novel, ‘The Red Star’, about a scientist who travels to the Red Planet, and finds out that the Communists there had almost attained immortality, all thanks to this culture of blood.

Lenin was disappointed with Bogdanov’s preoccupation with fantasy and sci-fi, leading to a rift between the two, Lenin believing that Bogdanov was making people chase foolish dreams instead of focusing on the work of forging the Revolution. But Bogdanov was too useful at the time, being the second figure in the party – he had directed the Bolsheviks during Lenin’s exile.

Even so, their camaraderie could not survive their differences: Lenin advocated for dialogue and cooperation, including participation in the Duma – Russia’s legislative body. Bogdanov wanted no part in it, leaning even further left than Lenin himself.

Together with his friend, Leonid Krasin, Bogdanov set up a military wing under the RSDLP’s  Central Committee. Money from its expropriations would be distributed to the various organizations controlled by Lenin and Bogdanov. The latter was furious that more money seemed to be going to Lenin’s cause.

Bogdanov would soon be expelled from the Workers’ Party. The two were split on their interpretation of Marxism, and Lenin’s works had begun to reflect that, calling out Bogdanov for his “bourgeois” outlook. At that point, even Lenin’s family thought he could’ve taken it down a notch. But the Bolshevik was having none of it – even banning Bogdanov’s novels from being read in the household.

Bogdanov, on the other hand, thought of Lenin’s ideals as those of ‘absolute Marxism’ – “the bloodsucker of the Old World,” turning its followers into vampires, chief among them Lenin. Bogdanov had lost his party, his job and his credibility while exchanging literary jabs with people he considered his comrades.

After the devastation of WWI, however, a glimmer of light had appeared: “science can do anything” was to be the mantra of the 1920s-30s.

Mikhail Bulgakov had then just written his brilliant piece of sci-fi satire – ‘A Dog’s Heart’, about a dog surgically transformed into a man, another telltale sign of the times. It became obvious that science was beginning to take inspiration from fiction, with Bogdanov as the main proponent.

Bogdanov knew little of what we know about blood today – from blood groups and the Rh blood system to a whole host of other factors. His science was fraught with danger, with him as the most frequent guinea pig.

The blood would be taken from patients, poured into a sterile container and mixed with an anti-clotting agent, before the transfusions took place. They would have to be fast as well, to prevent bacterial growth.

Bogdanov’s fan base grew as this borderline-mad experimentation began to show signs of progress: Bogdanov himself was said to have begun looking 5-10 years younger, while his wife’s gout also began showing signs of improvement. People couldn’t believe their eyes!

It wouldn’t take long before Stalin himself was bitten by the science bug, leading him to call upon Bogdanov and his experimentation, and even to suggest that he rejoin the party from which he had been expelled by Stalin’s predecessor.

Stalin was certainly no Lenin, and believed he needed every edge when (not if) the next World War took place. No money was spared to find a military application for the transfusions.

The Institute for Blood Transfusion was set up in 1926 on the leader’s orders, with Bogdanov as its director. The fascination with blood brotherhood expressed in his Martian sci-fi novel would finally begin to bear fruit.

Tragically, the mad scientist and sci-fi Bolshevik had not had enough time to properly study the effects of his rejuvenation procedures. No one then had any real understanding of erythrocytes or plasma, or of the checks and practices in place today for a successful transfusion.

Bogdanov was very interested in whether a person’s entire immune defenses were also transferred through blood. It seemed that a young man suffering from tuberculosis was the perfect candidate to test that theory.

A liter of blood was exchanged between the patient and the ‘doctor’.

It didn’t help that Bogdanov had been comparing his own blood to that of Dracula – immune to human afflictions. That twelfth transfusion would become his last. In the space of three hours, both started to suffer a steady deterioration: fever, nausea, vomiting – all signs of a serious poisoning.

However, Bogdanov decided to keep the transfusion under wraps. On that excruciatingly painful day, he’d felt even worse than the ailing Kaldomasov – the tuberculosis sufferer. He nonetheless refused treatment, in a vain attempt to understand what had happened.

Bogdanov’s kidneys gave out in 48 hours, resulting in death from a hemolytic reaction. His last words, according to Channel 1’s interview with his descendant, the economist Vladimir Klebaner, had been “Do what must be done. We must fight to the end.” He died on April 7, 1928, aged 54.

But what of the student? The 21-year-old had lived. The doctors couldn’t tell why, even after another last-minute transfusion had failed to save Bogdanov from death. It would later become apparent that this final procedure wasn’t the culprit (both he and Kaldomasov were type O) – but the 11 preceding ones had been, creating antibodies in Bogdanov to the degree that even the correct blood would have been rejected. That’s all we know.

Stalin was very angry. Having pledged tens of thousands of rubles toward Bogdanov’s blood institute, the Soviet leader now began to think that all scientists were charlatans and extortionists.

In the end, however, it was thanks to Bogdanov’s work that Soviet hematology got a much needed push forward.

The photo shows, “Ivan the Terrible and his son,” by Ilya Repin, painted in 1885.

Lenin: The Giant Mushroom

In 1991, just months before the collapse of the USSR, Soviet audiences witnessed a shocking scene on the television program Pyatoe Koleso (The Fifth Wheel). Two serious-looking men – Sergey Sholokhov, the host, and his guest Sergey Kurekhin, an underground musician and writer introduced as “politician and actor” – were sitting in a studio discussing the October revolution of 1917.

Suddenly, Kurekhin offered a very interesting hypothesis – that Vladimir Lenin, the Bolshevik leader, was not a human being but a mushroom.

Kurekhin started with a rambling discourse on the nature of revolutions and his trip to Mexico where, in ancient temples, he had seen frescoes closely resembling the events of 1917. From there, he moved on to the author Carlos Castaneda who described the practices of Central American Indians of using psychotropic drinks prepared from certain types of cacti.

“Apart from cacti, Castaneda describes mushrooms as special products with a hallucinogenic effect,” Kurekhin continued and then quoted Lenin’s letter to leading Marxist Georgi Plekhanov: “Yesterday I ate many mushrooms and felt marvelously well”. Noting that Russia’s fly-agaric mushroom has hallucinogenic effects, Kurekhin assumed that Lenin was consuming these kinds of mushrooms and had some kind of psychedelic, mind-altering experience.

It was not only Lenin who dabbled in such fungi, but other Bolsheviks as well, Kurekhin claimed. “The October revolution was made by people who had been consuming hallucinogenic mushrooms for years,” he said with a poker face. “And Lenin’s personality was replaced with that of a mushroom because fly-agaric identity is far stronger than a human one.” Therefore, he concluded, Lenin became a mushroom himself.

After that sensational statement, the program went on for another 20 minutes, with Kurekhin and Sholokhov citing endless “evidence” of Lenin’s affinity for mushrooms, starting from his passion for collecting fungi and going so far as to compare a photo of an armored vehicle Lenin once posed on to fungal mycelium.

At some point, both couldn’t help but laugh after stating that the Soviet hammer-and-sickle symbol was, in fact, a combination of a mushroom and a mushroom picker’s knife. But even the laughter didn’t prevent thousands of people from taking the program seriously.

“Had Kurekhin been speaking of anyone else, his words would easily have been dismissed as a joke. But Lenin! How could one joke about Lenin? Especially on Soviet television,” Russian anthropologist Alexei Yurchak said to explain the gullibility of many Soviet viewers. He emphasized that viewers didn’t necessarily believe that Lenin was a mushroom – but they treated Kurekhin as a serious researcher, calling the television station and writing letters demanding that it confirm or refute the idea of the Bolshevik leader being a fungus.

Sergei Sholokhov, who made the program together with Kurekhin, later said: “The day after the show aired, a delegation of old Bolsheviks went to our local Communist party boss who was in charge of ideology and demanded an answer – was Lenin a mushroom or not. She answered with a fierce ‘No!’ claiming that ‘a mammal cannot be a plant’.”

Both he and Kurekhin were quite shocked by such an answer, Sholokhov notes. On the other hand, Sholokhov may have made the story up – just as he and Kurekhin (who died in 1996) did with the TV show.

It was Kurekhin, a humorous hoaxer, who came up with the idea. In the late 1980s and early 1990s, the world of Soviet media was changing, and as journalists enjoyed more freedom, some of them were talking nonsense.

As Kurekhin’s widow Anastasia recalled, “Once we saw a TV show on the death of Sergey Yesenin (the Russian poet who committed suicide in 1925). The host built his ‘proof’ that Yesenin had actually been killed on absolutely absurd arguments. They showed photos of the poet’s funeral and said: ‘Look, this man is looking this way and that man is looking the other way, so it means that Yesenin was killed.’” Kurekhin saw it and said to Anastasia: “You know, you can prove anything using such ‘evidence’.” And so he did.

Alexei Yurchak explains that the hoax and people’s reactions to it were a good illustration of how people, no matter where they live, tend to trust the media without checking facts. “If there’s something in the media, there must be something to it,” Yurchak wrote. Kurekhin’s provocation was a hilarious way to prove how easy it is to feed people the most bizarre nonsense if you sound confident enough.

 

Oleg Yegerov writes for Russia Beyond, through whose courtesy this article is provided.

The Very Idea Of Technology

Whenever people are trying to define the modern age, there’s an inevitable phrase that gets tossed around. We hear it all the time – “We are an age of technology.”

And when people are asked what this phrase means, they invariably generate a list – cars, televisions, space probes, computers, the microchip – all things that were mostly science fiction just a hundred years ago. How did we come so far, so quickly?

But are we technological because we have more gadgets than, say, the ancient Egyptians, who, after all, did build the pyramids? Our culture is different from that of the ancient Egyptians. How so?

Our age is technological not because of gadgets, but because of the idea of technology. The gadgets are a mere by-product. The way we think is profoundly different from all previous human civilizations.

We perceive things in a systematic way. We like to build conceptual structures. We like to investigate and get at the root causes of things. We like to figure out how things work. We see nature, the earth, the universe, as a series of intersecting systems. And this difference is the result of technology.

Essentially, we are dealing with two Greek words: techne and logia. Techne means “art,” “craft,” or “handiwork.” But logia is more interesting. It means “account,” “word,” “description,” and even “story.”

It is the root of other important words in English, such as “logistics” and “logical.” And it even reaches into the spiritual realm, where “Logos” is intimately connected with the mystery of God in Christianity, where God (Logos) is made flesh in Jesus Christ.

Therefore, technology is not really about gadgets. The word actually means “a description of art,” or “a story of craft, handiwork.” Anything we create is technology. Be it the microchip, a film, a novel, an airplane, or a poem.

But this is only the first layer. We need to dig further. Why do we use a Greek word in the first place? This question lets us dig right down to the foundations.

The word is Greek because the idea is Greek. This is not to say that other cultures did not have technology; they certainly did; the Pyramids are certain proof of that, as are the Nazca lines in the desert.

However, we have already established that technology is not about gadgets, or objects that we create. It is a particular mind-set.

Technology is visualizing the result, or perhaps uncovering that which lies hidden within our imagination. It really is still about giving an account of art, about what we can do with our minds.

But how is all this Greek?

The idea of technology was given to us by one specific person – the Greek philosopher, Aristotle (384-322 BC).

As a young man, Aristotle found himself in Athens, listening to the already famous Plato (428-348 BC).

But the pupil would become greater than the master. Interestingly enough, Aristotle too had a famous pupil – Alexander the Great. Aristotle certainly had the ability to transform the way people thought – down to the present.

It was Aristotle who stressed the need not only for science, but for a conceptual understanding of science. It was not enough just to be able to do things, such as the craftsmanship passed down from father to son in his own day, and in many parts of the world today.

It was important to understand how things were, and why they functioned the way they did.

It was Aristotle who taught us to break down an object into its smallest parts so we can understand how it is built and how it operates. Where would science be today without this insight – which we now take as common sense?

But before Aristotle, it was not common sense. The common sense before his time was to accept things the way they were, because the gods had made them that way, and who were we to question the will of the gods? This was the pre-technological mindset.

Aristotle, like Plato before him, taught that nature and human beings behave according to systems that can be recorded and then classified, and understood and then applied. These categories provided mental frameworks within which we could house our ideas.

Therefore, if nature is a system (and not mysterious and unknowable), then it can be understood. And if it can be understood, it can be controlled. And if it can be controlled, then we can avoid being its victims.

Our ability to classify, categorize, and explain – in short, our technology – is the invention of Aristotle. Before he came along, we were only groping in the dark – if we dared grope, that is.

 

The photo shows, “Cyclist Through the City” (“Ciclista attraverso la città”), by Fortunato Depero, painted in 1945.

Bertrand Russell: Preliminary Remarks

Bertrand Arthur William Russell was born on May 18, 1872 into a privileged family. His grandfather was Lord John Russell, who was the liberal Prime Minister of Great Britain and the first Earl Russell. Young Bertrand’s early life was traumatic. His mother died when he was two years old and he lost his father before the age of four.

He was then sent to live with his grandparents, Lord and Lady John Russell, but by the time he was six years old, his grandfather also died. Thereafter, his grandmother, who was a strict authoritarian and a very religious woman, raised him.

These early years were filled with prohibitions and rules, and his earliest desires were to free himself from such constraints. His lifelong denial of religion no doubt stems from this early experience. His initial education was at home, which was customary for children of his social class, and later he went to Trinity College, Cambridge, where he achieved first-class honors in mathematics and philosophy.

He graduated in 1894, and briefly took the position of attaché at the British Embassy in Paris. But he was soon back in England and became a fellow of Trinity College in 1895, just after his first marriage to Alys Pearsall Smith. A year later, in 1896, he published his first book, entitled German Social Democracy, which he wrote after a visit to Berlin.

Russell was interested in all aspects of the human condition, as is apparent from his wide-ranging contributions, and when the First World War broke out, he found himself voicing increasingly controversial political views. He became an active pacifist, which resulted in his dismissal from Trinity College in 1916, and two years later, his views even led him to prison. But he put his imprisonment to good use and wrote the Introduction to Mathematical Philosophy, which was published in 1919.

Since he no longer had a teaching job, he began to make his living by giving lectures and by writing. His controversial views soon made him famous. In 1920, he visited the newly formed Soviet Union, where he met many of the famous personalities of the Russian Revolution, which he initially supported.

But the visit soured his view of the Socialist movement in Russia and he wrote a scathing attack that very year, entitled The Practice and Theory of Bolshevism. By 1921, he had married his second wife, Dora Black, and began to be interested in education. With Dora he created and ran a progressive school and wrote On Education (1926) and, a few years later, Education and the Social Order (1932).

In 1931, he became the third Earl Russell, and in 1936 he divorced and married his third wife, Patricia Spence. By this time, he had become deeply interested in morality, having written about the subject in his controversial book Marriage and Morals (1929).

He had moved to New York to teach at City College, but he was dismissed from this position because of his views on sexuality (he advocated a version of free love, where sex was not bound up with questions of morality). When Adolf Hitler came to power in Germany, Russell began to question his own pacifism and by 1939 had firmly rejected it, and campaigned hard for the overthrow of Nazism right to the end of the Second World War.

By 1944, he was back in England from the United States; his teaching position at Trinity College was restored to him, and he was granted the Order of Merit. He won the Nobel Prize for Literature in 1950. During this time, he wrote several important books, such as An Enquiry into Meaning and Truth (1940) and Human Knowledge: Its Scope and Limits (1948).

His best-known work from this time is History of Western Philosophy (1945). He also continued writing controversial pieces on social, moral and religious issues. Most of these were collected and published in 1957 as Why I Am Not A Christian.

From 1949 onwards, he was actively involved in advocating nuclear disarmament. In 1961, along with his fourth and final wife, Edith Finch, he was again put into prison for inciting civil disobedience to oppose nuclear warfare. He spent his final years in North Wales, actively writing to the very last. He died on February 2, 1970.

His range of interests took in the various spheres of human endeavor and thought, for not only was he engaged with mathematics, philosophy, science, logic and the theory of meaning, but he was deeply interested in political activism, feminism, education, nuclear disarmament, and he was a ceaseless opponent of communism. His ideas have greatly influenced the world we live in.

So pervasive is his influence that contemporary culture has seamlessly subsumed the ideas he introduced so that we no longer recognize his impact.

For example, his ideas have forever changed, on a fundamental level, the way philosophy is done, the way logic is handled, the way mathematics and science are understood, and the views we hold of morality, marriage, and the nuclear family; even the various attempts to stop the spread of nuclear arms owe their beginnings to Russell.

At the very heart of Russell’s thought lies the concept, first elucidated in The Principles of Mathematics, that analysis can lead to truth. By analysis he means the breaking up of a complex expression or thought in order to get at its simpler components, which in turn will reveal the meaning or truth.

Thus, the method involves moving from the larger to the more specific, from the macro to the micro. Russell arrived at this process by suggesting that mathematics and natural languages derived from logic. He extended his approach and stated that the structure of logic could be a useful tool in helping us understand the human experience, which in turn would lead to the working out of disputes.

Thus, in A History of Western Philosophy he shows how the structure of logic is consistent with the way the world works, namely that reality itself is paralleled in logic.

Therefore, this blending of logic and the need to arrive at the truth of reality highlights the second important concern for Russell, namely, metaphysics. In fact, both logic and metaphysics unite and give philosophy its unique approach to uncover truth, which for Russell leads to the understanding of the universe and us. It is this concept that he explores fully in Our Knowledge of the External World.

Although logic is essential to Russell’s philosophy, it is not synonymous with it. Rather, philosophy is to be seen as a larger construct, which certainly begins with logic, but ends with mysticism. It is certainly true that Russell denied the authority of organized religion all his life and preferred to live a life outside prescribed dogmas.

Nevertheless, he recognized the essential mystery that surrounds life, both in its particular representation in the life of humankind and in the larger sphere, namely, in the life of the universe. It is precisely this mysticism that disallowed him an ultimate denial of God’s existence, and therefore Russell never called himself an atheist; rather he labeled himself an avowed agnostic, or someone who does not know, and cannot know, whether God exists or not.

Thus, in philosophy he found a quest far greater than that embodied by religion or science, and he described this process in Mysticism and Logic.

 

The photo shows, “New York Movie,” By Edward Hopper, painted in 1939.

Milk And The Milking Industry

I hate milk. I find many of the recipes in this book frankly loathsome, were I to try them, which I won’t. On the other hand, I like science and history (and ice cream). So despite my stomach churning at some of the recipes and descriptions, I actually enjoyed reading this book.

Milk begins with history—the history of milk and milk animals around the globe. Americans, of course, focus nearly exclusively on cows and cows’ milk, but the author, Anne Mendelson, points out that on a global scale cows are a relatively recent and relatively uncommon source of milk and milk products.

She mixes this history with science—the very different composition of different types of milk, along with the difference in products that result both from different types of milk and from how that milk is treated, both by culturing with microorganisms and by mechanical alteration. The result, of course, is a huge range of milk products, ranging from the simple (naturally cultured yogurt; simple cheeses) to the complex (modern milk as sold in the supermarket; aged cheeses; butter).

Milk then moves to recipes, grouped into those based on fresh milk (and cream); yogurt; cultured milk (and cream); butter and true buttermilk; and fresh cheeses (aged cheeses are beyond the scope of the book).

Mendelson offers various recipes in each grouping, interspersed with more history and science, typically woven around the recipe immediately at hand. This is a successful approach for engaging and educating the reader (even if, as I say, I find most of these somewhere between not-appealing and nasty, with the exception of some sweetened items).

All of this is well written. Milk is an excellent book and I will be sure to use my additional knowledge to be even more of a bore and chore at cocktail parties. But for me Milk was primarily a thought-provoking book, and not really about milk, or food. Initially, my thought was sparked by Mendelson’s measured and even-handed approach to controversies such as “raw” (i.e., unpasteurized) milk, which is largely forbidden by regulation in the United States.

Mendelson notes that raw milk probably isn’t the wonder food that some think, but neither is it impossible to safely produce and sell raw milk, despite what government functionaries and their allies in the food and health establishment, the “experts,” are always telling us.

Mendelson also covers the analogous controversy over fat in milk and butter—that is, “experts” told us that milkfat was to be avoided on peril of our health and our lives, and now we are told that is false.

We are told, instead, that those “experts” wholly misunderstood and grossly simplified the actual chemistry of milk and that they knew nothing at all, despite their claims to the contrary, about how it actually affects the human body. We are now told that milkfat is good for cardiovascular health and keeps us thin, after literally decades of being told the opposite, and anyone who disagreed being considered some combination of demon and fool. Again, the “experts” keep cropping up.

What drives their wholly incorrect conclusions, and the demand for universal submission to them?

We all have personal familiarity with the costs of these wrongheaded directives. Some costs are merely reductions in personal utility, which seem unimportant, but are not nothing, even if they are not easily captured in statistics. For example, my grandfather spent decades being forbidden by his wife, for his own good, to eat both butter and eggs, which he loved, and instead being required to eat “healthy” margarine, which he hated.

As Mendelson points out (and as has become even more clear since this book was published in 2008), it turns out that all this, also, is entirely false. But my grandfather died before the supposedly certain science of experts was discredited, so his utility remained lowered.

These examples, taken from the relatively narrow area of milk products, are just one set of many examples in all areas of life of how we are constantly told that we must do something because “experts” say to do it.

But as Milk shows, “experts” have a miserable track record in their attempts to direct the lives of Americans, whenever they go beyond common sense (e.g., don’t drink clearly contaminated milk) and presume to tell us what we must do, usually despite basing their Moses-from-the-mountaintop recommendations on contradictory, minimal or zero evidence.

As a result, millions of people have died or suffered—solely because of what “experts” told us, frequently with the cooperation of officious ministers of the state, who adopt these recommendations and penalize or criminalize failure to comply. But why does all this happen, over and over again? Why don’t the “experts” learn to advocate public policy with humility and caution?

Examples beyond milk are legion. Sticking with food examples, the “experts” told us all that a low-fat diet was the way to go, for good health and long life. Now that’s considered false, and the obesity epidemic is now blamed largely on the carbohydrates we were urged to eat while avoiding fat. And last week the “experts” performed a 180-degree about-face on the topic of feeding peanuts to infants.

I’ve had five children in the past nine years, and we were cautioned with the direst of warnings to never, ever feed them peanuts until the age of three. It was presented as the Gospel truth that we must do this, or we would be terrible parents endangering the lives of our children. During the twenty years of this recommendation, peanut allergies increased by 500%, and peanut allergies are now the leading cause of food-related anaphylaxis and death in the United States.

Now we are told to immediately do the opposite, and feed small infants peanuts, in order to avoid the very thing created by the thing we were told to do earlier.

Why, you may ask, do “experts” continually issue edicts that direct Americans what they must do, or face penalties, and why do they never show any shame, much less face any consequences, when they are proven wrong? It seems to me that to answer that question we have to ask why people, in any walk of life, whether “experts” or not, advocate any particular public policy.

(By “public policy” I mean a course of action that is either strongly recommended, in that failure to follow it is said to be certain to have material deleterious consequences to a specific individual or to some larger segment of society, or a policy that is enforced by state coercion).

Five possible non-exclusive reasons occur to me. I think that every person advocating a public policy is driven by one or more of these reasons, and by nothing else (unless they are insane or using a Magic 8-Ball to choose advocacy positions). Experts are merely people who supposedly have more knowledge; they are subject to the same analysis of their reasons. Those reasons are, in no particular order:

1) A detached, purely objective analysis of alternatives has led to a conclusion that the advocate believes is best for society. Let’s call this the “philosopher-king” reason for public policy advocacy.

(We can ignore for current purposes whether one can accurately determine what is “best for society,” as well as distortions to and failures of objectivity such as confirmation bias and tribalism, together with logical fallacies such as appeal to authority, to which “experts” are particularly prone, but which don’t change that the reason for choosing a position is objective analysis).

2) Money. This can mean direct payments, in the sense of corruption. But it more typically means that the advocate will economically benefit if a particular public policy position is adopted. What I mean here is not public policy effects that lift everyone; that falls under #1. Rather, I mean individualized benefit—for example, job promotions, grant money from the government to the advocate, or even things like luxury travel to conferences relating to a public policy.

This also includes simple economic security, such as job security—ensuring continued employment that might otherwise be at risk. It also includes third-party benefit, such as that resulting from nepotism.

If you asked a random person on the street, this is the only one of the drivers here that would likely be named. But it is probably the least important, despite what economic determinists and Marxists tell us. Sure, everyone wants money, but I think it’s rarely the most important driver of why someone desires a particular public policy.

3) The desire to feel superior to other people. This is a mostly overlooked driver of a huge amount of human action. Human nature being what it is, we all want to feel superior to others, and even better, to be recognized by others as superior, and even better, to be publicly so recognized. (See, for example, C.S. Lewis’s famous metaphor of “The Inner Ring”).

One way to achieve feeling superior is to advocate a public policy and attribute a moral component to it, which necessarily implies that the advocate is superior and those who oppose him are morally deficient and therefore inferior. (Fame is part of the feeling of superiority—technically, it’s not the exact same thing, but for these purposes I think the desire for fame and the desire to feel superior can be lumped together.)

The desire for superiority can be narrow – Professor X may want to feel superior to Professor Y in his same small department. Or it can be broad—Person X may want to feel superior to vast swathes of the deplorables in society as a whole. The refrain “we’re doing this for the children” is perhaps the best indicator that the real reason behind a policy position is the desire to feel superior.

4) The desire to control and have power over other people. Again, this is a mostly overlooked driver of human action. It is highly pleasurable to most people to push others around, whether they admit it or not.

Bullying is the most commonly remarked upon manifestation of this tendency, but it occurs everywhere in human relations, and in political systems—see, e.g., Orwell’s depiction of Communism in Animal Farm. Pushing others around is often justified by the pusher as doing something “for their own good,” when it is really the psychological good of the advocate that is being advanced.

5) The desire for transcendence—for meaning in one’s life. This is often the most important reason anyone does anything, and public policy advocacy is no exception. Sometimes the advocacy itself provides the meaning—“I am doing something.” But sometimes the meaning is a second-order effect: the advocacy itself does not provide transcendence, but a particular person may find transcendence through a larger frame, of which the advocacy is merely a manifestation.

For example, religious belief may dictate a specific public policy, such that advocating the policy is implementing the framework that gives the advocate’s life its meaning. A pro-life activist is not given transcendence simply by fighting against abortion, but because that is part of a larger framework giving his life meaning.

Religious transcendence is easy to understand and identify, since religion and transcendence necessarily go together. The innate nature of the human desire for transcendence is therefore best seen not in religion, but in religion substitutes—notably Communism, which was (and is) only the progenitor of a wide range of mostly left-wing religion substitutes, including environmental extremism and certain brands of feminism.

As Chesterton did not say, but should have, “When man ceases to believe in God, he does not believe in nothing, he believes in anything.”

As can be seen from this, there is very rarely any such thing as purely disinterested advocacy of a public policy. If you listen to those who publicly and loudly advocate public policies, they would have you believe that #1 is the only possible reason they advocate any particular public policy.

In fact, numerous people in this media-centric age have made a living out of casting themselves as impartial philosopher-kings, advocating public policies for supposedly purely rational, disinterested reasons. So, any time a Neil deGrasse Tyson or Bill Nye pushes a public policy (usually left-wing, although that’s not germane to this discussion, but may be indicative of something, as I discuss below), they claim to be driven by pure objective reason, but they are in fact driven by some combination of these factors.

The trick is finding out which factors are dominant, and using that to determine whether the advocacy has any merit for society at large. Factors #2 through #5 are in essence inapplicable or antithetical to the interests of society at large, so if any combination of them dominates, the advocacy is necessarily defective and should be ignored (and the advocate held in public contempt and, preferably, punished by society).

Let’s take Bill Nye’s position on global warming. He likes to call himself the “Science Guy,” and he got his start teaching children scientific facts through clever demonstrations of science experiments in educational programs. More recently, though, he’s taken aggressive public stands on public policy issues, of which global warming is only one (others include pushing for abortion rights and endorsing Barack Obama for political office). Why has he done this?

One possibility is that he has analyzed these policies and decided they’re objectively correct, and the world can benefit from his thoughts, without any benefit to him. Maybe.

He refuses to state his public policy advocacy rationales with any specificity, other than the usual vacuous and false “all the experts say global warming is an existential threat and we must pay any cost, immediately, to address that threat,” and he maintains the usual refusal to debate or even acknowledge competing viewpoints. So it’s hard to tell if he has done an objective, internally consistent analysis at all, though there is no indication he has.

But even if he has done so and that’s a reason for his advocacy of a global warming alarmist position, it’s only one reason. With respect to the other four possible reasons,

(a) Nye may or may not get more money as a result of his advocacy, but he definitely risks no financial penalty, since all the platforms on which he appears are controlled by those on the Left, who agree with him, and he gets job security because he can cry “persecution” if he is denied any job;

(b) he most definitely gets to feel superior, and to be repeatedly lauded as such on numerous public platforms, while making denigrating comments about those who disagree with him and being applauded for them;

(c) he most definitely gets to control and have power over other people, by the nature of being a recognized Important Person whose advocacy is relevant, and by the declared intent of his preferred policies being massive direct control over billions, including by direct mandate and by limiting their life choices by making energy more expensive;

and (d) he probably achieves meaning in his life by his advocacy, although, such motives being largely internal, this is hard to tell without more evidence from Nye himself. But it is common for the successful (especially atheists like Nye) to seek, in the twilight of their careers, a larger meaning and a way to feel like they “made a difference,” and so transcendence is likely a reason for advocacy in his case—perhaps the overriding reason.

Therefore, based on this analysis, we can conclude that Bill Nye’s advocacy demanding public policy changes in response to global warming is largely or wholly worthless, since it is largely or wholly based on rationales that do not apply to society as a whole, but merely advance Bill Nye’s personal interests.

The same analysis applies, actually, to nearly all global warming alarmists, only more strongly. One frequently hears global warming alarmists jeer nervously at those who oppose their analysis and prescriptions, with some variation of “why would the experts claim it’s a problem if it’s not?”

These four reasons are why. Massive amounts of money, all around the globe, flow only to those pushing global warming alarmism; penury and obloquy are the lot of any scientist who dares to suggest not merely that global warming is a myth, but even that cost-benefit analysis should apply, or that it is possible we don’t actually understand climate at all (see, e.g., Roger Pielke).

(This is exacerbated by climate science being the short bus of science; the truly gifted go into areas like physics and have more options for making a living). The superiority that oozes off alarmists is so thick it nearly assumes physical form. All the solutions of global warming alarmists involve massively increasing power over ordinary citizens, by both government and the advocates of political action based on global warming alarmism (see, e.g., the common demand that people who disagree with global warming alarmism be put in prison, or in some cases, the public demand that they be killed).

And, most of all, global warming alarmism is very clearly a substitute religion, providing transcendence to its advocates, together with all the indicia of a religion, from sins to redemption to priests to indulgences.

So, while it appears plausible to a neutral observer (say, me) that modifying the atmosphere could have deleterious effects, and an objective analysis with that as a starting point would be nice, we can conclude that the alarmist industry as it exists is not primarily, or even to a significant degree, driven by objective analysis, but is almost wholly, or wholly, driven by motives personal to the advocates, who should be held in contempt.

A very few advocates for public policies to address global warming escape this analysis, notably Bjørn Lomborg, but they are few indeed (and the treatment of them by the alarmist industry merely reinforces the above analysis).

Now, not all examples of “experts” pushing public policy are as baldly self-interested as global warming alarmists; they are probably at the extreme range of scientific unreliability due to the accrual of several factors other than rational objectivity. For a less extreme case, let’s take proponents of not feeding children peanuts before the age of three. Probably, the advocates of that public policy were mostly driven by factor #1, objective analysis.

They were just wrong, and most likely fell into various forms of bias and distorted thinking that made their conclusions false. Money was probably not overly important (unlike in the drive for fat-free foods, which was corrupted by money from the sweetener lobby). The other factors may have been important, overall or in certain cases; it is hard to tell.

Certainly, none of the advocates who were so wrong, and killed children with their erroneous advocacy, felt any need to express sorrow or shame, much less face any kind of punishment. This suggests that the desire to feel superior to others and to control them is relevant, because a normal person would feel compelled to abase himself for his error and the harm he caused—but that would undercut the feeling of superiority and control, so it is absent in practice, unless compelled, which it never is for “experts.”

None of this is to deny that it is possible to go too far the other way. Sometimes it is possible to base public policy on objective analysis, and cranks who reject all scientific evidence, from those who link vaccines to autism to those who think crystals have healing power, are just as subject to factors other than objectivity.

For example, someone who won’t vaccinate his children is subject to failures of #1 (in that the costs to children from not getting vaccinated are greater than even the claimed benefit), and is driven largely by #3 (superiority) and #5 (transcendence).

And there are probably quite a few public policy positions that don’t attract lots of public attention, and are therefore more likely to be based on objective analysis and less biased by other factors (though one can feel superior to, and desire to control, a small group as well as a large one).

Finally, this overall problem, of defective reasons being the real driver behind public policy advocacy, is less of a problem with the reality-based community, that is, with conservatives.

Liberals are more prone to derive their personal sense of meaning from politics, which is one of the reasons they try to politicize all areas of life. If you don’t advocate any public policy, or are neutral on what public policy will be chosen, you do not receive the positive reinforcement yielded by these drivers.

You have to get your personal utility, and your meaning, somewhere else. Conservatives are less likely to focus on advocating public policies, and when they do, they are generally less subject, philosophically, to the temptations of control and transcendence (though, perhaps, not less subject to superiority).

Nonetheless, all people should be subject to the same analysis whenever they advocate for any public policy. And I conclude that trusting “experts,” unless a clear-eyed evaluation of their actual reasons for their positions is first made and the result is totally clear, is a fool’s errand.

Charles is a business owner and operator, in manufacturing, and a recovering big firm M&A lawyer. He runs the blog, The Worthy House.
The photo shows, “The Cigar” by Peter Baumgartner, painted in the latter half of the 19th century.

The Idea Of Technology

Whenever people are trying to define the modern age, there’s an inevitable phrase that gets tossed around. We hear it all the time – “We are an age of technology.”

And when people are asked what this phrase means, they invariably generate a list – cars, televisions, space probes, computers, the microchip – all things that were mostly science fiction just a hundred years ago. How did we come so far, so quickly?

But are we technological because we have more gadgets than, say, the ancient Egyptians, who, after all, did build the pyramids? No. Our culture is different from that of the ancient Egyptians. How so?

Our age is technological not because of gadgets, but because of the idea of technology. The gadgets are a mere by-product. The way we think is profoundly different from all previous human civilizations. We perceive things in a systematic way. We like to build conceptual structures. We like to investigate and get at the root causes of things.

We like to figure out how things work. We see nature, the earth, the universe, as a series of intersecting systems. And this difference is the result of technology.

Essentially, we are dealing with two Greek words: techne and logia. Techne means “art,” “craft,” or “handiwork.” But logia is more interesting. It means “account,” “word,” “description,” and even “story.” It is the root of other important words in English, such as “logistics” and “logical.” And it even reaches into the spiritual realm, where “Logos” is intimately connected with the mystery of God in Christianity, where God (Logos) is made flesh in Jesus Christ.

Therefore, technology is not really about gadgets. The word actually means “a description of art,” or “a story of craft, handiwork.” Anything we create is technology, be it the microchip, a film, a novel, an airplane, or a poem.

But this is only the first layer. We need to dig further. Why do we use a Greek word in the first place? This question lets us dig right down to the foundations.

The word is Greek because the idea is Greek. This is not to say that other cultures did not have technology; they certainly did; the Pyramids are proof enough of that, as are the Nazca lines in the desert. However, we have already established that technology is not about gadgets, or objects that we create. It is a particular mind-set.

Technology is visualizing the result, or perhaps uncovering that which lies hidden within our imagination. It really is still about giving an account of art, about what we can do with our minds.

But how is all this Greek?

The idea of technology was given to us by one specific person – the Greek philosopher Aristotle (384-322 BC).

At the age of twenty, Aristotle found himself in Athens, listening to the already famous Plato (428-348 BC). But the pupil would become greater than the master. Interestingly enough, Aristotle too had a famous pupil – Alexander the Great. Aristotle certainly had the ability to transform the way people thought – down to the present.

It was Aristotle who stressed the need not only for science, but for a conceptual understanding of science. It was not enough just to be able to do things – the sort of craftsmanship passed down from father to son in his own day, and in many parts of the world today.

It was important to understand how things were, and how they functioned the way they did. It was Aristotle who taught us to break down an object into its smallest parts, so we can understand how it is built and how it operates. Where would science be today without this insight – which we now take as common sense?

But before Aristotle, it was not common sense. The common sense before his time was to accept things the way they were, because the gods had made them that way, and who were we to question the will of the gods. This was the pre-technological mindset.

Aristotle, like Plato before him, taught that nature and human beings behave according to systems that can be recorded and then classified, and understood and then applied. These categories provided mental frameworks within which we could house our ideas.

Therefore, if nature is a system (and not mysterious and unknowable), then it can be understood. And if it can be understood, it can be controlled. And if it can be controlled, then we can avoid being its victims.

Our ability to classify, categorize, and explain – in short, our technology – is the invention of Aristotle. Before he came along, we were only groping in the dark – if we dared grope, that is.

The photo shows, “Toronto Rolling Mills,” by William Armstrong, painted in 1864.

Organic Food Is A Myth

Organic food is a myth. When people talk about “organic” food, they believe that it is free of pesticides and chemicals – or, worse, that it is wonderful for the environment. They could not be more wrong.

Organic food uses pesticides and chemicals – and it also harms the environment.

It is true that organic farming restricts the use of some pesticides; however, it allows others. This does not make organic food particularly better than food grown on a “conventional” farm. After all, the organic food program doesn’t address food safety.

Thanks to the environmentalist movements of the 1960s, farmers have had to update their chemical pesticides – leaving a negligible difference between the pesticides used on “organic” food and on “conventional” food. For example, the pesticide Rotenone has danced in and out of being legally “organic.”

As a general pesticide, it kills more organisms than the intended pest, causing a great deal of collateral ecological damage. Worst of all, Rotenone is linked with the development of Parkinson’s disease in human beings and other vertebrates.

It’s important to remember that just because a pesticide is naturally occurring doesn’t mean that it’s not toxic and harmful. Tobacco is natural, but is far from healthy.

But then why is it called organic food? If these “conventional” foods are just as good, why don’t they get certified as “organic” too?

The answer is the soil.

The popular alternative among “conventional” farmers is bombarding their crops with man-made fertilizer. When artificial fertilizer is applied, many of the microbes in the soil get killed off. This lowers soil fertility, but we compensate by just dumping in more nutrients. It is important to understand that crop yield is greater under this artificial system.

But this system is also far from perfect.

Meanwhile, contemporary organic farms are inefficient – very inefficient. So, to compensate, we rapidly expand our “organic” farmland by tearing down forests, draining wetlands, and clearing other ecosystems.

In fact, organic farms leave a larger carbon footprint. Not only because we tear down more ecosystems to compensate for their smaller yields, but because we need cattle dung to nourish organic farms. Those microbes love natural fertilizer (i.e., dung). As you may have guessed, we get that dung from cows. As any vegan will tell you, cows produce a lot of carbon. Thus, organic farms not only chop down forests that would absorb CO2, but they also require CO2-producing animals.

There isn’t any easy answer as to how we solve our agricultural problems, but one thing is certain. When the label says organic, it’s not talking about the apple. Organic foods are not that much healthier, and they are arguably worse for the environment.

The photo shows, “The Collective Farm Market,” by Fedot Vasilievich Sychkov, painted in 1936.