The Con Job of the Century?

Over the course of the past century, a number of truly awe-inspiring heists have been carried out by con artists, whose modus operandi is to exploit human frailties such as credulity, insecurity and greed. Con is short for confidence, for the con artist must first gain the trust of his targets, after which he persuades them to hand their money over to him. A con job differs from a moral transaction between two willing, fully informed trading partners because one of the partners is deceived, and deception constitutes a form of coercion. In other words, the person being swindled is not really free. If he knew what was really going on, he would never agree to invest in the scheme.

The “Ponzi scheme” was named after Charles Ponzi, who in the 1920s persuaded investors to believe that he was generating impressive profits by buying international reply coupons (IRCs) at low prices abroad and redeeming them in the United States at higher rates, the fluctuating currency market being the secret to his seemingly savvy success. In reality, Ponzi used money from new investors to pay off earlier investors, support himself, and expand the scheme by luring in more and more participants. More recently, Bernie Madoff managed to swindle his clients out of billions of dollars by posing as an investment genius who could deliver sizable, indeed exceptional, returns on their investments.

It is plausible that at least some of the early investors in such gambits, who are paid as promised, suppress whatever doubts may creep up in their minds as they bask in the splendor of their newfound wealth. But even those who begin consciously to grasp what is going on may turn a blind eye as the scheme grows to engulf investors who will be fleeced, having been persuaded to participate not only by the smooth-talking con artist, but also by the reported profits of previous investors. Eventually, however, the house of cards collapses, revealing the incredible but undeniable truth: there never were any investments at all. No trading ever took place, and all of the company’s transactions were either deposits or withdrawals of gullible investors’ cash.

Before a con artist is unmasked, nearly everyone involved plays along, either because they stand to gain, or because they truly believe. Sometimes the implications of having been wrong are simply too devastating to admit, and these same psychological dynamics operate in many other realms where most people would never suspect anything like a Ponzi scheme. It is arguable, for example, that the continuous siphoning of U.S. citizens’ income to pay for misguided military interventions abroad constitutes a form of Ponzi scheme. If President George H. W. Bush had never used taxpayers’ dollars to wage the First Gulf War on Iraq in 1991 and to install permanent military bases in the Middle East, then Osama bin Laden would likely never have called for jihad against the United States. If the U.S. military had not invaded Iraq in 2003, then ISIS would never have emerged and spread to Syria and beyond. Such implications are deeply unsettling, and even in the face of mounds of evidence, most people prefer to cling to the official story according to which the 1991 Gulf War was necessary and just, while the terrorist attacks of September 11, 2001, were completely unprovoked, and all subsequent interventions a matter of national self-defense.

The series of bombing campaigns in the Middle East beginning in 1991 is plausibly regarded as a type of Ponzi scheme because the “investors” (taxpayers) have actually paid to make themselves worse off, not better off. Not only have the “blowback” attacks perpetrated in response to U.S. military intervention abroad killed many innocent persons, but the lives of thousands of soldiers have been and continue to be wrecked through dubious deployments abroad. Along with all of the blood spilled, much treasure has been lost. The more than $28 trillion national debt (as of June 2021) is due in part to the massive Pentagon budget, rubber-stamped annually by Congress, to say nothing of the many other “discretionary” initiatives claimed to be necessary for national defense. Afghanistan is a perfect example of how billions of taxpayer dollars continue to be tossed into the wind even as the formal U.S. military presence winds down. The War on Terror continues not because it protects the citizens who pay for it or helps the people of the Middle East, but because it has proved profitable to persons in a position to influence U.S. foreign policy.

One might reasonably assume that anyone who stands to enrich himself from government policies should be excluded from consequential deliberations over what ought to be done, and in certain realms, the quite rational concern with conflict of interest still operates to some degree. With regard to the military, however, there has been a general acquiescence by the populace to the idea that because only experts inside the system are capable of giving competent advice, they must be consulted, even when they will profit from the policies they promote, such as bombing, which invariably increases the value of stock in companies such as Raytheon. Throughout history, there has always been a push by war profiteers to promote military interventions, but Dick Cheney, who served as Secretary of Defense under George H.W. Bush and vice president under his son, George W. Bush, took war profiteering to an entirely new level. By privatizing many military services through the Logistics Civilian Augmentation Program (LOGCAP), Cheney effectively ushered in a period of war entrepreneurialism, beginning with Halliburton (of which he was CEO from 1995 to 2000), which continues today, making it possible for a vast nexus of subcontractors to profit from the never-ending War on Terror, and to do so in good conscience. The more people who have self-interested reasons for supporting military interventions, the more likely such interventions become.

With the quelling of concerns that conflict of interest should limit the persons who advise the president on matters of foreign policy, the formal requirement that the secretary of defense be a civilian rather than a recently serving military officer has been effectively dropped, with both James Mattis and Lloyd Austin easily confirmed as “exceptions” to the rule, despite the fact that not only did both have significant financial interests in promoting war, but each also had a full career in the military before retiring and being invited to lead the DoD. Military men are inclined to seek military solutions to conflict, which is undoubtedly why high-ranking officers are invited to join the boards of military companies, making Mattis and Austin textbook examples of “revolving door” appointments.

Arguably even more ruinous to the republic in the longterm than the rampant conflict of interest inherent to “revolving door” appointments between the for-profit military industry and the government has been the infiltration of the military into academia, with many universities receiving large grants from the Defense Department for research. Academia would be a natural place for intellectual objections to the progressive militarization of society, but when scholars and scientists themselves benefit directly from DoD funds, they have self-interested reasons to dismiss or discredit those types of critiques—whether consciously or not—in publishing, retention and promotion decisions. In addition to the institutional research support provided by DARPA (the Defense Advanced Research Projects Agency), successful academics may receive hefty fees as consultants for the Pentagon and its many affiliates, making them far more likely to defend the hegemon than to raise moral objections to its campaigns of mass homicide euphemistically termed “national defense”.

As a result of the tentacular spread of the military, Cui bono? as a cautionary maxim has been replaced by Who cares? People seem not at all bothered by these profound conflicts of interest, and the past year has illustrated how cooption and corruption may creep easily into other realms as well. Indeed, there is a sense in which today we have two MICs: the military-industrial complex and, now, in the age of Covid-19, the medical-industrial complex. This latter development can be viewed, in part, as a consequence of the former, for in recent decades the military-industrial complex has sprouted tentacles to become the military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex. Long before Covid-19 appeared on the scene, the Veterans Administration (VA) adopted pro-Big Pharma policies, including the prescription of a vast array of psychotropic medications in lieu of “talk therapy” to treat PTSD among veterans and to preemptively medicate soldiers who expressed anxiety at what they were asked to do in Afghanistan and Iraq. The increase in the prescription of drugs to military personnel generated hefty profits for pharmaceutical firms, allowing them to expand marketing and lobbying efforts to target not only physicians but also politicians and the populace.

Since the initial launch of Prozac in 1986, the pharmaceutical industry has become an extremely powerful force in Western society, made all the more so in the United States when restrictions on direct-to-consumer advertising were lifted by the Food and Drug Administration (FDA) in 1997. Already by 2020, about 23% of Americans (nearly 77 million out of a population of 331 million) were taking psychiatric medications, and those numbers appear to have increased significantly during the 2020 lockdowns, which took a toll on many people’s psychological well-being. As medications are prescribed more and more throughout every sector of society, drug makers exert a greater and greater influence on policy, even as the heroin/fentanyl overdose epidemic, caused directly by the aggressive marketing and rampant overprescription of opioid painkillers, continues on.

Just as the military industry is granted the benefit of the doubt on the assumption that it is helping to protect the nation, the pharmaceutical industry accrues respectability from its association with the medical profession. Who, after all, could oppose “defense” and “health”? In reality, however, for-profit weapons and drug companies are beholden not to their compatriots, nor to humanity, but to their stockholders. War and disease are profitable, while peace and health are not. The CEOs of military and pharmaceutical companies, like all businesspersons, seek to ensure that their profits increase by all means necessary, the prescription opioid epidemic being a horrific case in point. Just as academics may enjoy Defense Department funding, many doctors and administrators of medical institutions today derive essential funding from drug companies and the government, whether directly or indirectly. These connections are immensely important because many politicians receive generous campaign contributions from Big Pharma, which by now has more lobbyists in Washington, DC, than there are congresspersons, and not without reason. Formulary decisions at the VA regarding the appropriateness of prescribing, for example, dangerous antipsychotic medications such as AstraZeneca’s Seroquel to soldiers as sleep aids are made by administrators who are political appointees, as are public health officials more generally.

With a functional Fourth Estate, it would be possible to question if not condemn the conflicts of interest operating in the for-profit military and medical realms. Unfortunately, however, we no longer have a competent press. Throughout the Coronavirus crisis, this has become abundantly clear as alternative viewpoints on every matter of policy have been squelched, suppressed, and outright censored in the name of the truth, when there may have been ulterior motives at play. In fact, the complete quashing of any directives regarding non-vaccine therapies for mitigating the effects of Covid-19—including Ivermectin and Hydroxychloroquine—may be best explained by the simple fact that FDA emergency use authorization of vaccines in the United States is possible only when “there are no adequate, approved, and available alternatives,” as is stated plainly on the specification sheets for the Pfizer and Moderna vaccines.

Regarding the origins of the virus, early claims by some researchers that Covid-19 may have been produced in the virology lab in Wuhan and released accidentally were swiftly dismissed as “conspiracy theories.” Anyone who suggested this eminently plausible origin of the virus was immediately denounced by the media and deplatformed or censored by the big tech giants. “Gain-of-function” research, often funded by the military, involves making existent viruses deadlier to human beings and is said by its proponents to be necessary in order to be prepared for future natural pandemics or in the event that some enemy might use such a virus as a bioweapon. The latter is a familiar line of reasoning among military researchers, invoked also (mutatis mutandis) in nuclear proliferation and the military colonization of space: we must develop the latest and greatest nuclear bombs and effect total spectrum domination of the galaxy before any other government has the chance to do so! Many of the scientists involved in these endeavors may have the best of intentions, but that does nothing to detract from the propensity of human beings to commit errors.

In the case of Covid-19, the origin of the virus was deemed settled because Dr. Anthony Fauci, an ardent apologist for gain-of-function research and the reigning public health guru in the United States, authoritatively insisted that the transition from bats to humans came about naturally. After Fauci’s pronouncement, it seemed a matter of common knowledge to “right-thinking” believers in The ScienceTM everywhere that the virus probably came from the wet market in Wuhan, where live animals were sold as ingredients for use in culinary delicacies such as bat soup. When the World Health Organization (WHO) looked into the matter, they appointed Peter Daszak to lead the investigation. But Daszak had in fact funded gain-of-function research by repackaging and distributing U.S. government funds through his firm EcoHealth Alliance. Needless to say, Daszak had every reason in the world to squelch any suggestion to the effect that he himself may have had something to do with the millions of deaths caused by Covid-19.

We do not yet know whether the virus had a natural or manmade origin, but if in fact U.S. taxpayer-funded research caused the pandemic and millions of deaths, then this would constitute yet another example of a government-perpetrated Ponzi scheme, rivaling and perhaps even surpassing the War on Terror in its negative consequences. We pay for gain-of-function research (determined by bureaucrats such as Anthony Fauci to be a good idea), and then we suffer the consequences when things go awry. Note that, just as Ponzi scheme perpetrators may begin as regular businesspersons before committing fraud, there is no need in the case of Covid-19 to invoke conspiratorial hypotheses. Many politicians who promoted and thereby helped to realize the 2003 invasion of Iraq may have been convinced that Saddam Hussein posed a grave danger to the world. Similarly, there may not have been a conscious intention on the part of anyone to let loose the SARS-CoV-2 (Covid-19) virus on the world. After all, it’s not as though incompetence among government bureaucrats is a rarity.

Whether accidentally or intentionally caused, disasters invariably pave the way for massive power grabs on the part of select persons advantageously situated. Once Iraq had been invaded, this served as the pretext for sacrificing even more blood and treasure as the quagmire intensified and spread to other countries. When the Covid-19 virus arrived on the scene, it became the pretext for a massive and abrupt transfer of wealth. Not only did much of the commerce of small businesses crushed by lockdowns migrate to companies such as Amazon and Walmart, but billions of taxpayer dollars have been poured into pharmaceutical firms.

The multi-trillion dollar Covid-19 aid packages included provisions for research and development, testing, and hospitals. But the most lucrative venture in all of this frenzy has been a vaccine program with universal aspirations. The U.S. government funded the development of the Covid-19 vaccines, and now that they exist, President Biden has purchased 500 million more doses of the Pfizer product to donate to other countries. The global propaganda campaign to vaccinate everyone everywhere with elixirs initially touted by their developers as having up to 95% efficacy has likewise been paid for by governments. It was unclear from the initial press releases about the spectacular new vaccines what efficacy actually meant, as there was a fair amount of equivocation regarding whether the treatments would confer immunity and prevent transmission of the disease or simply lessen the severity of symptoms. After millions of persons had already been vaccinated, it emerged that the reports of 95% efficacy were at best misleading and at worst fraudulent, for the reported percentages were relative risk reduction (RRR) rates, which reflect outcomes only for the small proportion of the population vulnerable to the disease. When the rates are calculated for the general population, the vast majority of whom are not vulnerable to Covid-19, it turns out (as those who declined the vaccine had already surmised on the basis of the survival statistics) that the absolute risk reduction (ARR) rates of the Pfizer, Moderna, AstraZeneca, and Johnson & Johnson vaccines are quite low, to be precise: 0.84%, 1.2%, 1.3%, and 1.2%, respectively. Nonetheless, aggressive campaigns to require vaccine passports of citizens as a condition of their resumption of normal life are everywhere on display.
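
To see concretely why the two figures can diverge so starkly, here is a minimal sketch of the standard calculation, using entirely hypothetical trial numbers rather than any company’s actual data:

```python
# Hypothetical two-arm trial, for illustration only (not any vaccine's actual data):
# 20,000 participants per arm; 8 infections among the vaccinated, 160 in the placebo group.
vaccinated_n, vaccinated_cases = 20_000, 8
placebo_n, placebo_cases = 20_000, 160

risk_vaccinated = vaccinated_cases / vaccinated_n  # 0.0004, i.e. 0.04%
risk_placebo = placebo_cases / placebo_n           # 0.0080, i.e. 0.80%

# Relative risk reduction: how much the vaccinated arm's risk shrank
# relative to the placebo arm's risk during the trial window.
rrr = 1 - risk_vaccinated / risk_placebo           # 0.95 -> reported as "95% efficacy"

# Absolute risk reduction: the raw difference between the two arms' risks,
# which stays small whenever few trial participants were infected at all.
arr = risk_placebo - risk_vaccinated               # 0.0076 -> 0.76 percentage points

print(f"RRR = {rrr:.0%}, ARR = {arr:.2%}")         # RRR = 95%, ARR = 0.76%
```

Under these made-up inputs, the same trial yields a headline relative figure of 95% alongside an absolute reduction of less than one percentage point, which is the gap at issue above.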

A clue that the well-being of patients is not at the forefront of the minds of those running the “vaccinate everyone” campaign has been the encouragement of pregnant women and children to undergo vaccination, though neither group is at serious risk from the virus and neither group was included in the trials used to secure emergency authorization. Even more remarkably, against all established science on immunology, the idea that persons who have already recovered from the disease must also “get the jab” has been aggressively promoted all around the globe. Judging by the media coverage, the reason for insisting that persons who were already infected with and have recovered from Covid-19 must also be vaccinated is supposed to be that people can become reinfected with the virus. That line of reasoning, however, is refuted by the statistics for reinfection. As of June 2021, out of nearly 180 million cases of Covid-19 worldwide, there were 148 confirmed cases of reinfection. Studies recently published in Nature and by the Cleveland Clinic conclude that vaccination offers no benefit to previously infected persons.

In the build-up to every new war, many people who do not stand to benefit from the intervention and may even be harmed by it often succumb to the propaganda and enthusiastically take up the cause. In the current crisis, the false dichotomization into two exhaustive and mutually exclusive categories, the enlightened science lovers and the anti-vaxxers, is also part of a propaganda campaign. The persons who have declined vaccination, either because they already survived Covid-19, or because they prefer to wait for longterm safety data and do not believe that the possible benefits outweigh the unknown risks, are dismissed as crackpots, when in fact they are simply being prudent. Yet the media persists in propagating a misleading depiction of vaccine hesitancy in this specific case as proof of hostility toward science. This sort of polarization of the populace is, needless to say, on display during wartime as well, when anyone who dares to oppose a military intervention is depicted as a supporter of a tyrant abroad or an irrational pacifist or, when all else fails, a simple traitor.

It would be incredibly naïve to fall prey to the idea that pharmaceutical executives are somehow philanthropic, for they command enormous salaries for maximizing their stockholders’ profits. In 2020, Pfizer CEO Albert Bourla enjoyed a 17% increase in compensation, to $21 million, while Moderna’s CEO, Stéphane Bancel, became a billionaire. The pharmaceutical industry and the military industry, despite comprising publicly traded companies, are prime examples of “crony capitalism”, benefiting as they do from large infusions of cash from the government, which is allocated by bureaucrats, many of whom have career and other financial interests at stake. Moreover, the funding links between the military and the public health and pharmaceutical sectors form a tangled web. Not only did the Department of Defense receive a chunk of the Covid-19 rescue packages, but gain-of-function research has been paid for by military institutions. Indeed, much of the funding provided to Peter Daszak for redistribution by EcoHealth Alliance derived from the U.S. Department of Defense.

Both the for-profit military and for-profit pharmaceutical industry now use the mainstream media as a propaganda outlet to further the interests of their shareholders. Even the independent media have been infiltrated by pro-military and pro-pharma voices, which is why falsehoods such as “Saddam is in cahoots with Bin Laden and has WMDs!” and “Lockdowns save lives!” are able to gain such traction among the populace. That liberty-restricting policies should be lifted only on the condition of vaccination requires people to believe that the mitigation policies were both necessary and effective. But in the United States, the differences in outcomes in various states do not appear to depend on the timing or extent of lockdowns. Nonetheless, just as the mass surveillance and collection of people’s private data was accepted by many as a necessary part of the War on Terror, many persons with no financial interests at stake now rally on behalf of Big Pharma for universal vaccination.

The global propaganda campaign to require people to show health papers or a “vaccine passport” in order to participate in human society—to travel, dine out, shop or even gather together in groups—reveals that the mistakes made by a few actors are being seized upon to exert more and more control over the population. The mass surveillance of Americans was accepted by many as necessary, given the potential dangers of factional terrorism, and now, having spent more than a year whipped up by the media into a paralyzing state of fear of a virus which kills less than 1% of the persons it infects, many citizens appear willing to accept what influential globalists have been insisting must be “the new normal”. This is a grave mistake.

It is too early to know how this unprecedented chapter in human history will end, but the trends are not encouraging. As countries continue their serial lockdowns, travel restrictions, masking, testing, and quarantine requirements, they deepen the divisions already on display, making it seem more likely that some form of apartheid state with totalitarian qualities will emerge. Does any government have the right to force its citizens to undergo a medical treatment for which, according to all available statistical data, they have no need? Why are universities requiring vaccination as a condition of enrollment and employment? Why are more doctors not rising up to challenge the aggressive push to vaccinate everyone everywhere with an experimental treatment? There is no medical basis whatsoever for requiring previously infected persons to undergo vaccination, which has never been demanded in the case of any other disease.

What is at stake is not merely inconvenience, and the solution is not, as some liberty lovers have suggested (if only facetiously), to acquire a forged vaccine passport. We should reject in the most categorical of terms the very idea that anyone anywhere should be required to prove his health status to anyone else and that anyone anywhere should be compelled to undergo a medical treatment against his own will—whatever his reasons may be. One’s medical choices affect one’s health, well-being and body, which no government can be said to own. To relinquish one’s right to one’s own body is to render oneself the property of a tyrannical state. If citizens permit the government to strip them of their right to make decisions about how to lead their very own lives, then they will have been fleeced far worse than the victims of the most mercenary Ponzi scheme, having paid with their freedom for their future enslavement.

Moral Rhetoric vs. Reality

Philosophers tend to divide normative theories of morality into two broad categories: deontological and teleological. Deontological theories prioritize right action over good outcomes. If an action is wrong, then it is intrinsically wrong, regardless of the consequences which may ensue. The Ten Commandments and Kant’s Categorical Imperative are classic examples of deontological theories, and the libertarian non-aggression principle (NAP) is another one: Do not initiate violence against any person or damage or steal his property. Teleological theories, in contrast, define rightness in terms of goodness. One determines what to do in part—if not exclusively—by considering the likely outcomes or consequences of one’s prospective action.

Arguably the most famous teleological theory is utilitarianism, articulated by British thinkers Jeremy Bentham and John Stuart Mill in the late eighteenth and nineteenth centuries. According to the simplest formulation of utilitarianism, what one should do is always act so as to maximize the good outcomes (happiness or pleasure or something else positive—Bentham and Mill called this “utility”), and minimize the bad outcomes (unhappiness or pain or something else negative) for the greatest number of people. Without delving too deeply into what consistently applied utilitarianism would actually entail, the idea seems prima facie reasonable to many, and is appealing to “social justice warriors” and others who believe that the government has played and should play an important role in improving the lot of the citizenry through engineering the society in which they live. This basic outlook informs socialist economic theories according to which wealth should be redistributed so that the goods of society are shared rather than “hoarded” by the small percentage of the population comprising the elites.

The theoretical problem with utilitarianism is that there is no hard limit on what can be done to a few people in the name of the net good of the greater group. Everything is, in principle, permissible, depending only on the context and likely consequences. If torturing or killing one innocent person will save the rest of humanity, then it may in fact be the right thing to do, according to utilitarianism. The hypothetical scenarios used to elicit utilitarian responses tend to be highly simplistic, such as the “Trolley problem” discussed in many college ethics courses. One version of the Trolley problem involves a conductor who must decide whether to kill five people (say, senior citizens) on one track, or to divert his car to another track and thereby kill three other people (say, toddlers). Those who devise such thought experiments are attempting to isolate the variables, rendering it possible to gauge sympathies for or against utilitarianism in spite of the inherent complexities of reality.

Because human beings live in societies, the political realm abounds with utilitarian-esque rationalizations for any- and everything. Currently many of those calling for universal vaccination against COVID-19 are reasoning as utilitarians when they presume that the relatively small number of outlier deaths and severe harm caused to a few of those vaccinated will be vastly outweighed by the lives saved. Those who decline vaccination are denounced in the harshest of terms as “selfish,” when in fact they may simply disagree with either the projected result (that millions of people will be saved from the virus and few killed by the vaccines) or else the risk calculation in their own case, based on the statistical data for COVID-19 vulnerability and the complete absence of data on longterm vaccine side effects. That competent individuals alone should make determinations of which risks to assume is a deontological position, denying as it does that “the greater good” is a sound pretext for stripping persons of their liberty and right to control their own body. Forced vaccination would constitute a flagrant violation of the libertarian’s non-aggression principle, so for libertarians who support universal vaccination, the only consistent approach is to persuade others to join them in rolling up their sleeves.

On the economic front, one occasionally finds people today explicitly asserting that humanity would be much better off, for example, if all of Amazon founder Jeff Bezos’s massive wealth were taken from him and used to put an end to world hunger. The people who make such suggestions (when they are serious) appear to assume that the accumulation of wealth is a zero-sum game, and they reject the “trickle-down” economic theories which may inform a more liberty-forward approach. Supporters of a socialist agenda are wont to ignore the lessons of failed experiments such as that of the former Soviet Union, maintaining that if only socialism were implemented correctly, then the world would be a better place. Needless to say, the persons to be harmed in such hypothetical scenarios tend not to agree with what would be the sacrifice of themselves or their property for the greater good of everyone else. Senators Bernie Sanders and Elizabeth Warren, for example, have been known to take aim at Bezos despite the fact that each owns multiple houses but neither offers them (as far as I know) as shelter to persons worse off than themselves. Critiques of the “failure” of Amazon to pay any taxes are especially odd coming from the very legislators who write and ratify laws which permit companies to take advantage of loopholes in order to avoid paying taxes.

In any case, the same critique, that our society tolerates “obscene” disparities in wealth, can be directed toward anyone whose material conditions are significantly better than anyone else’s—which is arguably everyone in the United States, all of whom are better off than most of the people inhabiting third world countries—and yet chooses not to redistribute his own property. As much as caricatures may abound of libertarians as rich old white men unwilling to share their wealth with the descendants of the victims whom their great-great-grandparents oppressed, no one agitating for the mass redistribution of other people’s wealth need be taken seriously unless they make themselves into the extraordinarily rare example of someone willing to invite everyone worse off than they are into their own home. Until their comportment is modified to match their rhetoric, the shrill virtue-signaling of Bezos haters and others of their ilk can be safely ignored.

Needless to say, such conflicts between moral rhetoric and reality are ubiquitous. People who denounce manmade climate change sometimes fly to global warming conferences in private jets. Nor do those who incessantly warn about global warming typically renounce their private cars, even when they live in cities with efficient public transportation systems. People who express concern about environmental pollution and the ocean life blighted by plastic waste may nonetheless continue to imbibe water from single-use bottles. That moral rhetoric and reality so often diverge illustrates the practical problem with implementing anything even vaguely approaching utilitarianism, a problem metaphorically expressed in George Orwell’s Animal Farm. The truth is that human beings, as a matter of fact, care much more about themselves and their family members and friends than random compatriots. Moreover, they largely ignore the plight of persons beyond their own borders, even when the taxes levied on their personal income have been used to generate widespread misery abroad. It is utilitarian-esque reasoning when someone claims that wars may harm some people but on balance serve the aims of democracy and peace. Most of the victims of wars over the past century have been unarmed civilians, not soldiers, but their “sacrifice” is nonetheless reimagined by those who support every new war proposed as having contributed to the establishment of a better world.

The prevalence of this type of rhetoric, and its associated pseudo-moral rationalizations for policies which harm or even destroy other people, explains bizarre phenomena such as Speaker of the House of Representatives Nancy Pelosi’s public expression of gratitude to George Floyd for having been killed by police officer Derek Chauvin. Many people found Pelosi’s statement inappropriate and tone deaf, but she was essentially reciting a version of the same script which is rehearsed every single time soldiers are sacrificed needlessly and so-called collateral damage is “tolerated” in wars perpetrated abroad. Slogans such as “Freedom is not free!” are frequently slung about by military supporters, who assume that, on balance, the comportment of the U.S. Department of Defense has been good, even if mistakes are sometimes made, and even if a few “bad apples” emerge here and there to perpetrate the occasional atrocity, for example at My Lai or in the Abu Ghraib and Bagram prisons. Judging by their docile acceptance of the foreign policy of Bush, Obama, Trump and now Biden, most Americans have yet to acknowledge that the twenty-year “Global War on Terror” (GWOT) has been a colossal failure: politically, economically and, yes, morally. The only people to have benefited from the non-stop bombing of the Middle East are war profiteers. Some people are more equal than others.

The long entrenched dogma that, all things considered, the world is a better place because of U.S. military intervention abroad explains why citizens continue dutifully to pay federal taxes while delegating all policy-making decisions to the legislature, which in the twenty-first century flatly renounced its authority to decide when and where war should be waged. The AUMF (Authorization for Use of Military Force) granted to President George W. Bush in October 2002 has been invoked by every president since then to claim the authority to bomb anyone anywhere in the world where the executive branch of government has deemed such action desirable.

“We are good, and they are evil,” is a time-tested trope which allows government administrators, whether elected or appointed by those elected, to get away with anything, on the pretext that the evil enemy must be defeated, and the perpetrators of mass homicide are acting only and everywhere so as to protect their constituents. Or to spread democracy and save the world from a despicable tyrant, all of which are essentially equivalent, or so the rhetoric goes… In the lead-up to every new war, citizens, having been subjected to vigorous fear-mongering propaganda campaigns according to which their very lives are at stake, tend momentarily to forget that politicians are liars. They listen attentively as quasi-utilitarianism is trotted out yet again to secure popular support for bombing campaigns through soundbites such as: “The war will pay for itself!” “We will be welcomed with flowers as liberators!” “The conflict will be short—in and out—with minimal collateral damage!” When the real consequences prove to be nothing like those projected by hawkish “experts” with financial ties to military industry, the warmakers then revert to defending themselves by appeal to their good intentions.

War advocates are able to sleep at night not because of utilitarianism, according to which the rightness of a war is determined by its outcomes, which any rational and informed person must own have been catastrophic throughout the Middle East, but because they have another theory to whip out in their defense whenever their “good wars” have infelicitous or even appalling consequences. That framework derives from just war theory, specifically, the doctrine of double effect, according to which what really matter, in the grand scheme of things, are the warmakers’ own intentions. “Stuff happens,” explained former Secretary of Defense and sage epistemologist Donald Rumsfeld in assuaging concerns that the conditions on the ground in Iraq were chaotic, with monuments and museums being looted, persons murdered and maimed, robbed and raped, among other unanticipated results of the 2003 bombing campaign.

Policymakers such as George W. Bush, Dick Cheney, Condoleezza Rice, Paul Wolfowitz, and Tony Blair may assuage their conscience by professing the purity of their own, subjective, intentions: “We meant to do well!” Along these lines, ancient Greek philosopher Socrates reputedly quipped, “No one knowingly does evil,” by which he may have meant that everyone seeks what they regard as good and avoids what they regard as evil. What, after all, could they base their actions on, if not their own values? In other words, viewed at the level of individual action, “We meant to do well!” may hold true in the case of anyone who does anything, from the thief who steals to feed his family, to the serial killer who derives immense pleasure from destroying other people, to the warhawks and profiteers who persist in perpetuating and even expanding the War on Terror, though it has already destroyed or degraded the lives of thousands of Americans and millions of persons of color abroad.

That some people are more equal than others is assumed by anyone who claims to wish to even the economic playing field at home while altogether ignoring the plight of the millions of people who are not only not earning $15 per hour for their labor but in fact have been killed as the so-called collateral damage of wars supported or condoned by lawmakers with financial interests at stake. The forever war in the Middle East and Africa plods on with little protest, and some of the very people who vociferously demand justice for individual victims of police brutality such as George Floyd turn a blind eye to the plight of the thousands of victims of the bombing campaigns, despite the fact that the former can be said to derive in part from the latter. Not only does the Federal government set a highly visible example of how to resolve conflict through the continual perpetration of mass homicide, but police departments have been furnished with military equipment and are staffed in many places by veterans of U.S. wars, some of whom apply wartime techniques and tactics in combating crime.

With regard to the killing of persons of color within the United States, we have witnessed former President Barack Obama making public pronouncements on the outcomes of the George Floyd and Trayvon Martin cases, while declining to say anything whatsoever about his very own administration’s targeted killing of sixteen-year-old Abdulrahman al-Awlaki, a U.S. citizen incinerated along with a group of his friends by a missile launched by the U.S. government from a drone flying above Yemen in 2011. If presidents themselves can simply pretend that some of their very own victims never even existed, then it should not be all that surprising when Americans more generally follow their lead.

Self-styled progressives, for example, may agitate for the restriction of firearm possession domestically, while ignoring altogether the exportation of weapons in record numbers (since Obama’s presidency) to regimes and factions in Syria and other places where they are predictably used to harm human beings, primarily persons of color, on a completely different magnitude than occurs within the country where the weapons are produced. It is of course possible consistently to maintain, as do advocates of the right to bear arms, that guns are morally neutral but become implements of murder when wielded by murderers. But anyone who insists that gun possession leads to murder within the United States would seem to be committed, logically speaking, to the position that the many innocent persons killed abroad by U.S. weapons (whether by the U.S. military itself or by governments, factions or individuals armed by them) were, materially speaking, the murder victims of those who furnished the killers with the weapons. And yet, some (not all) of those who dispute citizens’ Constitutional right to bear arms are not only silent on the issue of weapons exportation but in fact complicit in enriching this industry and sowing the seeds for mass homicide abroad through their uninterrupted payment of federal taxes.

A similarly untenable duality would seem to be Senator Bernie Sanders’ outspoken opposition to capital punishment, which he manages to hold within his mind while simultaneously supporting the use of unmanned combat aerial vehicles (UCAVs), or lethal drones, to kill terrorist suspects abroad. One of the most cogent arguments for abolishing the death penalty derives from the indisputable fact that convicted persons are sometimes exonerated posthumously. Mistakes are made, and erroneous executions are irrevocable. An equally compelling argument concerns racial justice. Among all convicted murderers, a disproportionately high percentage of persons of color are sentenced to death, in all likelihood because juries and judges perceive them to be more dangerous than white murderers. But each of these lines of reasoning applies a fortiori to the persons eliminated by missiles launched from drones in countries where nearly everyone is a person of color, and the victims are not even charged with crimes, much less given the opportunity to defend themselves against their killers’ allegation that they are evil terrorists who deserve to die. Why should a suspect have more rights within than outside the arbitrarily drawn borders of a land? If suspects have rights, then does it matter where they happen to stand? And if even convicted murderers should not be executed, as Sanders appears to believe, then how can mere suspects abroad be annihilated on the basis of purely circumstantial evidence such as SIM card data, drone video footage and the bribed testimony of destitute, and therefore corruptible, informants on the ground?

It may be tempting to conclude from examples such as Senator Sanders that lawmakers and the citizens who elect them and pay their salaries are simple hypocrites. It is more charitable, however, and at least as plausible, that they have been trained effectively to compartmentalize spheres of reality so that what seems obviously desirable within one domain has no implications whatsoever for anywhere else. Modern people have been effectively conditioned so as to find nothing wrong with applying completely different standards to different spheres of reality. Their rhetoric may be absolutist, but the moral requirements upon them as individual moral persons are assumed to be a function of the context and circumstances. No less than the politicians who enthusiastically advocate for bombing abroad while decrying police brutality in the homeland, most people appear to hold a motley assortment of arguably contradictory moral beliefs, which they apply to different groups of people according to caprice and mostly determined by what they have been indoctrinated to believe, above all by the media. In effect, modern people have developed split personalities. The innocent victims of Barack Obama’s and Donald Trump’s and now Joe Biden’s perpetual motion bombing campaigns do not exist in the minds of those who ordered or paid for their deaths, and are therefore excluded from all moral calculus.

The smallest sphere of morality, or moral community, comprises one’s self. At this level, morality and prudence coincide. Applying utilitarian reasoning to one’s self alone yields a theory according to which one should maximize one’s own happiness (or pleasure or well-being), even at the expense of others, because they lie beyond the bounds of the sphere under consideration. The next smallest sphere of morality includes one’s family. After that, one’s friends may be included. Then one’s neighbors, one’s compatriots, and finally humanity. No finite person can perform a full and accurate utilitarian projection of the results of his prospective action on all of humanity, and people generally consider only the short-term effects on the persons with whom they interact and of whom they are directly aware. The answer to the question “What should I do?” will vary greatly depending on whether one considers the moral community to comprise one’s self (ethical egoism) or one’s compatriots (nationalism) or humanity (globalism). Utilitarian-esque rhetoric pervades public discourse because it seems reasonable and sounds “moral” (rather than “selfish”), but most people either do not recognize or do not agonize over the manifest inconsistencies between what they say and what they do in the various communities in which they interact.

Avoiding altogether this morass of moral relativism, the libertarian upholds the non-aggression principle (NAP), which is an easily applicable proscription: Do not initiate—or threaten—violence against other human beings. Period. Do not indulge in casuistic rationalization of why it is supposedly right to bomb countries abroad when in fact there is near certainty that persons of unknown identity (and therefore not known to deserve to die) will be destroyed, no matter what the warmakers’ intentions may be. Libertarians have many outspoken, virtue-signaling enemies these days, but in fact their theory is consistent, including as it does all people everywhere. If it is wrong for government agents (such as police officers) to kill suspects in the homeland, then it is equally wrong for government agents (such as drone operators) to kill suspects abroad.

Most of the federal discretionary budget goes to the military, which is why utilitarian-esque defenses of federal taxation are delusive, especially in view of the twenty-year War on Terror fiasco. Their rhetoric notwithstanding, the policymakers who determine how much to tax citizens and where federal funds are to be allocated prioritize the interests of not humanity, nor their compatriots, but the MIC, or military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex, all tentacles of which have teams of lobbyists in Washington, DC. In order to be completely consistent, then, it may be that libertarians should join the ranks of the war tax resisters, which is however easier said than done, given the harsh and coercive measures deployed by the state, again, in the name of “the greater good.”

Pascal’s Wager for COVIDystopic Times or: How I Learned to Stop Worrying about the Coronapocalypse and Eat Krispy Kreme Doughnuts

Being of a naturally skeptical bent, I have harbored doubts from the very beginning about the upheaval of the entire world rationalized by politicians everywhere because of a virus which kills less than 1% of the people it infects. I watched in amazement as country after country closed their borders to foreigners, imposed “common sense” quarantines, lockdowns and mask mandates, and shut down entire economies. I was perplexed by the inability of anyone in a position to craft policies to recognize that what really needed to be done was to isolate vulnerable persons and allow everyone else to go about their business, so that we would eventually achieve herd immunity.

This approach was rejected early on as untenable because, it was claimed, COVID-19 was simply too elusive. In contrast to many other deadly viruses known to mankind since time immemorial, we could not develop herd immunity to COVID-19, because there were documented cases of persons who had become reinfected after having already recovered. To my mind, that was the first red flag that perhaps the virus had not simply leapt from bats to humans when some hapless soul in Wuhan ate a bowl of soup. I started to wonder whether this was not some sort of Frankenstein gain-of-function virus, engineered in a lab by DARPA-funded scientists under the guise of national defense, to figure out what to do in case some other government developed such a virus to wipe out its sworn enemies.

The idea that COVID-19 was developed in a lab and accidentally released by human error was rejected by all of the CNN-certified authorities, so I naturally listened to the science and began focusing on other matters, such as whether the project of inoculating all of the nearly 8 billion people on the planet with a vaccine might be a way of ending the pandemic. There were plenty of companies enthusiastic to pursue this project, and within months Pfizer, Moderna, AstraZeneca, and Johnson & Johnson, in addition to a variety of companies in Russia and China, had already developed their vaccines, having been generously funded by governments so obviously keen to save lives.

Fine, I thought to myself. Now everyone who is vulnerable can get the vaccine, and those who are not can go about their business, become infected, and then recover from the virus and the mild symptoms it typically visits upon robust people, such as the “blah” feeling reported by Tom Hanks upon landing on Australian shores in March 2020, shortly before that entire country closed its borders seemingly forever. There was no question in my mind that we were on the way to the exit ramp of the highway to a dystopic world where no one is allowed to travel or congregate in groups for fear of transmitting the virus to persons who might die as a result. The situation was easy to comprehend by appeal to Pascal’s wager (mutatis mutandis):

 

The Question of Efficacy in Preventing Transmission and Infection

If the vaccine prevents transmission and infection:
- Take the vaccine: Everyone who takes the vaccine will be protected from everyone else—whether or not they take the vaccine.
- Don’t take the vaccine: Those who take the vaccine will be protected; others will remain vulnerable to COVID-19.

If the vaccine does not prevent transmission and infection:
- Take the vaccine: No one who takes the vaccine will be protected from other people—whether or not they take the vaccine.
- Don’t take the vaccine: No one will be protected—whether or not they take the vaccine.

Further doubts, however, began to creep into my mind as I witnessed a variety of zealous public relations efforts to persuade people invulnerable to COVID-19 to get the vaccines. Front and center in luring the public to do what the Centers for Disease Control and Prevention (CDC) have determined must be done have been COVID-19 guru Dr. Anthony Fauci and vaccine entrepreneur Bill Gates, who incidentally has revealed in interviews his fabulous financial success in the vaccine sector. I think that everyone, on both sides of the COVID-19 lockdown divide, can agree that a twenty-fold return on his investment is nothing to scoff at.

Fauci got right to work promoting the Moderna vaccine by pointing out to African Americans that, in fact, the vaccine was developed by a black woman. This struck me as an odd selling point, and I confess to having suspected racism. I looked up Dr. Kizzmekia Corbett on Twitter and found this on her profile: “Virology. Vaccinology. Vagina-ology. Vino-ology.” Not sure that the latter two count as credentials, but one thing is clear: vaccine hesitancy among African Americans has a well-documented and understandable history, resulting in part from the horrifying Tuskegee experiments, in which black men infected with syphilis were left untreated “just to see what would happen.” That’s right: nonconsensual human experimentation was not the province only of the Nazis. It has happened right here, in the United States, as well. In turning Dr. Corbett into something of a media darling, Fauci’s idea appears to have been that people would be persuaded that a black woman would never dream of acting so as to harm other black people. That line of argumentation is unfortunately impugned by the fact that black nurses were among the perpetrators of the Tuskegee study. Indeed, the program coordinator, Eunice Verdell Rivers Laurie, was an African American woman. Nonetheless, Fauci may have succeeded in convincing some people to roll up their sleeves, to wit, those entirely ignorant of the details of the disturbing Tuskegee saga, which lasted a shocking forty years.

My next concern arose when some “experts” began exhorting pregnant women to “get the jab,” insisting that there was no evidence of harm to pregnant women from the new vaccines. I decided to look into the studies done before the emergency authorizations and discovered that pregnant women were not included in the first round of human trials. This finding naturally reminded me of the disturbing story of Thalidomide. That drug seemed very safe in initial clinical trials, which, however, excluded pregnant women. Ultimately, 40% of the babies of women who had been given Thalidomide as a remedy for morning sickness died around the time of birth. Of those who survived, thousands were born deformed, many with fin-like limbs. As is always the case, it took time for the longterm side effects to be sorted out. That is because each patient is unique, with different biological and environmental factors, including the medical treatment in question, acting upon her body. Approved in 1956, Thalidomide was not pulled from the European market until 1961. Why would anyone be encouraging pregnant women to “get the jab,” given the well-documented history of Thalidomide and the apparent invulnerability of infants and small children to the COVID-19 virus? I puzzled. After all, the word teratogen exists because there are substances which predictably lead to birth defects, and they are discovered when, and only when, pregnant women are exposed to those substances. Thinking about the case of Thalidomide and possible side effects provoked another Pascal’s Wager assessment:

 

The Question of Unknown Side-Effects—Both Short-Term and Long-Term

If the vaccine prevents transmission and infection:
- Take the vaccine: Those who take the vaccine will be protected from COVID-19 but may suffer side effects—up to and including death.
- Don’t take the vaccine: Those who do not take the vaccine will not be protected from COVID-19 but will not suffer any side effects.

If the vaccine does not prevent transmission and infection:
- Take the vaccine: Those who take the vaccine will not be protected from COVID-19 and may also suffer side effects—up to and including death.
- Don’t take the vaccine: Those who do not take the vaccine will not be protected from COVID-19 but will also not suffer any side effects.

The worst case scenario would be that the “vaccines” do not actually work and also have devastating side effects. Clearly, then, the rational choice for a given person is going to be a function of how vulnerable he or she is to the disease which the vaccines are intended to protect against. If one has a 99.5+% chance of surviving COVID-19, has no known comorbidities and therefore is unlikely to suffer severe illness, even if infected with the virus, then it is difficult to see why he or she would want to opt for the treatment, given that the risk of longterm side effects is entirely unknown—ranging anywhere from 0% to 100%. Fine, I concluded again. People who want the vaccine can get the vaccine, and everyone else can resume their normal life. Yet Fauci & Co. did not agree. I continued to puzzle over pregnant women being enthusiastically exhorted to “get the jab,” and those concerns were exacerbated when vaccine trials on children began, complete with a social media campaign featuring images of “heroic” pro-science kids rolling up their sleeves.
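
As a rough sketch of the decision logic behind the tables above, the comparison can be written out as an expected-risk calculation. The function and every number below are hypothetical placeholders of my own, not figures drawn from any official source, since, as noted, the long-term side-effect risk is simply unknown:

```python
# Illustrative expected-risk comparison for the wager above. All inputs are
# made-up placeholders; in reality the long-term side-effect risk is unknown.
def expected_risk(p_infection, p_death_if_infected, vaccinated,
                  efficacy=0.9, side_effect_risk=0.0):
    """Rough expected probability of a bad outcome under the stated assumptions."""
    risk_from_virus = p_infection * p_death_if_infected
    if vaccinated:
        return risk_from_virus * (1 - efficacy) + side_effect_risk
    return risk_from_virus

scenarios = [
    ("vulnerable, unvaccinated", expected_risk(0.10, 0.10, vaccinated=False)),
    ("vulnerable, vaccinated", expected_risk(0.10, 0.10, vaccinated=True, side_effect_risk=0.001)),
    ("low-risk, unvaccinated", expected_risk(0.10, 0.0005, vaccinated=False)),
    ("low-risk, vaccinated", expected_risk(0.10, 0.0005, vaccinated=True, side_effect_risk=0.001)),
]
for label, risk in scenarios:
    print(f"{label}: {risk:.4%}")
# vulnerable, unvaccinated: 1.0000%
# vulnerable, vaccinated: 0.2000%
# low-risk, unvaccinated: 0.0050%
# low-risk, vaccinated: 0.1005%
```

With these invented inputs, vaccination lowers the expected risk for the highly vulnerable person and raises it for the low-risk person, which is why the calculation cannot be made uniformly for everyone.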

Eventually, after reflecting on this conundrum for quite some time, the firm believer in freedom of choice in me capitulated, concluding that, as in everything else, parents and pregnant women would have to decide what to do for themselves and their offspring. I decided to move on to other matters, as it was obviously futile to engage further with the mobs of people online who have redefined “prudential person” to mean “antivaxxer”. Instead, I turned to the rational grounds for believing that Moderna and its diverse research team have succeeded in producing a COVID-19 vaccine, which is defined by the CDC as follows:

Vaccine: A product that stimulates a person’s immune system to produce immunity to a specific disease, protecting the person from that disease.

Going directly to the source, Moderna’s own website, I learned that the company specializes in gene therapy and has been operational for a grand total of ten years. It received a substantial DARPA grant in 2013 but has no FDA approvals for vaccines or devices to date, aside from the emergency authorization granted in December 2020 for the COVID-19 treatment. All of the COVID-19 therapies, whether mRNA (as in Moderna’s case) or vector-based, have been labeled “vaccines” not only in the hope that they may act as vaccines, but also in order to benefit from the legal immunity enjoyed by vaccine manufacturers in the United States, thanks to the PREP (Public Readiness and Emergency Preparedness) Act. Anyone who suffers harm as a result of these government-funded elixirs will have to take it up with the government, not the manufacturer. Unlike normal businesses, which must bear the legal brunt of the negative effects of their products upon human beings, Moderna is like a child being allowed to roam free, its parents prepared to clean up any messes which may result. Perhaps Moderna will get lucky and have produced a miracle cure, but the statistics on new medical treatments are not that encouraging. Of 5,000 new drug candidates, only a tiny fraction of them (5 out of 5,000, or 0.1%) are judged from the animal trials to be safe enough to be tested on human beings. Of those which are tested on human beings, only 20% eventually achieve (regular) FDA approval and are taken to market (0.02% of the original candidates). Of those pharmaceutical products which make it to market, some are eventually recalled. From January 2017 to September 2019, 195 drugs previously approved by the FDA were recalled because of safety issues.

Now, many people have died of COVID-19, and no one wishes for that to happen to themselves or anyone they know. It is also true that very ill and vulnerable people are often willing to gamble on experimental treatments. In the case of terminally ill patients, what do they have to lose? It is unclear, however, why any rational person not at risk of death from COVID-19 should want to offer up, without compensation, his healthy body as a Petri dish to a government-subsidized and protected industry with a well-documented history of not only deception and fraud but also what are arguably human rights violations, above all, in third world countries. Moderna, being new, with no products on the market, has a clean slate to date (none of its zero marketed products has ever caused untoward effects in human subjects), but the Pfizer, Johnson & Johnson, and AstraZeneca tallies of criminal fines and settlements are awe-inspiring, to put it mildly. No one ever said that human experimentation was going to be risk-free, but the fact that billions of dollars in compensation have been doled out to people harmed by pharmaceutical and other chemical companies underscores a sober truth: it is inherently dangerous to introduce novel foreign substances into human bodies, even in the best of all possible research and development scenarios.

The spec sheets for both the Pfizer and the Moderna shots explicitly state that they “may” prevent one from getting COVID-19 (which implies, of course, that they may not), and that “There is no FDA-approved vaccine to prevent COVID-19.” These information sheets (which hardly anyone rolling up their sleeves appears to have read) also state plainly that “Serious and unexpected side effects may occur,” which should in any case be obvious since they were developed and tested over the course of months, not years (note: the average time to market for a new drug/device is 12 years). There simply is no longterm data yet—whether positive or negative. The makers themselves of these products rightly express ignorance as to their efficacy in preventing infection and transmission of the disease, touting confidently only their therapeutic effect in reducing severe symptoms and diminishing the likelihood of death, both of which are in any case exceedingly rare for persons under the age of 50, according to all available statistical data. Feeling “blah” does not count, I presume, as a “severe symptom,” so it is unclear whether vaccination would have helped Tom Hanks at all. But who knows? One or more of these companies may succeed in producing a COVID-19 panacea, I mused. Until I remembered the problem of new virus variants.

The current slate of vaccines was developed against a dominant strain of COVID-19 last year, but the many variants, created through mutation and apparently numbering in the thousands, are by now so widespread that there are grounds for believing that even if the current vaccines work against the dominant strain, and even with strong vaccine compliance, vulnerable people will continue to die, sooner or later, while everyone else will be spared, not because of the vaccines, but because they were never vulnerable to the virus and its variants in the first place. As is always the case, given human variability, there have been some outliers, young persons who died or suffered harm from Coronavirus infection. On the other hand, more elderly people than one might surmise, given the media coverage, have survived. COVID-19 does not come close to being a death sentence, although the chances of dying are significantly increased for patients with comorbidities. Still, in some places, the average age of a COVID-19 victim is the same as or even higher than the average life expectancy of the general population.

Curiously enough, persons who already survived COVID-19 are also being exhorted to get the vaccines, even though the very fact of their ongoing existence definitively demonstrates that their immune system is hardy enough to combat the virus. For other diseases caused by viruses and for which vaccines exist, the reason for getting the vaccine is to avoid at all costs getting the disease, which, in cases such as Ebola and Yellow Fever, is very deadly to anyone, regardless of age or comorbidities. But the vast majority of people infected with COVID-19 experience only mild symptoms and do not require medical treatment. Reflecting on these matters, I circled back to my previous concern: Why should any healthy person believe that taking an experimental vaccine is a good idea, particularly if they already survived COVID-19?

As I continued to mull over this question, I marveled at the massive media marketing budget for COVID-19. All of the circular stickers on the ground and all of the signs everywhere relay important information such as the permitted capacity of persons inside stores, scientifically calculated to three significant figures to yield numbers such as the 163 shoppers admitted to the local TJMaxx at a time. Even more impressive have been the ads on television and the internet everywhere encouraging people: “This is our shot. Let’s take it!” among a slate of similarly benevolent-sounding slogans. People may feel better when others hop aboard the vaccine train, and they may attempt to shame those who do not, but does any of this behavior have anything to do with whether or not the treatments will ultimately work? It seems safe to say that neither the virus nor the vaccines have any interest in the hopes and aspirations of human beings. Ironically, the pressure being put on people—threatening the requirement of vaccination for travel, work and play, and the lavishing of praise upon those willing blindly to accept as-of-yet unknown risks—appears to be having the opposite of its intended effect.

If it were so obvious that the vaccines worked and were the only solution to our current predicament, then why would Queen Elizabeth take to the airwaves to denounce people who refuse to get vaccinated as “selfish”? Why would Tony Blair insist that we will not be free again until vaccine passports become available? Why did former Presidents Bill Clinton, George W. Bush and Barack Obama team up to produce a video in which they attempt to persuade people to get the vaccine? (Bush states in the ad, “The science is clear.” He was equally confident about Saddam Hussein’s WMDs.) Why would CNN be admonishing those congresspersons who have declined the vaccines made available to them, including those such as Representative Thomas Massie who have already recovered from the virus and therefore must have developed antibodies and T-cells in response? On its face, all of this propaganda seems vaguely insane, and it is scaring people away who might otherwise have agreed to participate in the experimental trials.

Sowing doubts even more effectively than appeals by confirmed liars in high places, more than twenty countries, including France, Germany, Italy, Norway, Finland, Thailand and, most recently, Canada, halted their distribution of the Oxford/AstraZeneca vaccine in response to a number of blood clot cases. When the cases in Norway were first reported, the trusty mainstream media went into overdrive, dismissing as “baseless” claims of connections between the blood clots and the vaccine. It seemed strange to me that over the course of the past year, every person who died with COVID-19 was recorded as having died of COVID-19, while no one who died after vaccination was acknowledged to have been killed by the vaccine. That the in some cases deadly blood clots were “purely coincidental” was the judgment decreed by journalists onboard the vaccine train (before the matter was even investigated) and echoed by parrots throughout Facebook and Twitter to assuage the fears of persons who might be discouraged by the news from rolling up their sleeves. Even after the AstraZeneca vaccination resumed in most of these countries, some of them changed their guidelines. France, for example, having initially claimed that the AstraZeneca vaccine showed no benefits to elderly persons, reversed course to decree that the vaccine should only be used on persons over the age of 55. Canada, for its part, announced that it would be administering the AstraZeneca vaccine only to persons between the ages of 50 and 65. The governments which stopped and then resumed vaccination claimed that they had done so out of “an abundance of caution,” but when some scientists concluded that there was indeed a connection between the blood clots and a rare autoimmune response elicited by the vaccine, they also jubilantly reported that they had found a possible cure for that problem. By all means, take the AstraZeneca vaccine, and if you develop blood clots in your brain, then we’ll give you some other treatment to save your life! (If you have no Big Pharma stocks in your portfolio, now might be the time to buy.)

Many businesses have joined in on the public relations campaign and are rising to the challenge of convincing their customers that vaccination is the way to go. Qantas, the largest Australian airline, has adopted the punitive approach, alerting everyone everywhere that no one will be boarding any of its planes without first presenting proof of vaccination. But one company has gone above and beyond to offer what may finally be needed to convert the intransigent skeptics: Krispy Kreme. The doughnut giant has announced that anyone presenting proof of vaccination at any of its stores will be entitled to a free doughnut. Mind you, this is not a one-off promotion. Every vaccinated person is being offered a doughnut every single day that they show up at any of the Krispy Kreme locations with their trusty vaccination card in hand. Needless to say, this propitious development necessitates a revision of the Pascal’s Wager assessment:

 

To Vaccinate or Not to Vaccinate?

If the vaccine prevents transmission and infection: those who take the vaccine will be protected from COVID-19 and will receive a free doughnut every day; those who do not take the vaccine will not be protected from COVID-19 and will not receive a free doughnut every day.

If the vaccine does not prevent transmission and infection: those who take the vaccine will not be protected from COVID-19 but will receive a free doughnut every day; those who do not take the vaccine will not be protected from COVID-19 and will not receive a free doughnut every day.

Luckily there are Krispy Kreme doughnut shops dotting the vast landscape of the United States, and, more importantly, there is one down the street from me. My fate, therefore, along with that of thousands, if not millions, of my fellow citizens (including, I presume, Representative Massie) is now sealed. I will be rolling up my sleeve, not because I believe in the novel m-RNA vaccines, nor because I think that it is in my best interests to undergo an experimental treatment for a disease to which I am not vulnerable and from which I have already recovered, nor because George W. Bush and Tony Blair want me to, nor because I care what Queen Elizabeth thinks of me, nor because the only way I can ever travel to Australia again will be to “get the jab.” No, I will be rolling up my sleeve for the sole purpose of receiving a free doughnut every day henceforth. I trust that, in recognition of the Krispy Kreme executive team’s manifest magnanimity, the government will confer upon their company the label “essential business” to protect it from revenue loss in the event of any future lockdowns.

The Pentagon Turns ‘Feminist’

For many years, male U.S. citizens have been required to register with the Selective Service, an independent agency within the Executive Branch of the U.S. Federal Government, so that they can be located in the event that it becomes necessary to reinstate military conscription. The most recent military draft was ended after the Vietnam War, in 1973, and ever since then people have proudly pointed to the “voluntary” terms of U.S. military enlistment. That soldiers are volunteers is also frequently invoked in passing by cynical civilians who dismiss complaints about the plight of soldiers during wars and their aftermath. Those wont to insist, “They freely chose to enlist!” not-so-slyly suggest that perhaps we should not care so much about the thousands of homeless veterans and the epic levels of suicides among distraught soldiers, who by 2019 were ending their lives at a rate of about twenty per day.

In recent years, the question whether women should be permitted to serve as combatant soldiers has arisen, as more and more other professions have opened up to what historically has been regarded as “the gentler sex.” Until quite recently, the fighting forces of the military were always viewed as the province of men, but times have changed, causing some people to reconsider the longstanding association of the military with masculinity. There are essentially two standard arguments regarding female combatants.

First, according to what might be called the “traditionalist” approach, women are generally smaller and physically weaker than men. Their admission into the ranks alongside the physically strong males who have fought enemy soldiers one-on-one on bloody battlefields throughout history would severely compromise the military’s capacity to win its battles and, ultimately, the government’s wars. A second strand of the traditionalist view focuses on the idea that women should not be sacrificed needlessly. Women have historically been viewed as nurturing and less aggressive than men. If women were deployed evenly among the men fighting on the ground, then they would be more likely to perish than their male counterparts, not only because they are, on average, physically smaller and weaker, but also because they are less violent than men. But if women were eliminated, this would hurt society more generally, as women give birth to and often raise children.

The second approach, which might be termed “feminist,” holds that combatant selection should in no way depend on one’s possession or lack of a Y-chromosome. It may be the case that women on average are weaker and smaller and less aggressive than men, but that does not mean that all of them are. Over millennia, women have far more often filled the role of mother than that of breadwinner, but, again, times have changed. Today a woman can choose whether or not to be a wife and mother. Some women today serve as the CEOs of military weapons companies or even heads of state. What it means to be a liberated woman is to be able to choose from the full range of opportunities available to men. Furthermore, there are certainly examples of extremely powerful women, such as Serena and Venus Williams, who might, if they chose to fight rather than play tennis, do quite well on the battlefield. Accordingly, on this view, women should be permitted to train and compete with men for spots in even the most physically demanding of military roles, up to and including the Marines or special operations teams such as the Delta Force. The way to find out whether a woman qualifies for such a force is precisely the way in which men find out whether they qualify: through basic and advanced training which leads some candidates or their commanders to conclude that they may be better suited for less arduous roles.

In 2015, the Pentagon appeared to adopt the second, more progressive or feminist, approach, announcing that all combatant positions would henceforth be open to women. The reality, I believe, is quite a bit more crass, as evidenced by the fact that not long after women were invited to serve as combat warriors, people began discussing whether women should, along with men, be required to register for the Selective Service, so that they, too, could be called up should another military draft be instituted. This move, from permissibility to obligation, from a triumph of feminism to the severe restriction on liberty and potential enslavement of women, the prospect of their being compelled to serve in the armed forces against their will, is a curious non sequitur which seems to have gone unnoticed by the soi-disant feminists who support Selective Service registration for all. The Pentagon public relations wing naturally claims “woke” creds, but what is really going on here?

I am afraid that the traditionalist approach (which still has its adherents, for example, Fox News host Tucker Carlson) altogether misses the point of the Pentagon’s invitation to women to join the ranks of military killers. For most “combatants” in future war will not be found on the ground battling enemy soldiers in one-on-one fights to the finish. Instead, unmanned combat aerial vehicles (UCAV), or lethal drones, will continue to be used, as they have been over the course of the twenty-first century so far, to inflict death upon enemy “soldiers” who pose no direct threat to their killers. The risks in having both men and women fight in theaters such as the twenty-year War on Terror throughout the Middle East (which has also seeped into Africa) will become progressively less physical. Because of new technology, the primary harms suffered by future soldiers will be psychological and moral. This follows from the very logic of the use of drones to kill people abroad who cannot be threatening anyone with death because they are unarmed. Least defensible of all is the incineration of persons located in countries where there are no soldiers on the ground said to require force protection. Yet this is what drone operators are trained and required to do.

One of the most significant military discoveries in the twenty-first century, all but ignored by the warmakers themselves, is that Post Traumatic Stress Disorder (PTSD) does not emerge exclusively or always as a result of traumatic experiences on the battlefield, when soldiers are forced daily to face the specter of their possibly imminent deaths as they witness people dying all around them and move through dangerous territories where IEDs (improvised explosive devices) and snipers may be hiding any- and everywhere. Protracted fear and stress can be powerful factors in the onset of PTSD, but what we have learned from its high incidence among drone and laser sensor operators is that moral trauma and conscience also play an important role. Indeed, regret for what one has done is sufficient alone to induce profound PTSD, as evidenced by those drone operators who, in states of psychological and moral despair, have opted to abandon the profession at the termination of their initial contract, even when they have been enticed to stay by the provision of generous bonus offers.

On its face, the job of a drone operator may look like a good deal, and it did to those who later regretted and renounced their vocation: garner creds as a courageous warrior by donning a uniform and showing up to work in a trailer where one “fights” the enemy on a screen from thousands of miles away. No trenches, no IEDs, and no snipers—the drone operator himself remains unscathed, indeed, untouchable by the enemy. The physical job of a drone operator involves manipulating buttons and levers, observing the enemy on the screen and remaining alert, not as a way of saving one’s own life, but to make sure that the enemy does not get away. The images of what these soldiers see on those screens and have done to those people, however, sometimes come to haunt drone operators. Watching targets for hours, days, weeks, even months, before “splashing” them with a missile and witnessing them bleed out before dying, knowing in some cases that they are leaving behind widows and orphans, if not also first-order (physical) collateral damage, exacts a steep psychological toll on some of the push-button killers.

The military will continue to become progressively more lethal to the enemy but less deadly to its own combatant or killing forces because of the manifest rationality of not needlessly risking soldiers’ lives, and the development of technology which makes that possible. If a war can be won without sacrificing a single soldier, as former Secretary of State Hillary Clinton claimed President Barack Obama did when he ordered hundreds of missile strikes on Libya in 2011, then why would any commander choose to do otherwise? This risk-averse approach to war began in earnest with President Bill Clinton, whose combat pilots flew high above their targets in Kosovo in 1999 in order to protect themselves from harm, despite the fact that by doing so they increased the risk of killing civilians on the ground. Presidents, along with the populace, care more about their compatriots than “collateral damage” victims abroad, who, being out of sight, are also out of mind.

The Libya intervention was quite far from being a success story, much less an example of, as Clinton gushed, “smart power at its best,” but it is true that no combatants were killed during the 2011 ousting of then-President Muammar Gaddafi. Ironically, U.S. State Department employees were killed in the post-war mêlée, but that was after the bombing had stopped. The country of Libya is now in shambles, but the Benghazi debacle, along with everything else which ensued subsequent to the bombing campaign, is simply written off by its architects, including former U.S. Ambassador to the United Nations Samantha Power (recently pegged to head up USAID), as unpredictable, unforeseeable consequences of a military intervention with purely humanitarian aims. In attempting to convince Obama to take action, Power compared the situation in Libya to that of Rwanda in 1994. Remarkably, having been initially disinclined to intervene, Obama was persuaded to believe Samantha Power, Susan Rice, Hillary Clinton and Anne-Marie Slaughter, the women who rallied for that war—a veritable case in point for those who claim that women can be just as aggressive as men. But was the post-war scene in Libya completely unforeseeable and unpredictable, as Power glibly maintains in her memoir? We may beg to differ with those armchair warriors who failed to draw appropriate inductive conclusions from the fall of the Taliban in Afghanistan or the removal of Saddam Hussein in Iraq, but, alas, they seem keen to ply their bellicose trade wherever and whenever it becomes possible again.

Thousands of people at the Department of Defense work full time in public relations, producing texts and media to persuade taxpayers that the government’s wars are just and right. One might reasonably wonder why, if all of the ongoing wars were in fact worthwhile and necessary undertakings, there should be any need for public relations campaigns to support them, or to lure young people to enlist. But because the necessity and justice of the nonstop bombing of people in the Middle East is far from self-evident, those paying for the carnage must continually be made to believe, against all evidence, that the soldiers killing people abroad today are just like the courageous men who defeated the Nazis in World War II. Snafus such as the photographs from Abu Ghraib prison must be explained away, and the military’s image re-burnished to ensure that young people will continue to enlist.

The “feminist” turn at the Pentagon, I submit, is just another ploy to address the recruitment crisis at a time in history when the skills required of the latest supply of cannon fodder have become significantly less physical. More drone operators are trained today than regular combatant pilots, and at some point the idea of risking one’s own life for one’s country will be deemed anachronistic and quaint. Robots with “boots on the ground” have been deployed for years, especially to assist troops in landmine-infested territories. “Ground force” robots have also been used to blow up targets, as was done, unbelievably enough, to U.S. military veteran Micah Xavier Johnson, in Dallas, Texas, on July 8, 2016, after he killed five members of the local police force. The replacement of mortal soldiers by robots will be further precipitated by the inexorable production of Lethal Autonomous Weapons Systems (LAWS), which will take human beings completely out of the killing loop once robotic killers have been programmed to gather, sort, and analyze data before selecting targets and launching missiles. Until the military has become entirely automated, however, it will continue to need human operators, and that is why women have been enthusiastically invited to join in on the killing spree.

The invitation to women to serve in combat forces has been billed as progress, evidence of how “woke” the Pentagon is, along the lines of President Biden’s appointment of the first African American Secretary of Defense, General Lloyd J. Austin. But, as in the case of Austin, the admission of women into combat forces has a subtext. The far more relevant factor in the case of Austin is his connection to military industry, the fact that he is a former board member of a company (Raytheon) which stands to profit every time Syria or anywhere else is bombed. The surface “wokeness” is just a patina, a veneer, a bit of public relations polish on what is ultimately an intrinsically pragmatic policy. The fact that Austin is black is an effective distraction from the reality of the ever-more tentacular MIC or, to be precise, the military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex. Military industry, which is funded by the Pentagon, has also gloated over its female CEOs. Meanwhile, the crisis levels of sexual abuse of female enlistees by fellow soldiers and commanding officers have been largely ignored by the military-infiltrated mass media.

The admission of women troops as combatants is not so much an affirmation of the worth of female human beings as it is a recognition that they, too, can be trained to serve as push-button contract killers. There is an ongoing, chronic military recruitment crisis because service in the bungled missions in Afghanistan and Iraq has seemed progressively less honorable as the quagmires have dragged on. Many people were willing to enlist after the terrorist attacks of September 11, 2001, but by now nearly no one (aside from war profiteers) seems convinced of the righteousness of the forever wars in the Middle East. In order for those wars to continue on, new, psychological cannon fodder must be found. Step right up, ladies, we have a splendid job for you, complete with benefits, pension and paid maternity leave!

The issue of maintaining Selective Service registration for men alone is now before the U.S. Supreme Court, and it would seem that, for consistency’s sake, the entire program must either be abolished or expanded to include women. Under a faux-feminist guise, some “patriots” in the U.S. Congress (an extremely important limb of the octopoid MIC) will likely rally for the expansion, which would be a severe blow to liberty lovers of all stripes, men and women alike. If it is unconstitutional to require men but not women to register for the Selective Service, now that women are permitted to serve in the armed forces, then the proper remedy can only be to abolish the Selective Service registration requirement, for involuntary service violates every person’s right to life, liberty and the pursuit of happiness, whether or not they possess a Y-chromosome.

The American Civil Liberties Union (ACLU) has raised the issue before the Supreme Court on behalf of a group of men, and it may well be that they favor abolition of the requirement. Nonetheless, should the current law, under the present circumstances, be struck down as unconstitutional, then the fact that the Supreme Court did not previously find the Selective Service registration of males alone to be unconstitutional will be invoked by hawks in the U.S. Congress to push for new legislation mandating universal registration, regardless of biological sex. The question which needs desperately to be debated now, however, is whether the creation of an entire society of push-button contract killers is something which anyone should support.

A Perfect Totalitarian Storm

People often express consternation over how something as awful as the Holocaust could ever have transpired. It seems utterly incomprehensible, until one reflects upon the acquiescence to government authorities of individuals, most of whom served as unwitting cogs in a murderous machine. The vast majority of people in 1930s and 1940s Germany went about their business, agreeing to do what officials and bureaucrats told them to do and brushing aside any questions which may have popped up in their minds about policies preventing Jewish people from holding positions in society and stripping them of their property. For ready identification, Jews were preposterously made to stitch yellow stars onto their clothing. Later, in the concentration camps, they were tattooed with identification numbers. The rest is the most grisly episode in human history.

It is easy to say today, looking back, that we would never have supported the Third Reich and its outrageous laws, but citizens everywhere develop habits of submission to authority from an early age. Many “rule-governed” persons never pause to ask whether the current laws of the land are in fact moral, despite the long history of legislation modified or overturned in the eventual recognition that it was deeply flawed. It is understandable that people should obey the law—they are threatened with punishments, often severe, for failure to comply. But the little things do eventually add up, and one thing leads to another, with the result that the bureaucratic banality of evil diagnosed by Hannah Arendt in her coverage of the Adolf Eichmann trial in 1961 applies every bit as much to our present times as it did to the people going along to get along with the Third Reich. Of course no one is currently sending trainloads of “undesirables” to concentration camps for liquidation, but when one considers the death and degradation of millions of people in the Middle East over the course of the twenty-first century, carnage and misery funded by U.S. taxpayers, one begins to comprehend how the very mentality which permitted the Holocaust to transpire is indeed at work today. The vast majority of Western citizens freely agree to pay their governments to terrorize and attack, even torture, people inhabiting lands far away. The perpetrators call all that they do “national defense,” but from the perspective of the victims, the effects are one and the same.

The banality of evil at work today involves a profound complacency among the general populace toward foreign policy. President Biden bombed Syria about a month after becoming the Commander in Chief of the U.S. military, without even seeking congressional authority, and people barely blinked. The elimination of the persons responsible for the terrorist attacks of September 11, 2001, was achieved long ago. Yet military intervention continues on inexorably, having come to be regarded as the rule rather than the exception. The “collateral damage” victims are essentially fictionalized in the minds of the citizens who pay for all of the harm done to them. Habits of deference to the Pentagon and its associated pundits on matters of foreign policy have as their inevitable consequence that confirmed war criminals are permitted to perpetrate their homicidal programs unabated, provided only that they claim to be defending the country, no matter how disastrous their initiatives proved to be in the past. Indeed, it is difficult to resist the conclusion that the more mistakes a government official makes, the more likely it becomes that he or she will be invited back to serve again, and the more frequently his or her opinion will be sought out by mainstream media outlets.

It requires a type of arrogance to reject the proclamations of the anointed “experts,” and in the age of social media, there are always thousands of shills—both paid and unpaid—standing by to defend the programs of the powerful. Antiwar activists are very familiar with how all of this works. They are denounced as anti-patriotic, ignorant, naïve, and even evil for refusing to promote the company line. During the Cold War, the reigning false dichotomies of “Capitalist or Communist?” and “Patriot or Traitor?” held sway and, sad to say, such false dichotomies abound today. The fact that the pundits and policymakers calling for and applauding military intervention themselves often stand to profit from the campaigns they promote is brushed aside as somehow irrelevant. In contrast, antiwar voices are muted, suppressed, and censored despite the fact that reasons for opposing more war cannot be said to be tainted by mercenary motives because peace, unlike war, does not pay. It costs nothing to not bomb a country, so anyone who speaks out against the idea is not doing so in order to profit. Yet such persons are denounced and marginalized in the harshest of terms as cranks, crackpots, extremists, Russia sympathizers and more. President Obama’s drone killing czar John Brennan famously organized “Terror Tuesday” meetings at the White House where “suspicious” persons were selected for execution by unmanned combat aerial vehicles (UCAV), aka lethal drones, on the basis of flash-card presentations crafted from intelligence bought from informants, drone video footage and cellphone SIM card data—all of which is circumstantial evidence of the potential for possible future crimes. Brennan recently included libertarians among what he warned is an “unholy alliance” of “domestic extremists” in the wake of the January 6, 2021, protest at the U.S. Capitol. What happens next?

One certainly hopes that educated people are aware that Brennan’s inclusion of libertarians among his list of potentially dangerous domestic enemies betrays his utter ignorance of the very meaning of the word ‘libertarian.’ The non-aggression principle (NAP) embraced by libertarians precludes not only wars of aggression but also individual acts of terrorism. Sadly, it has become abundantly clear that the people still watching television news continue to accept and freely parrot what the mass media networks pump out despite their clearly propagandistic bias in recent years. Accustomed to heeding the prescriptions of “the experts,” people blithely listen to Brennan (and those of his ilk) despite his manifest record of duplicity regarding the drone killing campaigns, and his histrionic, even hysterical, comportment during the three-year Russiagate hunt for a Putin-Trump connection.

Neoliberal and neoconservative powerbrokers naturally wish to quash alternative viewpoints, so perhaps no one should be surprised that Brennan has attempted to discredit libertarians. After all, they pose disturbing questions such as whether all of the mass homicide carried out in the name of the nation actually helps anyone, including those paying for the carnage, or rather harms everyone, with the notable exception of those who stand to profit financially or politically from the wars. What Brennan revealed by lumping libertarians together with “domestic terrorists” is that he is not so much concerned with violent threats to the nation as with dissent from the political and warmaking authorities, a tendency which is becoming more and more marked as the Democratic-controlled Congress attempts to force Big Tech companies such as Facebook and Twitter to “do more” to prevent the dissemination of so-called disinformation. By denouncing some of the most articulate, consistent and persistent opponents of the war machine as “dangerous,” Brennan made it more difficult than it already was for those voices to be heard, much less heeded.

The current complacency of people toward U.S. foreign policy is nothing new. In every era, people any- and everywhere tend to go along to get along, whether or not they are convinced that the policies imposed upon them and their fellow citizens make any sense. In 1930s Germany, anti-Semitism was real, but part of the reason for the efficacy of the nationalist fervor drummed up by Adolf Hitler and used to support his quest for total global domination was the dire economic situation following the loss of World War I. Germany was weak and its people hungry. These conditions made it easier than usual to persuade people to comply, in the hope that their lives would be improved by banding together against what was denounced at the time as the evil enemy.

This perennial Manichean trope of political propaganda has most recently emerged in the abject, overt hatred, by about half of the people of the United States, of anyone having anything whatsoever to do with Donald Trump. “Trump Derangement Syndrome,” or TDS, is a genuine phenomenon, at least judging by the comportment of people online and sometimes in person as well. As bizarre as this may seem, people actually hate people who do not hate Donald Trump, having failed to understand that contradictions and contraries are not one and the same. It is entirely possible to not hate Trump while also not loving him, but any attempt to elucidate this false dichotomy to anyone who spent the last four years of his life wishing fervently for the former president’s demise will be met with an even more strident repetition of the very dichotomy being debunked. Again, if you happen to believe that the post-presidential impeachment trial was a waste of time and taxpayer money, then you must, according to the anti-Trump mob, love the former president. Even more remarkably, somehow over the course of the past four years a large swath of people have come to believe that seething hatred is a moral virtue, so long as it is directed at appropriate objects of loathing. But the capacity to hate one’s fellow human beings reveals absolutely nothing about the hater beyond his or her ability to hate. It certainly does not mean that they are good by contrast, and it is no mean feat of self-deception to come to believe that because one hates Donald Trump, this alone suffices to establish one’s moral superiority over all of the people who do not.

Once people become convinced of their own moral righteousness in the battle against whoever has been designated the evil and benighted (deplorable!) enemy, then it’s only a few short steps from “The end justifies the means” to “Everything is permitted.” A glaring example has been the more and more prevalent suppression and erasure of so-called disinformation, the definition of which, of course, lies in the eyes of the censors. The necessity of defeating “the enemy” became the basis for such curious developments as the refusal of any of the mass media networks to investigate the pay-for-play connections suggested by the contents of the Hunter Biden laptop made public during the 2020 presidential election cycle. Immediately following election day, when some people pointed out anomalies such as the appearance of vertical lines in the graphs of vote tallies in the middle of the night in multiple states—indicating the sudden addition of troves of votes none of which were for Trump—the mass media immediately, in concert, issued headlines everywhere proclaiming that any and all charges of electoral fraud were “baseless.” The point here is not that the charges had merit; perhaps some of them were indeed baseless, such as those explained away by local election authorities as clerical errors. But no one could know that the allegations of electoral fraud were baseless before the matters were investigated.

Once the first step has been taken onto that totalitarian-veering path, the slippery slope of censorship is difficult to resist, and the removal from social media of thousands of conservative and right-wing accounts regarded as sympathetic to Trump and his gallery of rogues is simply not enough, according to Democratic Party elites. Despite having already propagandized much of the mainstream media (as was evident in the election and post-election coverage), the Democrats, giddy with their majority Blue-Blue-Blue capture of Washington, now wish to exert total control over what people may say, write and read. This is of course a violation of the First Amendment of the Constitution of the United States, but by achieving their goal through the indirect manipulation of private companies, which are subject to federal regulation and therefore receptive to “innuendos” on the part of legislators, they are hoping that no one will notice what has transpired—at least not before it is too late to do anything about it.

After Trump’s acquittal in the second Senate impeachment trial, the news coverage claiming that he had incited “insurrection” at the Capitol continued on, as though the facts had already been established and the outcome of the trial was entirely irrelevant. These Associated Press (AP) excerpts are typical:

“The only president to be impeached twice has once again evaded consequences…” (February 13, 2021)

“After [Trump] incited a deadly riot at the U.S. Capitol last month…” (February 14, 2021)

One might with reason wonder whether the wrongness of questioning the outcome of an election does not imply the wrongness of questioning the outcome of a trial. Of course both are perfectly permissible in a society which champions freedom of speech. What this political control of the news reveals is a republic in crisis, for if even supposedly objective news outlets such as the Associated Press reject the outcome of processes intended to ascertain the truth, then the people have no way of determining what actually transpired. Similar examples of journalistic legerdemain abound in every area of importance to neoliberals, above all, in matters of war, and the mainstream media’s refusal even to discuss the plight of Julian Assange is a case in point. Assange made public evidence of war crimes committed by the U.S. government but is now being persecuted as though he were a murderer. So pathological has the mainstream press become that the only times they were able to bring themselves to praise Trump were when he ordered military strikes on the people of the Middle East.

The tech outlets have now also decided to censor alleged disinformation about the experimental mRNA COVID-19 vaccines, conflating the criticisms of persons opposed to all vaccines (the antivaxxers) with those of persons who have read the spec sheets, are aware of the data on disease prognosis, and find that the risk of possible, as-of-yet unknown, longterm side effects is not outweighed by the alleged benefits of the novel technology (which, it is worth pointing out, never made it past the animal trials when it was tested in the past). Those who express concern about the Procrustean lockdowns have also been subjected to suppression of their speech. The Facebook page for the Great Barrington Declaration was taken down by censors, and Robert F. Kennedy Jr.’s Children’s Health Defense organization has also been deplatformed. But the criticisms offered by these groups are grounded in scientific literature. Indeed, the authors of the Great Barrington Declaration are in fact epidemiologists and public health scientists, yet they are summarily dismissed as quacks because they disagree with the Fauci-Gates program.

What the vast majority of people want is for the current abnormal situation to be stabilized. If that means embracing what the powers that be are calling “the new normal,” then so be it. Those who stand in the way of the needed changes—those who refuse to volunteer as unpaid subjects in the largest experimental trial of a novel medical device in history—are summarily denounced in the usual terms: selfish, deplorable, ignorant, inbred, racist, nutjobs, etc. It does not matter in the least whether any of the epithets are true. They are deployed indiscriminately, by the self-styled morally superior types who shill for the reigning political and corporate elites—often also for free—against anyone who disagrees.

The present circumstances offer the necessary prerequisites to totalitarianism. We would do well to heed the historical record and look closely at how Nazism and Stalinism became dominant outlooks for entire populations, despite the fact that large numbers of people were destroyed by them. The total control of the mainstream media, with a specific agenda being promoted and all alternatives suppressed, and the extreme polarization of citizens under Manichean false dichotomies are everywhere on display. What’s more, in these COVIDystopic times, we are witnessing people struggling under the same economic hardships as the people of 1930s Germany. What is worse, after a full year of nonstop television coverage of death tolls, with nearly no effort by any mainstream pundits to place the tallies into proper context and consider how many people were dying every day before COVID-19 arrived on the scene, many citizens are understandably afraid.

Fear always brings out the worst in groups of people, who may team up against what they all decry as the evil enemy. But fear, hatred and self-deception conjoined produce a toxic soup, and we need not search the annals of the first half of the twentieth century to find evidence of this. Post-9/11, violent crimes against Muslim people (and other brown-skinned persons sometimes mistaken for “Arabs”) were on the rise. We are currently on a trajectory leading to a place where those who read the spec sheets for the “free” vaccines and then, based on that information, decline to roll up their sleeves, will be denigrated as criminals. The divisions being concretized between those healthy, robust people who agree to COVID-19 vaccination and those who demur are being strengthened by virtue-signaling campaigns making everyone who gets the vaccine believe, again, amazingly enough, that they are morally superior to those who do not. Even Britain’s Queen Elizabeth has come out publicly to denounce those who decline to participate in the experimental vaccine trials as “selfish.”

Technocrats the world over have been warning since at least April 2020 that the only way out of our current predicament will be to issue “vaccine passports” through which the healthy can be distinguished from the unhealthy. However, even if the first and second rounds of vaccines together work to prevent transmission and infection—which has yet to be established—those who have received them will not be protected from the new variants, and will need to submit to a third round of so-called booster shots, after which, in another six months, a fourth booster will likely be “required,” and so on. All of this would seem to imply that the “vaccine passports” being floated by government and corporate leaders will in no way ensure that the persons carrying them are not going to contract or transmit the latest variants of the virus. So what do they really mean?

The idea that those who have accepted COVID-19 vaccines are “fit to fly,” and to work and to socialize, or even to go outside, rests on a truly Orwellian redefinition of “healthy” as “vaccinated,” even as scientists continue to warn that the virus has already transformed enough to check the already questionable efficacy of the current crop of vaccines. Those who support the implementation of vaccine passports are fond of pointing out that people traveling to Africa are required first to be vaccinated against Yellow Fever. But COVID-19 is nothing like Yellow Fever, which kills up to half of those who develop severe illness. The vast majority of persons do not need to introduce foreign substances into their body in order to survive COVID-19. Because the vaccines appear to mitigate serious symptoms and increase the odds of survival among vulnerable persons, such persons should of course be offered the option of vaccination, but it must remain their choice, since they alone will bear the brunt of any untoward side effects, which invariably arise in a small portion of the population with every vaccine.

In the Nuremberg trials, nonconsensual human experimentation was decried and judged to be a crime against humanity. But extortion, too, is a form of coercion, and we should not be fooled by the latest Newspeak press releases in which “authorities” attempt both to cajole and to threaten us for defying their will. Former UK Prime Minister (and confirmed war criminal) Tony Blair has determined that vaccine passports will be our ticket to freedom. This is a shocking pronouncement because our freedom is not his or anyone else’s to withhold from us, least of all when our own person and body are at stake. It’s as though we are currently inhabiting an episode of Black Mirror (Netflix), where the dark heart of pharma-technocratic rule is working to bend us to its will, using compliant citoyens as its unwitting tools. Peer pressure, shaming, bribes and threats are nothing new, but in this case the consequences could not be more personal.

History clearly demonstrates that one repressive measure leads to another, and totalitarianism creeps in step by step, unnoticed until it is too late. From the suppression of speech to the lockdown and quarantine of healthy people to coercing or extorting them to participate in experimental trials—none of this bodes well for the future of freedom. Our rights—to speech, liberty, privacy, and the pursuit of happiness—and above all the right not to be treated as the possessions of government-funded corporations, must be defended while this is still possible. When a system is sufficiently infiltrated at every stratum by fanatics convinced of their own moral superiority and monopoly on the truth, then totalitarianism is near. It happened in Nazi Germany and it happened in Stalin’s Soviet Union. We are moving perilously close to that nightmarish reality right here and now as people redefine basic terms such as ‘sickness’ and ‘health’ and insist on exerting total control over information flow.

The Real Problem with Lethal Autonomous Weapons Systems (LAWS)

With the extremely rapid advances in technology made in the twenty-first century, many aspects of human life have transformed irrevocably. One of the most significant changes involves norms regarding the commission of intentional, premeditated homicide by governments. The practice is today termed “targeted killing,” but it differs only in the implement of death from what in centuries past was called “assassination” and deemed illegal. Black-ops involving shady assassins who stalk and eliminate perceived enemies under a cloak of secrecy are no doubt still carried out by governments. But the use of unmanned combat aerial vehicles (UCAV) or lethal drones to stalk and eliminate terrorist suspects in lands far away is openly acknowledged and has been largely accepted by politicians and the populace alike as one of the military’s standard operating procedures.

The use of lethal drones to kill rather than capture suspects began in Israel, but was taken up by the George W. Bush administration in the war on terror waged in response to the attacks of September 11, 2001. President Barack Obama then expanded the practice, electing essentially to eliminate the problem of longterm detention of suspects in facilities such as the prison at Guantánamo Bay by defining them as guilty until proven innocent and then dispatching them using missiles launched from drones. The suspects killed were classified posthumously as Enemy Killed in Action (EKIA) unless specific information demonstrating their innocence was brought to light. But since many of the killings took place in remote parts of the world, such as the Federally Administered Tribal Areas (FATA) of Pakistan, where there were few if any troops or intelligence analysts on the ground to do the sort of due diligence needed to establish the innocence or even the identity of the persons killed, this nearly never happened.

With the ascendance and spread of lethal drones, government officials have effectively permitted the current state of technology to dictate morality, rather than subjecting proposed tools to scrutiny before using them. This is most plausibly a result of the fact that the experts to whom politicians defer on these matters are invariably either military officers or persons with ties to military industry. Indeed, many military officers end up serving on the boards of weapons manufacturing and military logistics firms. The revolving door between government service and industry is evident in cases such as those of Dick Cheney, James Mattis and Lloyd Austin, all of whom served as secretary of defense and also sat on the boards of private military companies with sizable government contracts. From the perspective of military experts, whose focus is upon winning wars through maximizing lethality, the development of remotely piloted aircraft (RPA) has naturally been regarded as a boon, offering the possibility of combating the enemy without risking soldiers’ lives.

Yet in the development and spread of remote-control killing technology, important ethical considerations have been overlooked. First, during regular combat warfare, when troops are placed in dangerous situations, where “kill or be killed” becomes a prudential maxim for survival, many acts of killing can be construed as literal acts of self-defense. Whether or not the troops should be there in the first place, as in Iraq or Vietnam, is another matter altogether, but if a soldier is already in a perilous theater, with enemy combatants lurking around every corner, then the pretext of self-defense becomes reasonable. The same cannot be said for acts of killing perpetrated by soldiers sitting in trailers in Nevada, who are not being directly threatened by their targets.

U.S. combat soldiers on the ground in both Vietnam and Iraq killed many people who might have been insurgents but proved not to be. The veterans of those conflicts suffered enormously as a result, and many ended up permanently wrecked by the experience. Soldiers who use drones to target the enemy are far from the bloody fray and physically safe from the dangers of the “battlefield” on which they fire. Nonetheless, drone and laser sensor operators such as Brandon Bryant abandoned the profession after having become disillusioned with the disparity between what they had signed up to do (defend their country) and what they ended up doing, killing poor tribesmen living out in the middle of nowhere who were not threatening anyone with death at the time when their lives were abruptly ended.

Because drone operators follow and observe their victims for extended periods of time, and witness their anguish in the aftermath of strikes as they bleed out, they have been prone to suffer bouts of regret and develop post traumatic stress disorder (PTSD) despite never having been directly endangered themselves. Such reflective servicepersons furthermore recognize that collateral damage, said to be unavoidable in the “fog of war,” is truly excusable only in a life or death, do or die, dilemma. Up to now, what the drone and laser operators had to fall back on was the fact that they were not in a position to be able to assess the value of the intelligence used to select targets. Their job was to locate and kill the person(s) said to warrant elimination by officers higher up in the chain of command. Accordingly, when mistakes were made, the blame ultimately rested with the analysts who had built the case for targeting on the basis of evidence gathered by drones, obtained through paid informants, and mined from cellphones. In other words, even if the drone operators themselves regretted having killed persons whom they themselves did not believe deserved to die, based on their own observation of the targets, some among them were still able to assuage their conscience by invoking the tried-and-true “invincible ignorance” line, according to which soldiers are not to blame when negative consequences arise from their having executed what to all appearances were legal orders.

But surely intelligence analysts, too, may suffer regret when obviously (or even possibly) innocent people are destroyed on the basis of the analysts’ marshaling and interpretation of the available data. Why not, then, take the fallible human being out of the loop altogether, thus minimizing both the possibility of error and the human vulnerability to emotions which sometimes culminates in PTSD? If it was better to have soldiers in trailers in Nevada kill thousands of terrorist suspects throughout the Global War on Terror than to have pilots fly dangerous combat missions, would it not be better still to relieve all parties involved of the burden of having killed?

Despite the moral dubiousness of killing “enemy soldiers” who are not directly threatening anyone with harm, and a fortiori in countries where there are no allied combat soldiers on the ground said to require force protection from above, remote-control killing technology continues to be refined and extended with the aim of making drones both more efficient and more lethal. Consequently, a world in which robots “decide” whom to kill, as in dystopic films of the twentieth century such as Terminator, Robocop and their sequels, is no longer the mere fantasy of writers of speculative fiction. Lethal Autonomous Weapons Systems, with the proper-sounding “LAWS” as their acronym, are currently being pursued as the best way both to keep soldiers off the battlefield and to minimize the errors invariably committed by all-too-human operators in drone warfare. From a purely tactical perspective, an obvious benefit of LAWS is that with this new technology, which takes human beings “out of the loop,” when mistakes are made, there will be no operator who must bear the burden of knowing that he killed people who did not deserve, much less need, to die. Indeed, arguably the most significant benefit to the military in rolling out LAWS will be the elimination of PTSD among drone operators who deeply regret their participation in the serial, mass killing of persons who posed no direct threat to their killers when they were incinerated by missiles launched from drones.

With LAWS, the responsibility for mistakes made can be almost completely diffused, for computers will not only gather and analyze the data, but also select the targets on the basis of that data, and then launch the missiles themselves. The magnitude of the mistakes made will vary from case to case, but so long as human beings are involved in the construction and programming of the machines used to kill, the potential for error will obviously remain. There may still be a bit of room left for soul-searching among those who programmed the computers, but they will always be able to absolve themselves by pointing to the inherent limitations of data collection. Without perfect information, mistakes will continue to be made, but the lengthier the causal chain, the fewer individuals there will be who feel the need to shoulder any blame.

From a tactical perspective, all of this may sound very logical and clearly better than having soldiers risk their lives, and analysts and operators suffer psychological distress upon learning that they contributed to the carnage when innocent persons are erroneously destroyed. The first premise in the inexorable march toward Lethal Autonomous Weapons Systems, however, namely that the killing will happen with or without human operators and analysts, needs to be subjected to scrutiny. What has propelled the mad rush to develop and implement LAWS is the false assumption that the killing needed to happen in the first place. The governing idea has been that because the persons being targeted have been determined to be potentially dangerous, they might undertake to threaten people at some future time, if they are permitted to live. In other words, the victims are being preemptively eliminated, following the reasoning used to promote the 2003 invasion of Iraq, when the warmakers claimed that Saddam Hussein posed a threat to the world because of his alleged possession of weapons of mass destruction (WMD). That pretext was of course later found to have been false, along with others, including the claim (obtained through torture) that the Iraqi dictator was somehow in cahoots with al Qaeda. Yet the war went on all the same, with some pundits and war supporters filling the justificatory void with the tried-and-true need to spread democracy.

In the maelstrom of the wars on Afghanistan and Iraq, assassination was simply rebranded as targeted killing, when in fact both practices involve the intentional, premeditated elimination of persons deemed potentially dangerous. This criterion is so vague as to permit the targeting of virtually any able-bodied person who happens to be located in a place where terrorists are suspected to be. The only differences between assassination and targeted killing are the nature of the weapon being used and the fact that soldiers wear uniforms, while undercover assassins and hitmen do not. But are these differences morally relevant?

Unfortunately, over the course of the more than twenty-year Global War on Terror, there has been no attempt to reckon with the facts. But if the war on Iraq was a violation of international law, then every person killed in the conflict was the victim of a crime. Because of the shock of the events of September 11, 2001, however, most of the people who pay for the military’s killing campaigns have gone about their business, allowing the government to use their tax dollars to kill people who had nothing to do with the terrorist attacks, and in many cases were protecting their own land from illegal invaders. Twenty years on, the military continues to kill people when and where it pleases under the pretext of the need to fend off the next terrorist attack. That millions of persons have been killed, maimed, widowed, orphaned, reduced to poverty and/or rendered refugees as a result of the ever-expanding missions of the U.S. military in the Middle East and North Africa—most of which were caused by overspill of previous missions, beginning in Afghanistan and Iraq—has been largely ignored.

The “killing machine” has been on autopilot for some time now, in the sense that lists of targets continue to be drawn up and dispatched with the killers themselves writing the history of what transpired. The wars on Afghanistan and Iraq gave rise to new terrorist groups such as ISIS, which then spread to countries such as Pakistan, Yemen, Libya, Syria, Mali, and beyond. Subsequent interventions in those lands then led to the spread of factions throughout Africa, where drone bases have been erected in several countries to deal with the problem of radical Islamist terrorism. With LAWS, the perpetual motion targeting of unnamed persons can be expected to be revved up to run even faster, around the clock, for robotic killers suffer neither compunction nor fatigue, and the success of their missions will continue to be measured by the number of “dead terrorists”, who in fact are suspects. In other words, the ethical problem with LAWS will remain precisely the same as the ethical problem with the drone program through which human operators have pressed the buttons to launch the deadly missiles.

The debate over LAWS should not be over how to make robots act as human beings might. Rather, we must pause and back up to ask why anyone would ever have thought that this rebranding of assassination as the military practice of “targeted killing” should be permitted in the first place. The fallacy in thinking that lethal drones and LAWS “protect the troops” derives from the assumption that the people being killed would have been killed had this technology never been developed. The truth, however, is that the many drone bases now peppering the earth have served as a pretext for launching missile attacks which would otherwise never have occurred. With such tools at their disposal, military and political administrators are apt to use them without thinking through the moral implications of what they are doing, specifically ignoring the long-fought advances in criminal justice made over millennia, above all, the presumption of innocence upheld in free societies the world over.

Drones were originally deployed for surveillance purposes, but it did not take long before they were equipped with missiles to provide a dual-function machine capable of both collecting data and taking out enemy soldiers based on that data. Most of the individuals eliminated have not been identified by name, but in some cases specific persons have been hunted down and killed, as in President Barack Obama’s targeting of U.S. citizens Anwar al-Awlaki and Samir Khan in Yemen in September 2011, and Prime Minister David Cameron’s killing of British nationals Reyaad Khan and Ruhul Amin in Syria in August 2015. More recently, on January 3, 2020, President Donald Trump targeted top Iranian commander Qasem Soleimani, who was located in Baghdad at the time. Trump openly avowed that the act of killing was intentional and premeditated. According to the president, the major general was responsible for past and future attacks against the United States. All of these eliminations of specific, named individuals would have been considered illegal acts of assassination in centuries past but are today accepted by many as “acts of war” for the simple reason that they are carried out by military drones rather than spies.

The ethical problems with lethal drones have been raised many times by activists, who have protested the killing of persons in countries such as Pakistan, with which the United States is not even at war, and also by successive U.N. Special Rapporteurs on Extrajudicial, Summary or Arbitrary Executions (Philip Alston, Christof Heyns, et al.). The rapporteurs have repeatedly cautioned that the right to kill anyone anywhere at the caprice of the killers, which the U.S. government has assumed in its wide-ranging drone-killing program, can only sabotage the prospects for democracy in lands where leaders opt to eliminate their political rivals, facilely denouncing them as “terrorists” while pointing to the precedent set by the United States, the United Kingdom, and Israel. Needless to say, the literal self-defense pretext does not hold when leaders choose to use remote-control technology to hunt down and assassinate specific persons rather than charging them with crimes and allowing them to be judged by a jury of their peers. But, just as in the case of unnamed targets, when the victims of drone strikes are identified by name, they are invariably labeled terrorists, with no provision of evidence for that claim.

With LAWS comes the specter of fully normalized political assassination with no territorial boundaries whatsoever. The question, then, is not “how do we devise the best algorithms with which to program robotic killers?” Instead, we must ask why homicide should be used in cases where the decision to kill is clearly not a last resort, as it never is in drone killing outside areas of active hostilities, because no human being will perish if the missile is not launched. In expanding the drone program, the Obama administration carried out many “signature strikes,” where the precise identity of the targets was not known but their behavior was said to be typical of known terrorists. In addition, cellphone SIM card data was used to identify persons who had been in contact with other persons already believed to be terrorists or found to have connections to known terrorist groups. To execute persons on the basis of such circumstantial evidence of the possibility of complicity in future terrorist acts is a stunning denial of the human rights of the suspects, and flies in the face of the democratic procedures forged over millennia precisely in order to protect individual persons from being killed at the caprice of those in positions of power. This drone-killing procedure in fact exemplifies the very sort of tyranny which finally led Western people to abolish monarchic rule and establish republican constitutions protective of all citizens’ rights. As the mass collection of citizens’ data continues on, such moral concerns are more pressing than ever before, for political leaders may decide to use their trove of intelligence to eliminate not only citizen suspects located abroad, but also those in the homeland.

What needs to be done is manifestly not to make machines more efficient and lethal killers. Instead, we need to revisit the first premises which were brushed aside in all of the excitement over the latest and greatest homicide technologies deployed in the twenty-first century, when the U.S. government was given free rein to pursue the perpetrators of the crimes of September 11, 2001. That license to kill with impunity was never revoked, and to this day the drone killing machine continues to be used to destroy people who had nothing whatsoever to do with what happened on that day. With the diffusion of responsibility inherent to LAWS, a truly dystopic future awaits, as the criteria for killing become ever more vague and no one is left to answer for the mistakes.

In Defense of Statues and Other Texts—All of Them

There has been a lot of discussion, and some action, on the question of whether statues portraying or representing men currently regarded as scoundrels by self-styled “good people” should be permitted to stand. On its face, such a view would seem to imply that many of the public squares and buildings of the great cities of the world must be razed, which strikes me as a reductio ad absurdum. Pick any leader you like (Churchill, Truman, De Gaulle), in any country, at any time; look closely enough at his record and you will find dubious decisions made with deplorable consequences. The leaders who saved the world from the Nazis may be considered heroes today, but that does not imply that they were somehow flawless, as is nowhere more obvious than in what happened in the Soviet Union after World War II, when millions of Russians became the victims of a regime which had worked with the governments of the United Kingdom and the United States to halt Hitler’s mad quest to conquer the world.

Looking at more recent leaders, there are a good number of libraries, institutions, and buildings dedicated to men such as George H. W. Bush, George W. Bush, and Tony Blair, who together wrecked Iraq after concocting bogus pretexts for the invasion of a sovereign nation. For his part, Barack Obama attacked Libya before leaving it in shambles and dramatically increased the use of lethal drones to kill suspects abroad, including U.S. citizen persons of color who were executed without indictment or trial. Obama also dropped bombs throughout his two-term presidency (an average of seventy-two per day in 2016), targeting seven different countries across the Middle East and Africa. The military policies of each of these men have caused untold human misery, yet buildings and foundations continue to be named after them.

Those who wish to raze statues and rename buildings are for some reason not talking about their contemporaries, and the idea of prosecuting men such as Bush, Blair and Obama at the International Criminal Court (ICC) for crimes against humanity does not seem to cross their minds. Indeed, we find celebrities such as Ellen DeGeneres and former first lady Michelle Obama entirely willing to overlook the war crimes of their buddy George W. Bush. President Obama himself opted not to prosecute those responsible for the Bush administration’s widely decried torture of human beings at Abu Ghraib, Bagram, and Guantánamo Bay prisons, among other places. Obama claimed, “That’s not who we are,” but effectively left torture as an option on the table for other administrations, including his own. He also “solved” the problem of the extended detention and mistreatment of terror suspects never charged with crimes by defining them as guilty until proven innocent and incinerating them with missiles launched from drones.

Remarkably, despite the horrors perpetrated under their watch, the esteemed opinions of George W. Bush, Tony Blair, and Barack Obama continue to be sought out. As far as I can tell, many people are entirely ignorant of the foreign policy record of Barack Obama, whose reputation seems to have received a big boost from the brash and boisterous demeanor of his successor. Mention Libya to a fan of Obama, and you are likely to receive a puzzled look in response. It was not, of course, Obama’s intention to catalyze a resurgence of black African slave markets in Libya through his ousting of Moammar Gaddafi in 2011, but that was nonetheless one of the consequences. When it comes to relatively mild-mannered men such as George W. Bush and Barack Obama, the prevailing prioritization of intentions over consequences translates smoothly into a willingness to forgive the perpetrators of catastrophic campaigns of mass homicide along the lines of the tried-and-true just war line: They meant to do good. So powerfully does the assumption of good intentions among compatriots hold sway over people that even Henry Kissinger, despite his role in perpetrating and perpetuating the Vietnam debacle, which resulted in millions of deaths, has managed somehow to continue to be revered, at least in some circles.

The same charitable interpretation is not, however, extended to the men whose effigies have been damaged or destroyed all over the United States in something of a mad frenzy to decry them as evil, while highlighting the protesters’ goodness by contrast—if only to themselves. Dozens of statues and monuments have been vandalized—spanning the time period from Christopher Columbus to Ronald Reagan—but the “cancel culture” crowd has focused especially on what have been interpreted to be the racist overtones of effigies of Confederate soldiers and officers from the Civil War, as a result of which slavery was finally abolished. As educated people know, the Civil War did not commence as a simple one-issue battle over whether slavery should be permitted, any more than the United States entered into the mêlée of World War II “in order to save the Jews.” (What went on at the concentration camps was discovered upon, not before, the liberation.) At the end of a conflict, when history is written by the victors, moral motivations are invariably emphasized over what were originally political reasons for taking up arms. In the case of the Civil War, economic objectives among secessionists and federalists, including President Abraham Lincoln, were what gave rise to the war. Nonetheless, the abolition of slavery is naturally viewed as a felicitous consequence of the loss of the war by the Confederate army.

I am not interested in debating the virtues and vices of the many men throughout history who held slaves, as did some of the founding fathers of the United States, but would like to suggest, rather, that calls for the destruction of statues and the metaphorical burning of texts, better known as censorship, are misguided. This is, first, because such works have always and everywhere been the result of intelligent human beings’ acts of creation. It is true that nearly no one knows anymore who created the vast majority of public squares and statues. In the case of structures erected in ancient and medieval times, it is plausible that they were produced under duress by persons enslaved, because that’s how things were done back then. But the fact that the Roman Colosseum was built with the blood, sweat and tears of slaves made to realize the vision of non-slaves (Emperors Vespasian and Titus) does not imply that the structure should be erased from the face of the earth. To do so would accomplish nothing beyond depriving the world of memory traces of centuries past.

What we know about the more recent structures being defaced is that their production involved the industry and creativity of artists who were not slaves. There is a reasonable sense in which the statues can be viewed as works of art rather than effigies to bad men who no longer exist or insane calls to make slavery legal again. The idea that destroying such works will somehow diminish racism rests on the entirely false view according to which an artifact has a single, definitive, immutable interpretation. This is a profound misunderstanding of the nature of art. A statue of a Confederate soldier can be as much a tribute to all soldiers who fight and lose as it is to any individual racist’s beliefs. Given that at least half of all soldiers are fighting for an unjust cause (which follows logically from the fact that at most one side can be right—though both can be wrong), the concept of “invincible ignorance” has throughout history been invoked to protect the soldiers on the losing side who followed what to all appearances were legal orders in promoting a mistaken leader’s cause. Again, a fine-grained study of history reveals that many men who fought on the side of the Confederacy were not doing so out of hatred for African Americans and were not in fact slave owners. Moreover, there were racists among the Union army’s ranks, and some among them did own slaves. It is true that, had the Confederate army prevailed, slavery would likely have lasted longer in North America than it did. It is also true, however, that many other countries managed to abolish slavery without such a bloody and protracted war.

When we allow one person’s or one small group’s interpretation alone, the least charitable of all possible interpretations, to dictate the meaning of a work, we are allowing them to delimit reality precisely in the manner of a tyrant. Works of art, like written texts, are by their very nature subject to multiple interpretations, and their meaning to an individual comes finally to be a function of that person’s background beliefs. Non-didactic artists and writers may not even know what “message” they were attempting to convey when they created their works, and they often discover meanings only after the fact, while considering the object from the perspective of a reader/interpreter. The fact that slavery was widespread throughout the ancient world does not imply that all ancient texts should be erased and artifacts destroyed. The fact that Aristotle may have believed that women were only partial persons does not imply that we must interpret his every word as expressive of that idea. Nor should the fact that a few pseudo-intellectual Nazis took the work of philosopher Friedrich Nietzsche as supportive of their ridiculous Weltanschauung lead us to burn all of Nietzsche’s books.

In a truly liberal society, such as that championed by nineteenth-century English philosopher John Stuart Mill, according to whom the “marketplace of ideas” is a cornerstone of democracy, the possibility of learning and correcting the errors in our ways mandates that we remain receptive to other people’s points of view. Even if we find the views of others despicable, this does not imply that they should be effaced. Practically speaking, it would be highly imprudent to prevent everyone who disagrees from speaking, because then we would never know who our true enemies are. But the primary reason for opposing censorship is that no censor has a monopoly on the truth. We are all wallowing in beliefs forged through a random assortment of arbitrary interactions, processing information as we encounter it, accepting some claims while rejecting others, based, ultimately, on how they cohere with our current worldview.

In Ray Bradbury’s classic novel of speculative fiction, Fahrenheit 451, the government is being run by leaders who believe that they are right and anyone who disagrees is wrong, and that books which promote other, “dangerous” worldviews must be destroyed in order to prevent them from poisoning people’s minds. The most famous works of dystopic fiction, including George Orwell’s 1984 and Aldous Huxley’s Brave New World, also caution against the perils of permitting small committees of fallible people to delimit the contours of reality as they please. Reading such works, in which censorship is a key component of governance, it may seem obvious that the worlds depicted are anti-humanist. These are worlds where intelligent persons are treated as criminals for having ideas which differ from what has been deemed the proper way of viewing things. Despite the popularity of such works of fiction in recent times, an ever-increasing number of people have come to believe that we should topple the statues of “bad men” and remove classic works of literature from school curricula. The next step down a slippery slope should have been predicted when all of this began: some people should not be permitted to speak in the public square, for their words may be dangerous.

The recent removal of President Donald Trump from both Twitter and Facebook, on the grounds that he allegedly used the platforms to foment violence, specifically, the raucous protest at the U.S. Capitol on January 6, 2021, has been applauded by self-proclaimed “liberals” all over the planet, despite the fact that Trump explicitly dissociated himself from those who perpetrated the violence, and publicly denounced it as wrong. Note that no one held Democratic party leaders or candidates responsible for the many violent protests and widespread looting in cities all over the United States throughout 2020. Yet Michelle Obama, who in befriending George W. Bush let war crime bygones be bygones, wrote a two-page letter exhorting the Big Tech social media giants to permanently bar “infantile” Trump from sharing his words with the world, claiming that he was responsible for what happened on January 6. Her sentiments were echoed and amplified throughout the mainstream media, with many people parroting the refrain that it is the prerogative of private companies to conduct their businesses as they please, and offering a pithy response to those who disagree: “If you don’t like it, leave.”

It is true that the Big Tech companies are not bound by the First Amendment to the United States Constitution, which protects the free speech of citizens. Twitter and Facebook have been censoring, “curating” content, and banning users for years now. In recent times, and most aggressively during the 2020 election cycle, they have adopted the measure of attaching “child safety” warnings to posts, alerting users that content is dubious according to the company’s fact checkers. What happened in this case, however, was startling because one of the alternatives to Twitter, Parler, where people banned from Twitter or annoyed by their censorship or “child safety” warnings sometimes migrated, was removed nearly immediately from the Google, Amazon and Apple app stores, effectively suppressing the speech of everyone who did not like Twitter and left (as they had been told to do). The symbiosis between the government and Big Tech became undeniable when incoming President Biden tapped Facebook executives to be a part of his cabinet, and one hopes that lawsuits charging monopolization will put an end to the squelching of alternative views and ensure that history, though continuously rewritten by the victors, will not be entirely erased.

Donald Trump’s tweets were certainly a bizarre phenomenon in U.S. political history and surely one of the reasons for his polarizing presence the world over, with people revering and reviling him in close to equal numbers—at least judging by the outcome of the 2020 presidential election. For Big Tech to effectively uphold Hillary Clinton’s “deplorable” trope in favoring one half of the country over the other can only exacerbate the deep divisions already on display. Michelle Obama is entitled to her opinion, but so are the more than 74 million people who voted for Trump in the 2020 election. Donald Trump was the president of the United States for four years, and his often emotive Tweets are historical texts, for better or for worse, which should be accessible to all.

Unfortunately, the so-called liberal people who demand and applaud the silencing of Trump do not appear to understand what they are advocating. They do not appreciate that humankind has progressed over millennia as a result of a lengthy experiment in testing out various ways of looking at the world. To claim, for example, that the works of Mark Twain should be censored because they contain the word ‘nigger’ is to forget that societies have changed in part as a result of people’s having read such books. It is in fact arguable that to remove a statue of Robert E. Lee would do no more than to hasten the process of forgetting how wrong human beings were, for most of history, in upholding slavery, not only in the United States, but all over the world. Likewise, from an antiwar perspective, every monument to World War I (and they exist in many different countries) stands as a testament to the sheer insanity of sending millions of young men to their deaths for essentially nothing. Again, one look at the Vietnam Veterans Memorial in Washington, D.C., where the names of the many men who died in a misguided war are inscribed, serves as a reminder not to make the same mistake all over again.

To oppose the razing of historical structures on the grounds that they were commissioned by or represent “bad” people is not to deny that communities have the right to decide how to decorate their public spaces. Sometimes sculptures in parks and squares are removed or replaced, but to pretend that unilateral acts of vandalism will reduce racial tensions is delusive in the extreme. It is both puerile and false to assume that a single interpretation exhausts the value of any work of art or text. The truth is not a function of whatever arbitrary group of people happen to have ascended to the corridors of power or whatever angry protesters have taken it upon themselves to wreck what they happen not to like, based on their own, necessarily limited, interpretations. Rational adults are aware that all people are fallible and that no one can pass strict “purity” tests.

Should the Roman Colosseum, which was not only built by slaves but also used as an arena for gladiator battles between slaves forced to fight to the death for the entertainment of the upper classes, be pulverized, and the land on which it stands be turned into a parking lot? That is the direction in which this childish movement is going. To cancel culture is to cancel the extant evidence of the process human beings have gone through to get where they are today. The Nazis attempted such a cancel culture project, denouncing modern forms of art as “degenerate” and permitting only aesthetic views in conformity with their delusive project of furthering the Aryan race. Some may take offense at my comparison of these two cases, but they differ only in details, not in approach. Both define cultural history as having effectively ended with the current view of those who would erase the past and dictate all that may exist in the future. Both are obtuse, shortsighted and small-minded.

Despite their foibles and deficiencies and myopia and biases, some human beings nonetheless make the effort to contribute their intellectual energy to produce new works and texts for our consideration. We need not and will not like all of them, but to say that we should destroy them is tantamount to the tyrant’s decree “Off with their heads!” in response to the annoying dissenters who may emerge among his ranks. The censors tomorrow may not agree with your interpretations and may decide that you need to be canceled at their caprice. The grandest irony of all is that if such a view had prevailed in the pre-abolition United States, with the suppression of the texts and speech of all those who disagreed with the laws of the land at that time, then slavery would never have been abolished.

Welcome to Zombie Pharm

Imagine a world where you were required to cover your mouth and nose whenever you stepped out of your home or when anyone came to your door. No one would know when you smiled or frowned, and the difficulty of communicating the ideas you attempted to share would eventually deter you from saying much of anything at all, frustrated as you would be by the annoyance of always having to repeat yourself. There would be no point in asking anyone questions requiring more than a “yes” or “no” answer, because more complicated replies would be muffled by their masks and not worth the time and effort needed to decipher.

Imagine a world where all inhabitants of a city, state or country were told where they could go and what they could do, not only in public, but also in their homes and in privately owned businesses. Healthy persons would be quarantined to prevent other people from becoming ill. Good citizens would be enlisted and surveillance and tracking apps used to identify anyone who refused to abide by emergency lockdown and curfew orders or the required hygiene measures. Small business owners would be fined for attempting to run their businesses or neglecting to enforce emergency laws. Employees would be arrested for attempting to go to work.

Imagine a world where private tech companies collaborated with government bureaucrats to censor your written speech. You would not be permitted to share texts which conflicted with the official story of whatever the authorities claimed that they had done and were doing and wanted you to believe. You could still write texts, on your own computer, but there would be nearly no one around to read what you had to say. The censors could not suppress texts faster than they could be written and shared, however, so some would slip through. This would necessitate visits from the state police to the homes of those who had attempted to incite violations of any emergency laws which happened to have been enacted by administrators to protect their constituents. Whether or not the measures actually helped anyone would be entirely beside the point, because everyone knows (from all of the “just wars” throughout history) that all that matters is the lawmakers’ publicly professed intention to do good. The authors of what were deemed dangerous texts would be arrested and taken away, if necessary, by force.

Imagine a world where journalists were required to promote the official government line in order to keep their jobs. No text or report which reflected poorly on the military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex would be allowed. A few of those who “indulgently” refused to comply might then impudently begin their own publications, such as The Intercept, issuing their interpretation of what was going on in the few sequestered places (made difficult to find by Google) where independent journalism was still possible.

Imagine a world where the independent media, too, had been infiltrated by persons keen to hold the line, to defend what they had been persuaded to believe (by hook or by crook, carrots or sticks) must be upheld as the truth. Anyone who attempted to share inconvenient “disinformation” or “fake news”, as it would be denounced, would then have their work edited to conform with “the official story”. The “traitors” (as they would be characterized) who disagreed would have two choices: either to stop writing or to flee to another place with even fewer readers than before, such as Substack.

Imagine a world where publishers who revealed crimes committed by governments would be subject to criminalization: arrest, incarceration, isolation, extradition and more. Those who exposed murderous crimes would themselves be treated as though they were violent criminals, even when they had never in their lives wielded any implement of dissent beyond a pen.

Imagine a world where oppressive lockdown and curfew policies were said to be necessitated by case surges of “infections” in persons many of whom, while testing positive, manifested no symptoms at all. Suppose that the tests being used were revealed to be notoriously inaccurate, by some estimates, 90% inaccurate. Yet the testing continued on, ever faster and more furiously, and the case surges would serve as the basis for preventing healthy people from living their lives. When vaccines emerged, everyone who tested positive before but survived would still need to be inoculated, because, the “Listen to the Science” crowd would insist, it might be possible to become reinfected. People who had already survived the dreaded disease would only know that they were safe and not a menace to public health if they took the new vaccines, whatever they were, and whether or not they had been demonstrated to prevent infection and transmission, and no matter what the unknown side effects might be. Because, obviously: Science.

Imagine a world where people with life-threatening diseases were required to postpone their treatment because another disease, 95% of whose victims were octogenarians or older, had been designated by select “expert” epidemiologists as more dangerous and life-threatening than cancer, heart disease, stroke, and the other top killers of human beings. Imagine a world where distraught and desperate people reduced to poverty and rendered homeless through not being permitted to work began turning to deadly drugs such as heroin, sometimes laced with fentanyl, with the result that, in some cities (such as San Francisco), more persons died of overdoses than of the disease serving as the pretext for the laws forbidding those people from working.

Imagine a world where citizens were required to undergo medical treatments not known to prevent disease but believed to alleviate the symptoms of a disease from which the vast majority of humanity suffers only minor symptoms. This would be undertaken in the name of public health, but the effect would be to harm some of those who were not vulnerable to the disease and essentially had been tricked or coerced (since uninformed “consent” is not really consent) into volunteering as subjects in an enormous experimental trial with the aim of determining the outcome of introducing into human bodies certain foreign substances deemed potentially profitable by the companies which produced them. Most people would line up enthusiastically for such vaccines on the basis of widely disseminated claims of 90% and 95% efficacy lauded by well-respected experts, with details about the “known unknowns” and “unknown unknowns” available only in the fine print of a few more nuanced articles which nearly no one read.

Imagine a world where people who had already survived the dreaded disease and also had been vaccinated were nonetheless required to abide by all of the ongoing hygiene measures, from wearing a mask, to staying home, to taking more vaccines on a schedule determined by their government. There would be no need to explain what any of this was for because all good citizens would already be accustomed to reciting the mumbled refrains (behind their masks): “Extraordinary times call for extraordinary measures!” and “We’re all in this together!” Everyone would have to comply, everyone would need to be quiet, everyone would be required by law to roll up their sleeves for the clearly compelling reason that a global pandemic was tearing through the world like a tsunami, wiping out everyone in its path. Except that most of the victims were dying at the same rate and age as the actuarial tables would have predicted even if the culprit virus had never arrived on the scene. And the death toll over the course of the year would be about the same as for any other year, but with a slightly different distribution in causes of death.

Imagine a world where you were required to present your health record on demand and you would not be permitted to enter stores, restaurants, schools, to work or to travel without first proving that you had agreed to participate in an experimental vaccine trial for a disease from which you were at minimal risk of harm. The local health authorities would determine when you needed to present yourself again, for a new treatment, as the virus in question could morph unpredictably over short intervals of time into something else, thus necessitating that you and everyone else on the planet prepare your bodies once again, just in case this time around it might be more dangerous to you and those around you.

Imagine a world where a healthy person’s refusal to undergo medical treatment for a possible future disease to which he was not vulnerable, according to all available statistical indicators, was taken as proof of his suffering from another disease, Oppositional Defiant Disorder (ODD), as clearly indicated in the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM). The person thus diagnosed would be required by law to submit to whatever medication would make him more amenable to the other forms of medical treatment to which he was opposed because obviously there would be something very wrong with him, constituting as he would a grave danger to public health.

Imagine a world in which children were taught from an early age that it was unsafe to touch other human beings or to be touched by them. They would be required to wear masks and full-face plastic shields and to wash their hands frequently and to attend school by video conference because, they would be sternly instructed, otherwise they might kill somebody else’s parents or grandparents, even though they themselves were not ill. If the children found any of this a source of anxiety, they would be prescribed psychiatric medications to transform their view of the world, so that they would accept rather than reject what they were told were “the new normal” contours of reality.

Imagine that everyone around you embraced all of the above and undertook public shaming campaigns against anyone who disagreed. Their faces would turn red and they would shriek in righteous indignation, “Listen to The Science!” whenever anyone attempted to point out the manifest absurdity of what was going on. They would denounce as degenerate, anti-science, anti-vax ignoramuses anyone who pointed out that pharmaceutical firms are profit-driven, publicly traded companies whose success depends on their ability to develop, produce, and market new wares.

Beware the ‘Nurse Ratched’ State

Advocates of minimal government have often warned against “The Nanny State,” which rears its ugly head whenever bureaucrats try to tell people what they should do and how they should live. There is a sense in which all governments do that, through the very enactment of laws, but Nanny-leaders mete out prescriptions which vastly exceed what can be fairly portrayed as an attempt to protect people from one another. An extreme example of this sort of overreach occurred in the United States during the Prohibition Era, with catastrophic consequences. Not only did outlawing the enjoyment of alcohol not prevent people from drinking, it actually catalyzed a massive expansion of organized crime all over the United States, as career criminals stepped in to provide people with the means needed to imbibe. No one wants to go to prison, which is why murder was on the rise throughout Prohibition, with blood flowing in some cities nearly as freely as whiskey and wine.

Such unintended consequences have arisen wherever recreational drugs have been outlawed, and experiments such as the one in Portugal, where drug-related deaths diminished significantly after decriminalization, may have helped to propel some in the United States to accept the legalization of marijuana. The state of Oregon recently went even further, by legalizing possession of small amounts of hard drugs as well. Just as economics played a major role in putting an end to the thirteen-year Prohibition fiasco, the voters of some states may have been persuaded to permit recreational drug use after having seen the massive tax revenues being collected through pot shop sales in states such as Colorado. Whatever the reasons may have been, the slow dismantling of the legal framework undergirding the “War on Drugs” is certainly a welcome development to anyone who rejects the Nanny State.

The trend toward tolerating alternative lifestyles more generally, however, conflicts starkly with what else has been going on in 2020, coincidentally one century after the Volstead Act took effect. Policymakers attempting to save people from COVID-19 have pulled out all the stops—going above and beyond, in their view—to protect their constituents by issuing new and ever-changing edicts about how people ought to behave. This might be more tolerable if there were any genuine benevolence on display. Instead, what we are witnessing is an increasingly despicable effort to blame citizens for the failure of policies implemented in response to the arrival of the virus on the scene. When restrictions intended to stop the virus are imposed but cases and deaths then increase rather than diminish, those crafting the new rules take this as proof that citizens did not in fact do as they were told, and that they are, therefore, responsible for the current state of the health crisis.

I have been in Austria, Wales, England and the United States over the course of 2020, and in each of these countries I was surprised to find the very same finger-wagging reproach of citizens by government administrators who wish to blame what is manifestly nobody’s fault on somebody else. All over social media, angry mobs continue to lash out at those who refuse to stay home or “mask up,” and many government leaders now address their constituents as though they were toddlers or, perhaps more aptly, the residents of Nurse Ratched’s ward.

This is a strange conception of government, according to which politicians do not work for the people who pay their salaries but instead are their guardians, who alone can decide what the populace may and may not do. The phenomenon is not unique to the dictators-in-waiting who run states such as California and Michigan. Citizens all over the world are continually being threatened by government officials that if the case numbers do not go down, then lockdowns will be ordered or tightened, and more businesses will be closed, and further restrictions imposed, as though anything anyone does at this point has an effect upon a virus which is nearly everywhere and beyond anyone’s means to control. This punitive paradigm may have been possible to uphold with a straight face until late October, while many on the cacophonous COVID-19 caravan in the U.S. and the U.K. ceaselessly carped about their own incompetent governments’ response, contrasting it with the approaches of the admirable leaders of countries in the European Union and Oceania, who obviously knew what they were doing!

But then along came the resurgence of cases in Europe, particularly in countries which had been held up for months as shining examples of how a government ought to manage the crisis. Germany had tough lockdowns, mask requirements and probably the best contact-tracing program around. They restricted the entry of people from any country with an unacceptably high “infection rate” (scare quotes are necessary given the widely acknowledged problems with the PCR tests), and anyone at the border who did not present proof of not being COVID-19 positive was either quarantined or turned away (some were also fined). So how does one explain the new wave of “infections” all across Europe? It must be the case that the naughty plebeian Europeans were lying about their contacts, meeting in large gatherings, and brazenly violating social distancing and mask ordinances. None of the case surges throughout the Northern hemisphere has anything whatsoever to do with the fact that more people invariably fall ill with the onset of winter.

In the U.K., Prime Minister Boris Johnson issued in November a nationwide month-long lockdown order in response to a resurgence of cases which villagers tended to blame on the haughty Londoners—who obviously had been flouting the rules by partying and congregating in pubs and then spreading COVID-19 dust everywhere they went—from England to Wales to Ireland to Scotland, and back again! That was, however, not my impression. What I found upon my arrival in London at the end of October (before the new lockdown) were empty streets, shuttered stores, and restaurants and pubs with very few patrons. Realty signs were all around, and the place looked frankly like a ghost town. My train from Norfolk to London was nearly empty, as were all of the trains I took in the U.K. from July to November, when I finally decided to leave in exasperation at the abrupt and arbitrary cancellation and closing of any- and everything I might want to do and see.

Throughout this crisis, not only the governors of Democratic states in the U.S. but also the prime minister of Australia and the health minister of the U.K. have exemplified the Nurse Ratched mode of governance, repeatedly threatening their constituents with ever-sterner measures should the epidemiological situation not improve, under the assumption that case surges decisively demonstrate not that the policy initiatives were worthless but that people were not following the rules. Sadly, many citizens, terrorized by the mainstream media’s nonstop fear-mongering about COVID-19, have accepted this absurd blame game, which has broadened what was already, long before March 2020, the chasm dividing a populace torn in two. Unfortunately, the situation is likely to get much worse as those who blithely agree to do as they are told become increasingly intolerant of those who refuse to do the same. Yes, the small paper cups on trays will be coming your way soon. What will you do? People are already taking sides, and the ironies continue to multiply.

Leftists have often wielded the slogan “My Body My Choice” in protesting any attempts by the government to limit a woman’s right to obtain a safe abortion. It is highly ironic, then, that some among them should now be agitating vociferously for the universal vaccination of people worldwide against COVID-19. The “Listen to the Science” crowd immediately shuts down anyone who dares to suggest that the decision about whether to allow foreign substances to be injected into their own body should remain the prerogative of individuals themselves. They denounce as “antivax” anyone who resists the call to vaccination, even those who are not vulnerable to the disease in question and have no problem whatsoever with time-tested vaccines. Those who express any hesitation whatsoever to roll up their sleeves are ridiculed as “antiscience,” even when they are in fact scientists by profession. When none of those inflammatory insults work, there is always the tried-and-true “selfishness” charge: you are a selfish, heartless human being if you are not willing to vaccinate yourself to protect other people from death.

Let us look soberly at the scientific facts, setting to one side all possible conspiracy mongering about 5G, microchips, the World Economic Forum’s “Great Reset,” chemtrails or anything else. First, COVID-19 is highly contagious but nowhere near as deadly as the pandemics of the past, and it specifically targets elderly persons with other health problems. An overall 99.5+% survival rate is not the sort of danger which would ordinarily lead a healthy young person to undertake a risky regimen to protect him- or herself. Why “risky”? At the most fundamental level, because safe and effective vaccines have always required years to produce and test, invariably involving, as they do, unknown side effects. The reason for this can be summed up in a simple, undeniable phrase: human variability.

For any trait, sensitivity, capacity, etc., found in human beings, its distribution can be plotted as a bell curve, with a tiny percentage of people occupying the extreme ends of the curve. Those people are the “outliers,” who will be much more (or less) sensitive to a particular environmental factor than is the average person. Perhaps the simplest way of thinking about this human variability and its relevance to the vaccine issue is in terms of food allergies. No one knows that they suffer from a peanut allergy, for example, until their body encounters peanuts. Similarly, a person with Celiac disease will discover this fact only upon consuming gluten. When vaccines are manufactured, they contain components with which a given person’s body may never have come in contact before. Most people will not be harmed by any of the components, as the vaccines have been rigorously tested on other animals even before human trials begin. Once extensive, long-term testing in large groups of human subjects has been completed, the company producing the vaccine can assert with confidence that the risk to patients is quite low. The risk is never zero, however, just as the risk incurred by doing anything whatsoever is never zero. There will always be some people who are more sensitive than others, and they may end up being harmed by one or another of the components of any vaccine. There is nothing mysterious or conspiratorial about any of this, and in fact it is precisely why vaccine manufacturers insist that, before distributing their product widely, they must be granted indemnity in the event of the unforeseen and unpredictable side effects upon a tiny percentage of those inoculated.

All of this to say: there is always risk involved in taking a vaccine. People decide for themselves, for example, whether or not they should take the seasonal flu vaccine, the reported efficacy of which has ranged from 19% to 48% over the past five years. This implies, according to epidemiologists themselves (not “antivaxers” or conspiracy theorists), that more than half of the people vaccinated have not been helped by the flu shot in the least. Were any of them harmed? It is difficult to say, because people become ill and die all the time, and there are usually far too many variables working simultaneously to be able to single out the cause of post-vaccine harm, particularly when the subjects are already elderly and frail. Those who sing the praises of the annual flu vaccine, including the public relations teams behind the aggressive marketing campaigns launched by governments to encourage their citizens to undergo vaccination, generally seem to believe that the efficacy rate is much higher than it is. From a consideration of the marketing material alone, one would be forgiven for concluding that the flu shot is rationally obligatory and 100% effective and safe. Once one has examined the statistics, however, there is some cause for restraint.
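A brief aside on how such efficacy figures are standardly computed (this gloss is mine, drawn from the textbook epidemiological definition rather than from any particular study mentioned here): reported vaccine efficacy is the relative reduction in the attack rate observed in a trial,

\[ \mathrm{VE} = \frac{\mathrm{ARU} - \mathrm{ARV}}{\mathrm{ARU}} = 1 - \frac{\mathrm{ARV}}{\mathrm{ARU}}, \]

where ARU is the fraction of unvaccinated participants who fall ill and ARV is the corresponding fraction among the vaccinated. A reported efficacy of 48%, for instance, means that vaccinated participants fell ill at a rate 48% lower than unvaccinated participants over the period studied.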

Just as no one should be able to force you to drink green tea because they believe that it is good for your health, and no cancer victim can be compelled to undergo chemotherapy against his own will, individuals themselves must decide whether rolling up their sleeve for the annual flu shot is a good idea or not. Those who are young and hardy will most likely survive the flu in any case, and there is a real chance that the vaccine which they take—there are multiple versions every year—will not help to combat one or another of the virus strains which they happen to encounter anyway. It is literally a gamble. There are people who maintain that they never became sicker than after having taken a flu shot, but vaccine advocates quickly sweep in to silence them by insisting that they must have already been exposed to the flu before inoculation. In fact, the only reason for believing such an explanation is manifestly that one wishes to support universal vaccination. It may or may not be true. One thing is undeniable: pharmaceutical firms are profit-driven companies, whose revenues will wax or wane with general public sentiment about the wisdom of their many-splendored cures.

The current situation is quite a bit murkier than the case of the seasonal flu shot, because most of the COVID-19 vaccines being developed employ a novel RNA technology never before licensed for use in human beings. In the vaccines which have stood the test of time (measles, polio, etc.), a tiny amount of pathogen protein is introduced into a patient’s body so that it will preemptively ready itself for an immune response in the event that the virus is later encountered. Usually the virus matter introduced is dead, but sometimes it is live, and this is by design—it depends on the pathogen and is determined through extensive experimentation. A live vaccine induces a minor bout of the disease, which is much less likely to lead to death than is an unprotected body’s encounter with the wild virus. Anecdotally, I can report that after having received an obligatory Yellow Fever vaccine (which is live) before traveling to Ghana, I was quite ill for about five days. The cause and effect was clear: I was suffering a minor bout of Yellow Fever, thanks to which my body developed the antibodies needed to protect me from the disease during my trip to Africa.

Suppose, now, that the new COVID-19 vaccines worked just as the time-tested vaccines do. In that case, before agreeing to be inoculated, a reasonable person would require some sort of assurance that the vaccine itself will be less likely to harm the patient than is the wild strain of the virus. Because the survival rate among people exposed to COVID-19 is greater than 99%, it would be prudent for a person to take the vaccine only if their prospects would be improved through vaccination. At this level of disease risk, and without any such guarantee, one may or may not wish to take an experimental vaccine. People in the vulnerable categories, advanced seniors and those who are exposed regularly to the disease in healthcare contexts, may well feel that it is worth the risk, and they will likely be first in line for the vaccines once they are made available.

It is of utmost importance to bear in mind, however, that the vaccines currently regarded as most promising for controlling the outbreak of COVID-19 do not involve the time-tested approach. Rather than introducing proteins from the offending organism (or a simulacrum), the front-runner vaccines introduce foreign pieces of viral RNA (ribonucleic acid) which will instruct the person’s own body to produce the immune system-galvanizing viral proteins itself. The presence of those pseudo-foreign proteins (coded for by foreign RNA but produced within the human body) will then initiate the needed immune response. In other words, there is an extra step involved. The foreign RNA is introduced, then the person’s body produces the proteins coded for by the snippets of RNA, after which the needed antibodies will be generated by the body in response. This ingenious scheme (if it works!) involves the human body tricking itself into triggering an immune response by producing proteins empirically indistinguishable from those of the offending virus itself. What could go wrong?

Perhaps nothing will go wrong, but the fact (of science!) remains: such vaccines have never been used in human populations before. In attempting to discuss this matter with various people (civil discourse is not always possible with the “Listen to the Science” crowd, ironically), I have been amazed that there should exist persons fully prepared to agree to totalitarian control over their very own bodies while knowing absolutely nothing about the history of vaccine development. They simply do not care that the novel vaccines are novel, nor that those who volunteer to take part in the largest experimental trial of vaccines in human history are essentially offering their bodies up as Petri dishes to pharmaceutical firms. Some vaccine enthusiasts appear not even to know what RNA is and attempt to discredit anyone who disagrees with their gurus in white lab coats (most of whom have financial ties to Big Pharma), despite the fact that plenty of published literature exists on the topic of vaccine harm. Advocates for forced universal vaccination appear to be unfazed by possible conflicts of interest and are not at all bothered by the sudden appearance of Bill Gates (whose company Microsoft violated anti-trust laws) in their social media timelines exhorting everyone everywhere to get on board with the global vaccination regime.

Beyond all of the factors relevant to new vaccines more generally, one can quite reasonably inquire, in this case, whether anyone should trust a company (AstraZeneca) which “accidentally” (through a “manufacturing error”) gave thousands of its vaccine trial participants only half of their first dose, reported a 90% efficacy figure for that half-dose group, but subsequently discovered that the efficacy rate among those who received the full dose was only about 62%. In other words, in the AstraZeneca trial in question, the less vaccine the subjects received, the better they fared. None of this is to suggest that anyone should expect laboratory technicians to be perfect, for they are human. But that is part of the gamble one takes in agreeing to participate in such a study, as can be seen throughout the history of vaccine development, which has left many bodies in its wake (mostly animals of other species, but also some human beings).

The reason why the healthy Western subjects of pharmaceutical drug trials have always been generously remunerated—in the third world they are not—is that they are risking their own well-being and even their lives by agreeing to ingest substances with unknown side effects, which cannot be predicted a priori. Indemnity clauses are always included in the contracts for those who agree to participate in experimental drug trials precisely in order to prevent any victims (or their survivors) from seeking compensation should something go awry. It is of course possible, and one certainly hopes, that the injection of foreign RNA into human bodies will not cause any lasting harm, but the unvarnished truth is that we simply do not know what the long-range and unforeseen consequences will be, because this has never been done before.

In all of the excitement over the splendid reported efficacy rates (90%, 94% and 95+%) of the front-runners in the great COVID-19 vaccine race, I have seen no mention by anyone of the survival outcomes of the placebo subject classes. Why might that be? Whenever new drugs and remedies are scientifically tested, this is done with a contrast class of subjects who are given not the treatment being studied, but a placebo substance, which is considered to be inert vis-à-vis the disease to be defeated. This is the only way to demonstrate that the remedy is more helpful than doing nothing at all. In the case of COVID-19, there are a few key factors to bear in mind. First, based on the death charts of the Centers for Disease Control (CDC), the World Health Organization (WHO), and many other institutions as well, it is evident that any placebo remedy which I myself decide to take—water, vegetables, vitamin C, quinine, even air—already has a 99.5% chance of keeping me alive, even if I am exposed to and become infected with COVID-19. I may, therefore, stick with the placebo for the entirely rational reason that its efficacy rate in keeping me alive is likely just as high as, if not higher than, that of any possible vaccine, as the simple arithmetic sketched below illustrates.
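For readers who like to see the arithmetic spelled out, here is a minimal sketch of the comparison being made, under purely illustrative assumptions: an infection fatality rate of roughly 0.5% and a reported relative efficacy of 95%, treated, for simplicity, as if it applied to mortality. The figures are not taken from any trial; the point is only that when baseline survival is already around 99.5%, the absolute difference between vaccinated and unvaccinated outcomes is necessarily small.

```python
# Illustrative arithmetic only; the numbers below are assumptions, not trial data.

baseline_ifr = 0.005        # assumed infection fatality rate (~0.5%)
relative_efficacy = 0.95    # assumed reported relative efficacy (95%),
                            # treated here as if it applied to mortality

survival_without_vaccine = 1 - baseline_ifr
survival_with_vaccine = 1 - baseline_ifr * (1 - relative_efficacy)
absolute_difference = survival_with_vaccine - survival_without_vaccine

print(f"Survival if infected, no vaccine:  {survival_without_vaccine:.3%}")  # 99.500%
print(f"Survival if infected, vaccinated:  {survival_with_vaccine:.3%}")     # 99.975%
print(f"Absolute difference:               {absolute_difference:.3%}")       # 0.475%
```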

Big Pharma’s tactic of neglecting to report on the outcomes of placebo studies for its vast array of antidepressants and anxiety remedies was for years ignored. Eventually, a few courageous psychiatrists and psychologists revealed that, for many of the best-selling psych meds prescribed to millions of people all over the world, placebo subjects fared just as well and sometimes even better than those taking the drugs, particularly in long-range studies. In other words, many people prescribed psychotropes for acute cases of depression, anxiety and grief produced by life traumas such as the loss of a loved one would have improved over time, even if they had taken no drug at all. Mention of such results was routinely omitted from reports touting the efficacy of psychotropes for the transparent reason that taking no medication does not produce any profit for drug manufacturers.

Similarly, the companies touting the virtues of their new vaccines designed to save humanity from COVID-19 make no mention of placebo class survival outcomes. Nonetheless, many people have been encouraged by the reported results, relieved that at last they will be able to get back to living their lives as they please. In reality, the current misery of healthy individuals being victimized not by COVID-19 but by political policies crafted in response to the virus has no logical connection to the invention or success of any vaccine. Rolling up one’s sleeve cannot be made a condition for ending policies which do not protect but rather harm most of humanity. Instead, the policies should be ended because they never had and never will have the advertised effects.

Remarkably, when anyone dares to express skepticism about the decrees of the new COVID-19 czars, this is taken to illustrate that they need to be protected from themselves and also from harming others. Somehow we have found ourselves in a world governed by Nurse Ratched-esque individuals who repeatedly scold us for the failure of their previous policies to put an end to COVID-19 and appear ready and willing to punish us further for not agreeing to do as they say and, now, to roll up our sleeves. They call it “treatment,” and they have already purchased, using taxpayer funds (what else?), “free” vaccines for all. From the perverse perspective of these government officials, it is our fault that the virus is running rampant, and, therefore, we must line up for our paper cup on the tray. If anyone objects to being made into the subject of an experimental vaccine trial, for any of the many non-conspiratorial reasons outlined above, they are to be denounced as lunatic fringe extremists and de-platformed across social media.

This frightening transformation of citizens into subjects is now so widespread that even some business leaders are promoting the same line, apparently believing themselves to be complying with what they have been told over and over again are the dictates of science. The CEO of Qantas recently announced that the airline will be requiring proof of COVID-19 vaccination from anyone attempting to board its flights. Needless to say, I will not be traveling to Australia again anytime soon, because my body is my own, and I do not agree to offer it up as a Petri dish in a large-scale clinical trial by any profit-driven company, and certainly not Big Pharma, whose amorality (at best) and manifest greed have already been firmly established through its many large-scale campaigns to drug everyone for anything—from infants to nonagenarians—with psychotropes. (Did you know that “Prozac” for dogs and cats is now a thing?)

It is precisely because of the unavoidable dangers involved that individuals, who alone will bear any negative consequences arising from their choices, must retain control over what is done to their own bodies. Yes, there are COVID-19 outliers as well: younger persons who suffer worse health outcomes than the vast majority of their peers, and it is possible that any given person will be an outlier in that sense. But there are already mountains of demographic statistics available on the dangers of COVID-19, while none whatsoever exist yet for the new vaccines. Free people must therefore decide for themselves whether the risks of taking an experimental antidote to a disease are outweighed by its alleged benefits. When authoritarian leaders and their associates in the corporate world paint themselves as benevolent, insisting that they are only trying to save the world from the dreaded disease, they are forgetting the most important quality of their constituents and customers: they are free to determine their own destinies and to assume risks which they themselves regard as rational and to reject those which they do not.

The United States Supreme Court recently upheld citizens’ First Amendment rights of religion and assembly, even during a global pandemic, and one hopes that as lawsuits continue to wend their way up the judicial chain, the grip of authoritarian policymakers will be further diminished. Human beings should never be held hostage to the demands of those promoting universal vaccination, and least of all when their own danger of succumbing to the disease in question is small. If my own chances of dying from COVID-19 were 50%, rather than less than 0.5%, then it might well be rational for me to gamble, just as many cancer victims, out of desperation, have agreed to submit to experimental treatments. But I am neither sick nor particularly vulnerable to the novel COVID-19 virus, so I’ll take my chances with my own immune system and my preferred placebo remedy of liberty. I may no longer be welcome in Australia, but there’s always Brazil. Or perhaps I’ll go to Mars.

Existentialism, Libertarianism, and the NAP

I self-identify only as myself but have long been sympathetic to both libertarianism and existentialism. Having dealt throughout 2020 with an array of restrictions on my liberty imposed by local authorities everywhere I have been (Europe, the UK, and now the US), the primary effects of which have been not to save lives but to control how people behave, I have been thinking about existentialism, which naturally raises questions about the proper scope and role of government, bringing me back, also, to libertarianism. Both outlooks prioritize human liberty, dignity and personal responsibility above all else. I have seen next to nothing written about existentialism in recent years, perhaps because its most famous adherent in the twentieth century, Jean-Paul Sartre, was politically aligned with socialist and even communist movements. To suggest that existentialism and libertarianism are somehow related might seem prima facie odd because the latter is typically regarded as politically conservative, a right-wing, not a left-wing, view of the proper role of government. The mere mention of the word libertarian may incite ire among progressives of the “social justice warrior” stripe, and some leftists appear to derive untold delight from sardonically ridiculing libertarians as “pot-smoking Republicans”.

Another common stereotype is that libertarians must be white male landowners (why else would they care about protecting private property?!), which is of course just as simpleminded as Joe Biden’s claim that “You ain’t black!” if you have to think about whether to support him. In fact, nothing could be more racist than to assume that “authentic” black people have no real choice but to support the Democratic party. Biden’s claim was all the more disturbing given that he himself helped to author the 1994 crime bill, which put thousands of people behind bars for nonviolent offenses, including many African Americans. Biden also rallied vigorously for the disastrous 2003 invasion of Iraq, which is relevant not only because a disproportionately high percentage of racial minorities serve in the military, but also because the lives of millions of persons of color were destroyed or degraded as a result of arguably the worst foreign policy blunder in U.S. history. In 2011, the Obama-Biden administration went on to wage an offensive war against Libya, which resulted in a resurgence of African slave markets. In that same year, they used lethal drones to execute brown-skinned U.S. citizens without indictment, much less trial. But who really cares about Biden’s policies? At least he is not Orange Man Bad!

Speaking of labels, Jean-Paul Sartre famously praised Che Guevara as “l’homme le plus complet de notre époque [the most complete human being of our age],” which, again, might lead some readers to scoff at my claim that existentialism and libertarianism have anything whatsoever in common. It would be a mistake, however, to confuse Sartre’s political views with the higher-order philosophical thesis of existentialism, which was most appealingly articulated by the nineteenth-century thinkers Friedrich Nietzsche, Søren Kierkegaard and Fyodor Dostoevsky, who are not coincidentally some of my favorite authors. Albert Camus, another twentieth-century intellectual, wrote a number of works which arguably reflect an existentialist outlook—including his most famous novels, L’étranger [The Stranger] and La peste [The Plague]—but Camus himself resisted that label. He was certainly not the first independent thinker to refuse such labels, for a variety of reasons. Some among them simply do not like club-like organizations, which do on occasion transmogrify into religious cults of sorts, even when their memberships comprise what to all appearances are intellectuals.

Jean-Paul Sartre followed the lead of his nineteenth-century predecessors in famously propounding that “l’existence précède l’essence [existence precedes essence],” which is an explicit rejection of the essentialism of ancient Greek thinkers such as Plato and Aristotle. We become what we do, but that is never fully determined by the circumstances of our birth. That said, it was not entirely insane for twentieth-century existentialists to champion left-wing political causes, so long as they were convinced (as they seem to have been) that the conditions for human liberty, dignity and personal responsibility were not available to the vast majority of persons. Sartre rejected not only Aristotle’s essentialism but also his belief (apparently common in ancient Greece) that women and non-Greeks (barbarians!) were not full-fledged persons. As pretty much everyone acknowledges today, individuals denied the opportunity to educate themselves may appear to be illiterate, but that has nothing whatsoever to do with their inherent intellectual capacities. Along those lines, left-wing existentialists may insist that before anyone can make free choices, they need to have not only the potential but also the power, at least in principle, to do so. People who are scrounging around for their next meal or a roof over their head for the night may not have the energy or time to do much else.

As a result of the political activities and fame of Sartre and Camus, the existentialist waters were muddied for decades to follow, with some of those claiming Sartre as a personal hero more or less on a par with the twenty-somethings who wear Che Guevara t-shirts but never bother to read any books about him. Those who adore the iconic stenciled image of “Che”, and the implied “coolness” of anyone who agrees, might be stunned to learn, among other things, that Che Guevara personally oversaw the execution of more than 500 human beings, most of whom had been going along to get along with the Batista regime. Then again, given what might be termed “the authoritarian turn” taken in recent years by leftists keen to impose their values on everyone else, perhaps they would not be bothered in the least by Che’s homicidal creds.

The division between left-leaning and right-leaning existentialists turns most obviously on their interpretation of potential. Few would deny that it can be difficult for a person born into poverty to break out of his conditions, but it is nonetheless possible, as we know from the many people throughout history who have done just that. It is precisely the inherent dignity of human beings which drives some of them to achieve great things, and, although some will roll their eyes or snicker at this, one may with equal reason point out that many a person with a good deal of potential ended up squandering it in part as a result of the privileged conditions into which he was born. Ultimately, in a free society, the answer to the question of what persons should do with their lives comes back to the persons themselves, regardless of whether they were disadvantaged or spoiled, encouraged or oppressed.

The philosophical thesis of existentialism has no normative content—even morality is an undecided issue. Libertarianism, in contrast, champions what is sometimes characterized as the non-aggression principle (NAP) as its most fundamental tenet: initiating or threatening forceful interference with individuals and their property is wrong. In existentialism, everything is permitted. In libertarianism, in contrast, everything is permitted except violation of the NAP. Libertarianism, therefore, exemplifies moral absolutism, which existentialism does not. An existentialist may adopt non-aggression as a personal principle, and he may or may not exhort others to do the same. He may or may not find fault with those who neither agree with him nor follow his lead. The existentialist may skeptically regard the NAP as an article of faith, for it must be chosen by an individual himself for himself and for his own reasons. But to claim that normative principles such as NAP are articles of faith is not to deny their importance in how some people choose to shape their own lives.

What should we do? is not a question which can be settled by appeal to the deliverances of science, because science trades only in facts, while normative prescriptions for action are based in values, which cannot be read off of empirical reality. The paradox of morality is that you cannot argue someone into acting morally, if he does not already believe that he should, because what one ought to do can never be deduced from the way things happen to be. Instrumental rationality is a matter of fashioning means to ends, but setting those ends is up to individuals themselves—an idea championed not only by skeptics such as the eighteenth-century Scottish philosopher David Hume, but also by the existentialists.

The open-ended, contentless quality of existentialism is perhaps why much of what has been written by existentialists is literally literature—assuming the standard division between philosophy and literature. (I myself reject that division, but many philosophers do not.) However one distinguishes one type of writing from another, it is up to each person to decide how to interpret everything. If you choose to follow anyone else’s rules (those of your parents, teachers, the state, a religion or other group, a philosophical “school”), that is something which you choose to do—or not. “Ne pas choisir, c’est encore choisir [not to choose is still to choose],” as Sartre famously put it. Common criminals and protagonists such as Raskolnikov (in Dostoevsky’s Crime and Punishment) or Meursault (in Camus’ L’étranger) may be viewed by many as miscreants, but their comportment arises out of their individual decisions to adopt their own principles for living. They are free agents, and no one else is responsible for what they do. Yes, forces of nature and nurture act upon everyone, but we alone choose what to do and bear the primary credit or blame for the consequences which ensue.

Western democracy is generally regarded as the best available system for free persons, for it permits them to carve out their own destinies, based on their own beliefs. Everyone faces obstacles and struggles along the way, but with sufficient initiative, drive and ingenuity, some people manage to make something of themselves. The laws of modern societies prohibiting violence against other people effectively affirm the libertarian’s NAP (which is not, however, to deny that the state is itself the primary violator of the NAP, above all through war). An individual may lead his life as he wishes, provided that he does not prevent others from doing the same. If your concept of “The Good Life” requires the destruction of other human beings and/or their property, then your liberty will be restricted by the government, if you are caught. Some people do not embrace the NAP; they choose to rape and murder, pillage and plunder, and some among them end up in prison next to the nonviolent pot-smokers and others locked up as a result of the 1994 crime bill and related NAP-hostile legislation.

Now that recreational marijuana has been legalized in many U.S. states, and medical marijuana in even more, there are plenty of pot smokers roaming free, even while others continue to languish behind bars. We also know that, although some murderers are locked up, others remain at large: one out of every three homicide cases in the United States is never solved. That may seem an alarming statistic to some, but it is the price that must be paid to avoid the much worse alternative of judging everyone guilty until proven innocent. The presumption of innocence protects many more innocent than guilty people. No one should be locked up (much less executed) for their mere potential to commit crimes, and anyone who thinks otherwise is a tyrant, tout court. Some of the best works of dystopic fiction underscore the horror of a world in which everyone is constantly under suspicion and subject to arbitrary detention for whatever reason any authority may deem sufficient, solely at his caprice.

In 2020, people are being denied the freedom needed to determine their own destinies and to conduct themselves with the dignity which distinguishes them from the members of other species. In this way, COVID-World offers libertarians a glimpse into the twentieth-century existentialists’ concerns about the material prerequisites which must first be satisfied in order for persons to be able to choose what to do with their lives. Before COVID-19, people in Western liberal societies were largely held responsible for their own deficiencies and their failure to fashion a good life for themselves. Now, however, people are being denied the opportunity to do what they would choose to do, left to their own devices. Effectively, those being prevented from earning a livelihood and forced to stay home are the equivalent of innocent persons erroneously convicted and sentenced to prison terms. Incarcerated persons are severely hampered in their ability to start and run businesses, and to act in other ways which might prevent them from resorting to crime in the future. They are also strictly limited in their choices of how best to flourish and thrive while inhabiting a cage.

Just as innocent persons should not be incarcerated, healthy people should not be quarantined. From the perspective of both existentialism and libertarianism, this arbitrary detention of innocent persons can be viewed as an affront to humanity. People are being told how they must live by their government, which claims to be acting for the public good but in reality is destroying countless lives. It is not the case that persons are forbidden by the government only from harming other people and their property, as an NAP-based society would prescribe. Citizens are in fact being ordered, effectively, to harm themselves, under the pretext that acting in ordinary ways may lead to the deaths of other people. How so many compliant citizens have come enthusiastically to embrace this Orwellian Covidystopia as “the new normal” is beyond me. Perhaps it is simply the logical consequence of stringent behavioral conditioning initially implemented by appeal to what we now know to have been the false claim that millions of compatriots would otherwise die. Many months later, having already accepted the endless and mercurial decrees of the Covid czars, people still terrified of the virus are willing to do whatever they are told to do without posing any objections whatsoever. Nine months of habits die hard, so when gurus in white lab coats such as Anthony Fauci tell them to jump, they answer “How high?”

Governments allegedly of, by, and for the people have imposed many restrictions on liberty in countries all over the planet, the primary effects of which have been to harm millions of people in the name of the small percentage of those who are vulnerable to COVID-19. It may be tempting to ascribe underhanded or ulterior motives to those who wave their science flags in defense of the new Nurse Ratched state, but there is no real need to do so, for the phenomenon can be more simply explained as fully analogous to that of the enthusiastic drum-beaters for wars from which they themselves have nothing to gain and, indeed, much to lose. The problem at this point in time is that people reside on one or the other side of the COVID-19 divide, but the policymakers are for the most part aligned, claiming the authority to dictate behaviors for all of society by appeal to the opinions of a few select scientific experts, no matter how many times they have been wrong in the past. Recall that Anthony Fauci sincerely proclaimed in a 60 Minutes interview that masks were not necessary, and in fact caused more problems than they prevented, because people wearing them tend to touch their faces more often than they might otherwise do. (And of course it is quite evident by now to any observant person that most people wear the same mask over and over again—pulling it out of and putting it into the same pocket or purse, making the exercise purely a matter of show.) We were also told “fifteen days to flatten the curve,” but then the goalposts were changed again and again, until now, nine months later, Pennsylvanians have been ordered to wear masks whenever they leave their homes and even within their residences, if anyone should happen to visit. Travel continues to be restricted and has been condemned by government authorities the world over, at both the national and state levels, despite the IATA’s (International Air Transport Association’s) calculation that the chances of contracting COVID-19 on a plane this year were one in twenty-seven million. Although some disputed that claim, the U.S. government abandoned its own health screening of persons on incoming flights because the number of positive cases detected was so low that the program was deemed cost-ineffective.

Citizens stepped onto a slippery slope when, back in March 2020, they agreed to stay home and, if necessary, not to work. They agreed to wear masks wherever and whenever this was deemed necessary by the powers that be. But one restriction and rule leads to another, with progressively more absurd implications, as is nowhere better illustrated than in the State of Pennsylvania’s requirement that people wear face masks within their own homes. Who will be enforcing such laws? (Perhaps Amazon’s Alexa can be brought on board, given that she already resides in millions of homes.) This invasion by policymakers into the private lives of their constituents, together with the fact that people have not risen up in response, marks a dangerous turn in the already surreal series of events constitutive of the COVIDystopic year 2020, and it must be resisted while it is still possible to do so. Beyond prohibiting domestic violence (which is one instance of enforcing the NAP), the state has no business whatsoever in any private residence. It is not the government’s business to tell human beings how they ought to live or who they should be. People need to take personal responsibility for their own health and well-being. No one denies anyone the right to choose not to smoke, drink alcohol, or eat fatty foods, and no one is preventing anyone afraid of the virus from donning a hazmat suit. As for the rest of us, we should be permitted to shoulder the inevitable risks associated with leading what we freely choose to make of our own lives.
