“I was the CIA director. We lied, we cheated, we stole… We had entire training courses…” – Former CIA Director and U.S. Secretary of State Mike Pompeo
The concept of whistleblowing seems simple on its face: a government employee recognizes that crimes are being committed by the agency they work for and reports them so that the public is made aware that their tax dollars are being misused by those in charge. But what happens when large parts of the operations of an entire agency (such as the CIA), or even the entire executive branch of government, are grounded in immoral behavior? Is it possible to effectively “blow the whistle” on the entire sprawling apparatus? Or is that conceptually impossible, given that the government reserves for itself the prerogative to issue judgments on what it does?
A system of checks and balances in the United States is said by its defenders to be preserved through having three separate and independent branches of government: the executive, the legislative, and the judicial. If a governor signs into law a bill which some citizens believe violates the Constitution, then they can sue the state, and a judge will issue a verdict on the matter. If the citizens do not win the suit, they can still appeal the ruling to a higher court. Eventually, the most contentious disputes end up at the Supreme Court, where debate comes to an end and a final judgment is made. This system allows some degree of checks and balances on lower court judges whose political biases may impede objective assessment of the matter in question. Of course, Supreme Court justices, too, are human beings who have been appointed by politicians, so bias cannot be altogether eliminated. But because the nine Supreme Court justices are rarely, if ever, all appointed by the same president, there is some hope that perspectives will be balanced, and at the very least, dissenting justices in the minority are able to articulate the grounds for their dissent, so that if ever the matter reaches the high court again, it can be informed by those concerns.
Legality and morality, however, are two different things, as much as we may wish for the former to reflect the latter. Expressing moral dissent from what is being done by the government becomes far more difficult when the perpetrators are granted by law the ability to commit moral atrocities, as in war. During wartime, the standards of civil society are completely flouted, permitting the premeditated, intentional killing of human beings, even of soldiers who have been coerced to fight, and even of completely innocent civilians, provided only that the perpetrators claim to have good intentions. Antiwar activists continue to question the “just war paradigm” presupposed by modern military practice, but the complete abolition of war remains a lofty ideal, given the less lofty financial dynamics propelling the war machine forward.
As difficult as it is to take issue with war itself, many veterans throughout history have done just that, having once witnessed firsthand the stark disparity between wartime rhetoric and reality. It is even more difficult to criticize government killers when they operate under a cloak of secrecy. If final judgments regarding alleged wrongdoers are exclusive to the very institution or branch being criticized, then the system becomes a microcosm of tyranny, for the head of the institution effectively writes the “laws” for his subordinates. One example of this was the military’s own assessment of events captured in the video footage Collateral Murder, which was shared by Private Bradley (now Chelsea) Manning through WikiLeaks. In that harrowing film, Reuters journalists are killed by U.S. soldiers hovering above them in a helicopter. After assessing the episode in response to public outrage over what transpired in New Baghdad, Iraq, on that day, the Pentagon concluded that the soldiers had in fact acted in accordance with military protocol. Manning did the public a great service by revealing a shocking truth about what the U.S. military regards as acceptable behavior.
Notwithstanding the oft-recited claim that the United States is a pillar of democracy, there are sizable portions of the U.S. government which are beyond the reach of effective criticism for the simple reason that they are shrouded in secrecy on grounds of national defense. Notably, the Central Intelligence Agency (CIA) and those parts of the Pentagon’s wide-ranging initiatives paid for out of the ever-expanding Black Budget operate with effective impunity. The pretext of “national self-defense” is used to prosecute and sometimes destroy the lives of those who dare to demur from the sorts of immoral activities undertaken by government entities allegedly for the good of the nation. In many cases, what are really being defended are the agencies themselves and the comportment of their compliant employees.
The reason given for the assiduous pursuit of whistleblowers in such cases is identical to the alleged reason for the secrecy: that to reveal anything about what is underway is to compromise national security. When whistleblowers expose what look to be war crimes, they are said by prosecutors specifically to endanger the lives of those implicated in the allegations—both regular government employees and cultivated sources. Indeed, the mere possibility of endangering the perpetrators—even when there is no evidence of any harm done by the leaks—usually suffices for the prosecutors of whistleblowers to win their cases. The irony, of course, is that the reason why the revelation of unsavory government activities to the public is dangerous to the perpetrators is manifestly that the acts in question are immoral and will be denounced by most any right-minded person who is made aware of them.
Before the twenty-first century, crimes such as the assassination of suspected enemy spies were committed both by the CIA and by the Pentagon. There was very little, if any, congressional oversight over secretive assassination programs throughout the Cold War, and the small number of congresspersons privy to the details evidently agreed with what was going on. Nonetheless, in the 1970s, the Church Committee and the Pike Committee did manage eventually to rein in the CIA and the Pentagon, which had run amok during the Vietnam War, when hatred and fear of communism led government administrators to devise morally dubious programs such as Phoenix, which resulted in widespread civilian carnage. A moratorium was put on assassination by President Ford in 1976 through Executive Order 11905, and it seemed that the self-correcting system had worked to some degree—albeit not in time to save the lives of thousands of human beings.
During the Global War on Terror waged in response to the events of September 11, 2001, the moratorium on assassination came to an end. Far from being regarded as taboo, such killing was normalized and came eventually to be openly acknowledged and even vaunted to the public. In effect, the intentional, premeditated execution of specific individuals—formerly known as assassination and considered illegal under international law—was rebranded as targeted killing and embraced as a new standard operating procedure of what was enthusiastically billed as smart war. By using remotely piloted aircraft (RPAs), or lethal drones, it would no longer be necessary to risk troops’ lives, which was naturally good news to politicians who promoted the drone program without thinking about the consequences for the people on the ground, or the global effects later on down the line, when other governments began to deploy lethal drones to achieve their aims.
“Taking the battle to the enemy” was deemed necessary for national self-defense, and now, the marketing line went, it could be done without sacrificing any U.S. soldiers’ lives. There might be a bit of so-called collateral damage here and there, but as usual it would come to be ignored or brushed aside as one of the inevitable consequences of “the fog of war.” By portraying targeted killing as a rational way of minimizing combat losses, the whole notion of what counts as permissible warfare was transformed, seemingly irrevocably, given that the United States and Israel set the precedent. Over the course of the war on terror, thousands of people have been killed, and many times more maimed and terrorized, using missiles launched from drones hovering above Yemen, Syria, Northern Africa, and beyond, in programs administered by the CIA rather than the military. The fact that drone strikes “outside areas of active hostility” (where no U.S. troops were on the ground to protect) were made the province of the CIA to initiate and supervise was transparently intended to evade meddling congressional attempts at oversight.
By now, lethal drones have been purchased by governments all over the world, who are free to use these weapons to dispatch anyone whom they designate as the enemy, no matter where they may be. First in line among non-U.S. leaders to emulate President Obama in intentionally hunting down and assassinating citizens located abroad was U.K. Prime Minister David Cameron. Cameron used lethal drones in 2015 to execute British citizens located in Syria, despite the fact that capital punishment is forbidden by U.K. law. Had Cameron’s government indicted and tried his victims in Britain, then they would likely still be alive, even if found guilty in a court of law. Rather than prosecute his targets as citizens entitled to due process, Cameron waited until they traveled to Syria before extrajudicially assassinating them.
Where formerly it was considered illegal (under the Geneva Conventions) for a soldier to execute an unarmed enemy soldier point blank, in the Drone Age it is said to be perfectly permissible for an operator located thousands of miles away from a “battlefield” devoid of allied soldiers to push a button and eliminate a suspected enemy combatant, along with anyone who happens to be around him at the time. Still reeling from the shock of what transpired on September 11, 2001, politicians and the populace alike were essentially tricked into believing the manifest absurdity that a soldier located in a trailer in Nevada could be said to kill in self-defense a person who not only was denied any opportunity to surrender or to prove that he was not a terrorist, but in fact was not even armed. Understandably, some of the enlisted soldiers lured into working in the U.S. government’s interagency drone program have deeply regretted their participation and abandoned the profession.
A few of the persons privy to the details of the drone program have spoken out, including Daniel Hale, a former signals intelligence analyst who was recently sentenced to nearly four years in prison for violating the Espionage Act. Hale, who worked in the drone assassination program in Afghanistan, stole and shared a trove of top-secret documents, which were published online by Jeremy Scahill of The Intercept as “The Drone Papers,” and in book form as The Assassination Complex. The documents confirmed what other apostate drone operators had already claimed: that the U.S. government, far from achieving “near certainty” about its targets before dispatching them with missiles launched from drones, in fact defined all victims of drone strikes as Enemy Killed in Action (EKIA), provided only that they were military-aged males. Instead of needing to prove guilt beyond a reasonable doubt before executing these suspects, the entire program has been based on the preposterous premise that such persons are guilty until proven innocent. Under this assumption, lengthy hit lists have been drawn up for years by analysts using circumstantial evidence such as SIM card data from cellphones of suspected terrorists. No matter that some of the military-aged males incinerated by drones may have worked as taxi drivers, delivery persons, etc. If their number was derived during a data sweep from the phone of a person already suspected of having terrorist organization connections, then they became “guilty until proven innocent” by transitivity. In “crowd killing,” entire groups of men of unknown identity have been eliminated under the assumption of guilt by association.
The atrociousness of this inversion of justice, which took place during Barack Obama’s presidency and resulted in the deaths of thousands of unnamed persons of color, is difficult to exaggerate. The fact that Hale, in an eleven-page handwritten letter which he sent to the judge presiding over his trial, has explicitly taken issue with President Obama’s public statements on the drone program makes it seem very unlikely that President Biden will pardon the whistleblower—although I certainly hope that I am wrong about this. The problem in this case is that to pardon Daniel Hale would be to acknowledge that either President Obama lied to the public when he said that drone strikes were only carried out when there was “near certainty” that no civilians would be harmed, or else he was incompetent, having no idea what was being done by his drone program czar, John Brennan, and those whom he supervised. A classic Scylla and Charybdis.
Daniel Hale is obviously not a spy, for his avowed intention in disclosing top-secret documents was only to reveal to the public what was going on in the drone program under a bogus pretext of national self-defense. What he revealed is not that a few rogue operators have killed innocent people with impunity, but that the entire drone program is premised on the assumption that it is perfectly acceptable to execute anyone anywhere on the basis of purely circumstantial evidence. In addition to cellphone SIM card data, drone video footage and the testimony of bribed informants on the ground are also used to add names to hit lists. But in the third-world countries where these drone strikes have been carried out, the destitute locals who provide HUMINT, or human intelligence, obviously have compelling financial incentives to locate targets for the people paying them.
To acknowledge that Daniel Hale was right to act on his conscience and reveal to the populace that they were being lied to is simultaneously to assert that the entire drone program is fundamentally misguided, indeed just as wrong as murder—because that is what it is. Hale correctly recognized the financial incentives driving the employees of the “killing machine,” with private military companies (PMCs) rewarded for identifying as many “terrorists” (in reality, suspects) as possible. All of the employees involved in the hunting down and killing of suspects using lethal drones outside areas of active hostilities have been embroiled in a taxpayer-funded, large-scale program of mass murder. Yet this killing has been rendered banal to most citizens, in large part because the mainstream media outlets decline to discuss the matter at all, deferring as they always do to the Pentagon under, again, a pretext of national self-defense.
Daniel Hale has been sentenced to nearly four years of prison for documenting what other drone operators had spoken out about: that military-aged males killed by missiles launched by drones were assumed guilty until proven innocent. Under President Obama, thousands of unnamed suspects were killed outside areas of active hostilities, in other words, in places where there were no U.S. soldiers on the ground to protect. Trump naturally continued the drone program, and now the Biden administration is following suit.
National Bird: A Cautionary Tale
Immanuel Kant, an eighteenth-century German philosopher, famously espoused the following maxim of morality:
Act in such a way that you treat humanity…never merely as a means to an end, but always at the same time as an end.
The terms of this principle, a formulation of what he calls “The Categorical Imperative,” are rather abstract, but Kant also provided a more practical test for determining whether a prospective action is morally permissible or not:
Act only according to that maxim whereby you can at the same time will that it should become a universal law.
According to Kant, violations of this formulation of the Categorical Imperative embroil one in a “practical contradiction.” It is not immediately obvious what he means by this, which is why his oeuvre continues to be a lively subject of debate among professional philosophers. Those sympathetic with Kant’s general outlook have sometimes drawn parallels to more familiar principles of the major religions, including the Golden Rule:
Do unto others as you would have them do unto you.
Even without having studied philosophical ethics, many people will nonetheless aver that when we talk colloquially about someone using another person, the implication is that it is immoral. Notorious examples include “black widows” (and widowers), who murder their spouses in order to gain possession of their wealth. In fact, every case of mercenarily motivated murder would seem likewise to violate Kant’s Categorical Imperative—and the Golden Rule. The idea of not using people solely as the means to our selfish ends coheres rather well with commonsense morality and is embedded in the legal systems of modern western democracies.
Much ink has been spilled over the past few centuries by some scholars in rejecting Kant’s deontological theory in favor of more practical, teleological or consequentialist, approaches such as utilitarianism, according to which one should always act so as to maximize the happiness or pleasure (or “utility,” as John Stuart Mill and Jeremy Bentham termed it) of the greatest number of people. According to utilitarianism, no action is excluded from the outset, because one must determine what its consequences will be in order to know whether it is right or wrong. If a black widower donates his miserly wife’s estate to help people in dire need, then a strict utilitarian might in fact deem the murder (intentional, premeditated act of homicide) to be the right course of action. More generally, if by sacrificing one person or a small number of persons one will thereby save millions of morally equivalent others, staunch utilitarians will insist that the sacrifice not only can but should be made.
Quasi-utilitarian reasoning frequently figures in calls for wars of so-called humanitarian intervention, which promoters claim will save many more people than doing nothing would, even though there will invariably be some “collateral damage” victims who die as a direct result of the bombing itself. The outcomes of modern bombing campaigns never reflect the sunny forecast of those who set the intervention machine in motion, but even if they did, this rationalization for “humanitarian intervention” assumes that killing and letting die are morally equivalent, a position which is rejected within the bounds of civil society. Except in rare cases, involving persons with special obligations of care, such as physicians and parents, we do not regard permitting people to die as the moral equivalent of killing them.
The ongoing mess in the Middle East shows how wrong the prognosticators were when they claimed that the invasion of Iraq in 2003 would be swift and simple, ushering in an era of peace and democracy for Iraqis, who instead went from suffering under the rule of a despot to living in a chaotic and deadly environment in which their security and quality of life were severely degraded. The state of Libya a decade after the 2011 bombing campaign and the ouster of Muammar Gaddafi is another striking example of how wrong interventionists can be about the consequences of their “well-intended” programs of homicide.
One reason why hawks reach so facilely for utilitarian rationalizations for their wars may be that in this approach to normative morality there is no need to reflect seriously upon the plight of individual soldiers. The end justifies the means and, yes, that will include the sacrifice of some young persons in the prime of their lives. In galvanizing support for invading and bombing other countries, the effects on soldiers—the thousands who may be physically maimed or killed, and the many thousands more who may be psychologically wrecked by the experience—are not mentioned at all because they are not recognized as real until after the fact, and then only by some. Indeed, the U.S. military itself has repeatedly and systematically denied responsibility for injuries to soldiers—caused by the spraying of Agent Orange in Vietnam, the bombing of chemical facilities in the First Gulf War, the use of burn pits during the occupations, etc.—even in the face of overwhelming evidence that the soldiers were harmed not by the enemy but as a direct result of their own military leadership’s callous disregard for the well-being of troops.
Utilitarian-esque reasoning is quite versatile and is readily invoked in debates on a variety of other military matters as well. Opposition to military conscription, for example, can be grounded in the purely utilitarian consideration that coerced soldiers are unlikely to fight as effectively as volunteers. Accordingly, whenever soldiers are forced to fight, rather than invited to do so, the outcomes will likely be worse than they would otherwise have been. In World War I, this problem was “solved” by sending wave after wave of young men to their deaths, effectively expending them as cannon fodder.
The Kantian reason for opposing military conscription, whereby unwilling persons are coerced to fight, kill and possibly die in wars over which they have no say, differs markedly from the utilitarian perspective. Efficacy, far from being morally decisive, is in fact irrelevant in the Kantian moral framework. What is wrong with conscription is not that it will have negative consequences but that such soldiers are treated merely as the means to the ends of political elites. Alongside draftees, many a volunteer soldier has been squandered as cannon fodder, but so long as he freely entered into the Faustian bargain of agreeing to kill and risking his own life in exchange for employment, benefits, etc., then he is not being used in the same sense in which every drafted soldier is.
Now, there are good reasons for thinking that war as a means to conflict resolution is at least irrational, if not intrinsically immoral, because no one should ever agree to kill complete strangers at the behest of war promoters, many of whom stand to profit from war—whether financially or politically, and often both. But as a result in part of the long-entrenched myth of heroic warriors who take up arms everywhere and only in the name of “justice”—so long as they are on our side!—wars do continue to be waged and fought, victims slain, and soldiers sacrificed. Relative to that world, delusional though it may be, forcing persons to take up arms is still worse than allowing them to do so.
As shocking as it may seem, twentieth-century soldiers were experimented on in a variety of contexts, under what appears to have been the assumption that they had already signed their lives over to the military, so why not? During the 1991 Gulf War and in the following years, U.S. soldiers were required to be vaccinated against anthrax using a pharmaceutical product which had yet to be approved by the FDA and which caused significant bodily harm to some of the troops. As a result of the anthrax vaccine fiasco, soldiers are no longer required to undergo experimental treatments, including the emergency-authorized COVID-19 vaccines, which have yet to receive full FDA approval. Needless to say, the pharmaceutical and biotech companies involved are doing everything within their means to obtain an early approval so that the vaccines can be mandated by law in a variety of contexts, including the military.
More generally, the current COVID-19 crisis provides a refractive lens through which to distinguish the two very different ways of conceiving of morality, the deontological (as exemplified by Kantianism) and the teleological (as exemplified by utilitarianism). Human experimentation, such as the mass vaccination programs currently underway, is carried out under the utilitarian assumption that the sacrifice of a few will ultimately save millions of lives. Every medical treatment, even those which have received years of testing and full FDA approval, has negative outlier effects on a small portion of the population, and it is purely a matter of misfortune to be one of the persons who ends up being harmed rather than helped. No one has been singled out for harm, so the situation is similar to a lottery where most people win the prize—in this case immunity or, at the very least, better prospects for survival in the case of infection—but a small percentage do not.
The Vaccine Adverse Event Reporting System (VAERS) database catalogues the reported harms caused by vaccines, and in the case of COVID-19, these have included myocarditis, severe allergic or immune reactions, and Bell’s palsy, among other possible effects, up to and including death. That these vaccines are being distributed in an ongoing experimental trial is underscored by the fact that the specification sheets for recipients and caregivers were recently updated to reflect the incidence of heart inflammation as a rare but possible side effect. That risk was not recognized in the early, much smaller trials, nor in the initial roll-out to elderly persons, but became clear only when younger persons, who would ordinarily not have heart troubles as older persons sometimes do, began to be vaccinated.
So long as patients are properly informed of the potential dangers, if ever so slim, to their health and well-being, then it is their prerogative to incur risks in exchange for the prospective benefits of vaccination, should they deem this to be the proper course of action for themselves. In other words, the case may be viewed as similar to a fully informed person who agrees to enlist in the military, even while knowing the risks involved. There are, however, some curious factors in the present case which together suggest that nothing like morality is driving the quest for universal vaccination. Most obviously, a heavy-handed and ubiquitous propaganda campaign is being used to persuade persons to believe that it is somehow wrongheaded, ignorant and/or selfish not to agree to serve as a subject in an experimental trial of a treatment for which many of them have no need, given their prospects for survival even without the vaccine.
Under normal circumstances, individual persons, so long as they are mentally competent, are deemed the appropriate authorities about which treatments to undertake in efforts to protect themselves and enjoy good health—or not. Free people are also permitted to smoke, eat junk food, avoid exercise, consume alcohol, and engage in risk-taking activities such as rock climbing at their caprice, even though each of these behaviors may result in premature death. In the current crisis, we have seen endless exhortations to universal vaccination from figureheads such as President Biden and Vice President Harris, both of whom recently emoted on Twitter: “Get vaccinated, or wear a mask until you do!” Such sweeping prescriptions on the part of persons with no information about the individual patients whom they are sternly enjoining to undergo treatment would be a clear violation of medical ethics, if in fact Biden and Harris were physicians, which of course they are not.
Competent medical professionals do not issue blanket prescriptions to be followed uniformly and mindlessly by all possible patients. The particular circumstances of particular patients call for particular treatments to be undertaken—or not. Sound medical advice derives from a licensed professional who is familiar with the condition and circumstances of the patient in question. There is no prescription applicable simultaneously to infants, toddlers, adolescents, young adults, pregnant women, middle-aged persons, and nonagenarians, because their bodily conditions are completely different. Moreover, even within each partitioned category, a wide range of variation exists. Some people (whatever their age) are obese, while others are not. Some people have smoked or continue to smoke, while others do not. Some persons suffer allergies, while others do not. It is nothing short of incompetent to suggest that any treatment should be applied in a one-size-fits-all fashion, as is being done in the propaganda campaigns for the COVID-19 vaccines. Far worse than offering people incompetent (because ill-informed) medical advice, however, would be to force them to comply with mandatory edicts derived from incompetent medical advice.
An overzealous judge (Richard Frye) in Ohio recently sentenced three persons convicted in his court of law to COVID-19 vaccination, which would seem to be a flagrant violation of civil rights. Certainly the punishment cannot possibly be said to fit the crime, because it is completely irrelevant to it—to any crime, as a matter of fact. The judge explained his decision in the following terms, “It occurred to me that some of these folks needed to be encouraged not to procrastinate,” demonstrating only that he has no business presiding over any court of law, for he has decided to use the courtroom as his personal pulpit, legislating from the bench in the most obnoxious of ways. One of the convicted persons, Sylvaun Latham, was offered the choice of COVID-19 vaccination plus a one-year term of probation or else a five-year term of probation. In other words, his liberty to conduct himself as he pleases was tethered by the judge to his willingness to serve now as a subject in an ongoing experimental trial of the COVID-19 vaccine, which is not scheduled to end until 2023.
To require convicts to serve as subjects in experimental trials for drug treatments for which they may or may not have any need (see The Imitation Game for the tragic story of Alan Turing in Britain) is tantamount to making them the property of the state and their lives the prerogative of the state to risk and even sacrifice. This is a very different scenario from voluntary enlistment, whereby fully informed persons agree in exchange for remuneration to risk their own lives and well-being. But soldiers who volunteer to fight for their country do not simultaneously agree to serve as pharmaceutical company guinea pigs, which is why forced experimentation on soldiers, too, is wrong. As difficult as it may be to believe, we have now entered an era in which so-called public health experts who support mandatory vaccination are galvanizing judges to conduct themselves in the manner of the officials of the Third Reich. During that deplorable episode of history, judges regularly sentenced persons to sterilization, and many persons were used in human experimentation against their own will.
The most important conclusion of the Nuremberg court regarding human experimentation was this:
The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision.
Extorting convicts to undergo experimental vaccination in exchange for shorter prison or probation sentences clearly violates this Nuremberg court finding. Indeed, every case of “force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion” to undergo medical treatment is also a violation.
Going even farther than the Ohio judge who imposed vaccine sentences upon convicted criminals, the government of France is effectively criminalizing those who refuse to participate in the vaccine trials. On July 12, 2021 (ironically two days before Bastille Day), President Emmanuel Macron announced that proof of vaccination will be required in social venues, on public transport, and in some cases to remain gainfully employed. By denying persons the right to use public transportation, or even to work, the French government is especially targeting poor people, for wealthy people have private cars and do not need to work. But all “non-compliant” French citizens are being punished as though they committed crimes when in fact they have every right in the world to decide which medical treatments to undergo and which to decline. These measures effectively transform French society into an everted prison in which everyone who refuses to offer his body for use in an experimental trial has his liberties curtailed as though he were an incarcerated criminal who has been convicted of a crime. In effect, everyone who declines the experimental vaccine is being put under house arrest.
In the United States, some businesses are requiring vaccination of their employees, and quite a few universities are requiring vaccination of both employees and students, even though the chance of deleterious, life-changing, or even deadly vaccine side effects may for some cohorts (such as young males) be greater than the chance of death should they become infected with COVID-19. It is nothing short of extortion to threaten people with extremely negative consequences should they not volunteer to serve in an experimental trial for a drug/device of which they have no need. You want to finish your college degree? You want to remain gainfully employed? Then roll up your sleeve! And yet a disturbing number of otherwise apparently rational people support these initiatives, at least judging by their comportment on social media. (Note that there are many bot farms operating on this front as well, and whether they are being paid for by governments or the companies who stand to profit is unclear.)
On July 6, 2021, President Biden announced his administration’s intention to send vaccine promoters door-to-door to persuade those who have not already complied to change their minds. The assumption behind this “folksy” approach of “community outreach” is that anyone who declines vaccination is ill-informed, and that with the appropriate amount of friendly banter they will recognize the error of their ways. The problem, however, is that, pace Anthony Fauci, “The Science” has not spoken yet. Information censored and dismissed as disinformation by the media and those who parrot its every proclamation includes hypotheses, theories, and bald facts which do not support the reigning narrative and suggest that it may well be false. While appealing to a “community outreach” spirit, Biden also likened this initiative to a “war-time” effort and called willingness to be vaccinated “patriotic”, the insinuation being that declining vaccination is unpatriotic.
Preposterously, given the thousands of breakthrough cases of fully vaccinated persons who contract COVID-19 anyway, the so-called vaccines may not effectively prevent transmission but only mitigate symptoms—which is what they were designed to do. The shots offer a very slim risk reduction (an absolute risk reduction, or ARR, of ~1%) to most people, because most people are not vulnerable to COVID-19, making it far from obvious that there is any reason for them to undergo an experimental treatment. Yet facts appear incapable of slowing the propaganda machine set in motion more than a year ago, and vaccine proselytizers persist in haranguing even the millions of already recovered persons to roll up their sleeves.
The global propaganda campaign has been so relentless and vast that those who decline vaccination, as in France, stand to have their liberties severely curtailed by government bureaucrats the world over who cling tenaciously to disinformation about the supposed superiority of vaccine immunity over natural immunity, despite numerous studies demonstrating the robustness of the latter and mountains of evidence that social mitigation measures have no effect on outcomes from place to place. Strikingly, if the vaccines do not prevent infection but only diminish symptoms, then millions of vaccinated persons should be expected to fall into the supposed class of “asymptomatic carriers” and be considered more likely to transmit the virus to other people once they stop wearing masks and practicing social distancing—at least according to the religious tenets of the Branch Covidians.
As in every other case when quasi-utilitarian rationalizations have been trotted out in support of policies which will destroy some persons’ lives, no one has any idea what the longer-term effects of the vaccination programs will be. To pretend otherwise is to lie and, in Kant’s view, to deceive and thereby treat the persons in question merely as means, not as ends in themselves. To treat people as moral persons is to grant them the dignity of being able to inform themselves, assess the facts, and come to their own conclusions about how best to conduct their own lives, up to and including which medical procedures to undertake. Anyone who agrees that it is wrong to use people solely as a means should be wary of pseudo-utilitarian propaganda, above all when the self-styled utilitarians have nothing to lose and something to gain. That there exist today people who are rallying for forced vaccination by the government of the very people whom the government supposedly serves reveals, once again, as many historical episodes attest, how frightened people can be persuaded to support objectively abhorrent policies, sacrifice their fellow human beings and even renounce their very own rights.
When Biden’s Pfizer minions show up at your doorstep, let us hope that they do not in their missionary fervor undertake to vaccinate you without your consent. Just as the cases of President Macron and Judge Frye illustrate, for fanatics convinced of their intellectual superiority and moral righteousness, the end always justifies the means. The danger of this political climate for free people cannot be overstated. Given the length and range of the COVID-19 vaccine propaganda campaigns, which have completely saturated the mainstream media, there is some reason to suspect that forced vaccination may be nearer than we think, in light of the willingness of state authorities such as judges and presidents to criminalize the refusal to serve as a subject in a pharmaceutical product trial.
That Biden has claimed to be on a “wartime” footing vis-à-vis COVID-19, and that the Pentagon itself recently held a “war game” specifically addressing the COVID-19 crisis, certainly does not bode well for the future of free people. The technology already exists to vaccinate the unwilling using aerosol sprays which could be delivered by automated drone swarms. As horrifyingly dystopic as that possibility may sound, we already know from their many military misadventures abroad that government officials are ready and willing to use any and all of the implements in their arsenal to achieve their aims, and they have no problem ignoring altogether the moral personhood of their victims.
Over the course of the past century, a number of truly awe-inspiring heists have been carried out by con artists, whose modus operandi is to exploit human frailties such as credulity, insecurity and greed. Con is short for confidence, for the con artist must first gain the trust of his targets, after which he persuades them to hand their money over to him. A con job differs from a moral transaction between two willing, fully informed trading partners because one of the partners is deceived, and deception constitutes a form of coercion. In other words, the person being swindled is not really free. If he knew what was really going on, he would never agree to invest in the scheme.
The “Ponzi scheme” was named after Charles Ponzi, who in the 1920s persuaded investors to believe that he was generating impressive profits by buying international reply coupons (IRCs) at low prices abroad and redeeming them in the United States at higher rates, the fluctuating currency market being the secret to his seemingly savvy success. In reality, Ponzi used his low-level investors’ money to pay off earlier investors, support himself, and expand his business by luring more and more investors in. More recently, Bernie Madoff managed to abscond with billions of dollars by posing as an investment genius who could deliver sizable, indeed exceptional, returns on his clients’ investments.
It is plausible that at least some of the early investors in such gambits, who are paid as promised, suppress whatever doubts may creep up in their minds as they bask in the splendor of their newfound wealth. But even those who begin consciously to grasp what is going on may turn a blind eye as the scheme grows to engulf investors who will be fleeced, having been persuaded to participate not only by the smooth-talking con artist, but also by the reported profits of previous investors. Eventually, however, the house of cards collapses, revealing the incredible but undeniable truth: there never were any investments at all. No trading ever took place, and all of the company’s transactions were either deposits or withdrawals of gullible investors’ cash.
Before a con artist is unmasked, nearly everyone involved plays along, either because they stand to gain, or because they truly believe. Sometimes the implications of having been wrong are simply too devastating to admit, and these same psychological dynamics operate in many other realms where most people would never suspect anything like a Ponzi scheme. It is arguable, for example, that the continuous siphoning of U.S. citizens’ income to pay for misguided military interventions abroad constitutes a form of Ponzi scheme. If President George H. W. Bush had never used taxpayers’ dollars to wage the First Gulf War on Iraq in 1991 and to install permanent military bases in the Middle East, then Osama bin Laden would likely never have called for jihad against the United States. If the U.S. military had not invaded Iraq in 2003, then ISIS would never have emerged and spread to Syria and beyond. Such implications are deeply unsettling, and even in the face of mounds of evidence, most people prefer to cling to the official story according to which the 1991 Gulf War was necessary and just, while the terrorist attacks of September 11, 2001, were completely unprovoked, and all subsequent interventions a matter of national self-defense.
The series of bombing campaigns in the Middle East beginning in 1991 is plausibly regarded as a type of Ponzi scheme because the “investors” (taxpayers) have actually paid to make themselves worse off, not better. Not only have the “blowback” attacks perpetrated in response to U.S. military intervention abroad killed many innocent persons, but the lives of thousands of soldiers have been and continue to be wrecked through dubious deployments abroad. Along with all of the blood spilled, much treasure has been lost. The more than $28 trillion national debt (as of June 2021) is due in part to the massive Pentagon budget, rubber-stamped annually by Congress, to say nothing of the many other “discretionary” initiatives claimed to be necessary for national defense. Afghanistan is a perfect example of how billions of taxpayer dollars continue to be tossed into the wind even as the formal U.S. military presence winds down. The War on Terror continues on not because it is protecting the citizens who pay for it or helping the people of the Middle East but because it has proved to be profitable to persons in a position to influence U.S. foreign policy.
One might reasonably assume that anyone who stands to enrich himself from government policies should be excluded from consequential deliberations over what ought to be done, and in certain realms, the quite rational concern with conflict of interest still operates to some degree. With regard to the military, however, there has been a general acquiescence by the populace to the idea that because only experts inside the system are capable of giving competent advice, they must be consulted, even when they will profit from the policies they promote, such as bombing, which invariably increases the value of stock in companies such as Raytheon. Throughout history, there has always been a push by war profiteers to promote military interventions, but Dick Cheney, who served as Secretary of Defense under George H.W. Bush and vice president under his son, George W. Bush, took war profiteering to an entirely new level. By privatizing many military services through the Logistics Civilian Augmentation Program (LOGCAP), Cheney effectively ushered in a period of war entrepreneurialism, beginning with Halliburton (of which he was CEO from 1995 to 2000), which continues on today, making it possible for a vast nexus of subcontractors to profit from the never-ending War on Terror, and to do so in good conscience. When more people have self-interested reasons for supporting military interventions, such interventions become more likely to take place.
With the quelling of concerns that conflict of interest should limit who may advise the president on matters of foreign policy, the formal requirement that the secretary of defense be a civilian rather than a military officer has been effectively dropped: both James Mattis and Lloyd Austin were easily confirmed as “exceptions” to the rule, despite the fact that both had significant financial interests in promoting war and both had spent full careers in the military before retiring and being invited to lead the DoD. Military men are inclined to seek military solutions to conflict, which is undoubtedly why high-ranking officers are invited to join the boards of military companies, making Mattis and Austin textbook examples of “revolving door” appointments.
Arguably even more ruinous to the republic in the long term than the rampant conflict of interest inherent to “revolving door” appointments between the for-profit military industry and the government has been the infiltration of the military into academia, with many universities receiving large grants from the Defense Department for research. Academia would be a natural place for intellectual objections to the progressive militarization of society, but when scholars and scientists themselves benefit directly from DoD funds, they have self-interested reasons to dismiss or discredit those types of critiques—whether consciously or not—in publishing, retention and promotion decisions. In addition to the institutional research support provided by DARPA (the Defense Advanced Research Projects Agency), successful academics may receive hefty fees as consultants for the Pentagon and its many affiliates, making them far more likely to defend the hegemon than to raise moral objections to its campaigns of mass homicide euphemistically termed “national defense”.
As a result of the tentacular spread of the military, Cui bono? as a cautionary maxim has been replaced by Who cares? People seem not at all bothered by these profound conflicts of interest, and the past year has illustrated how cooption and corruption may creep easily into other realms as well. Indeed, there is a sense in which today we have two MICs: the military-industrial complex and, now, in the age of Covid-19, the medical-industrial complex. This latter development can be viewed, in part, as a consequence of the former, for in recent decades the military-industrial complex has sprouted tentacles to become the military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex. Long before Covid-19 appeared on the scene, the Veterans Administration (VA) adopted pro-Big Pharma policies, including the prescription of a vast array of psychotropic medications in lieu of “talk therapy” to treat PTSD among veterans and to preemptively medicate soldiers who expressed anxiety at what they were asked to do in Afghanistan and Iraq. The increase in the prescription of drugs to military personnel generated hefty profits for pharmaceutical firms, allowing them to expand marketing and lobbying efforts to target not only physicians but also politicians and the populace.
Since the initial launch of Prozac in 1986, the pharmaceutical industry has become an extremely powerful force in Western society, made all the more so in the United States when restrictions on direct-to-consumer advertising were lifted by the Food and Drug Administration (FDA) in 1997. Already by 2020, about 23% of Americans (nearly 77 million out of a population of 331 million) were taking psychiatric medications, and those numbers appear to have increased significantly during the 2020 lockdowns, which took a toll on many people’s psychological well-being. As medications are prescribed more and more throughout every sector of society, drug makers exert a greater and greater influence on policy, even as the heroin/fentanyl overdose epidemic, caused directly by the aggressive marketing and rampant overprescription of opioid painkillers, continues on.
Just as the military industry is granted the benefit of the doubt on the assumption that it is helping to protect the nation, the pharmaceutical industry accrues respectability from its association with the medical profession. Who, after all, could oppose “defense” and “health”? In reality, however, for-profit weapons and drug companies are beholden not to their compatriots, nor to humanity, but to their stockholders. War and disease are profitable, while peace and health are not. The CEOs of military and pharmaceutical companies, like all businesspersons, seek to ensure that their profits increase by all means necessary, the prescription opioid epidemic being a horrific case in point. Just as academics may enjoy Defense Department funding, many doctors and administrators of medical institutions today derive essential funding from drug companies and the government, whether directly or indirectly. These connections are immensely important because many politicians receive generous campaign contributions from Big Pharma, which by now has more lobbyists in Washington, DC, than there are congresspersons, and not without reason. Formulary decisions at the VA regarding the appropriateness of prescribing, for example, dangerous antipsychotic medications such as AstraZeneca’s Seroquel to soldiers as sleep aids are made by administrators who are political appointees, as are public health officials more generally.
With a functional Fourth Estate, it would be possible to question if not condemn the conflicts of interest operating in the for-profit military and medical realms. Unfortunately, however, we no longer have a competent press. Throughout the Coronavirus crisis, this has become abundantly clear as alternative viewpoints on every matter of policy have been squelched, suppressed, and outright censored in the name of the truth, when there may have been ulterior motives at play. In fact, the complete quashing of any directives regarding non-vaccine therapies for mitigating the effects of Covid-19—including ivermectin and hydroxychloroquine—may be best explained by the simple fact that FDA emergency use authorization of vaccines in the United States is possible only when “there are no adequate, approved, and available alternatives,” as is stated plainly on the specification sheets for the Pfizer and Moderna vaccines.
Regarding the origins of the virus, early claims by some researchers that Covid-19 may have been produced in the virology lab in Wuhan and released accidentally were swiftly dismissed as “conspiracy theories.” Anyone who suggested this eminently plausible origin of the virus was immediately denounced by the media and deplatformed or censored by the big tech giants. “Gain-of-function” research, often funded by the military, involves making existing viruses deadlier to human beings and is said by its proponents to be necessary in order to be prepared for future natural pandemics or in the event that some enemy might use such a virus as a bioweapon. The latter is a familiar line of reasoning among military researchers, invoked also (mutatis mutandis) in nuclear proliferation and the military colonization of space: we must develop the latest and greatest nuclear bombs and effect total spectrum domination of the galaxy before any other government has the chance to do so! Many of the scientists involved in these endeavors may have the best of intentions, but that does nothing to diminish the propensity of human beings to commit errors.
In the case of Covid-19, the origin of the virus was deemed settled because Dr. Anthony Fauci, an ardent apologist for gain-of-function research and the reigning public health guru in the United States, authoritatively insisted that the transition from bats to humans came about naturally. After Fauci’s pronouncement, it seemed a matter of common knowledge to “right-thinking” believers in The Science™ everywhere that the virus probably came from the wet market in Wuhan, where live animals were sold as ingredients for use in culinary delicacies such as bat soup. When the World Health Organization (WHO) looked into the matter, they appointed Peter Daszak to lead the investigation. But Daszak had in fact funded gain-of-function research by repackaging and distributing U.S. government funds through his firm EcoHealth Alliance. Needless to say, Daszak had every reason in the world to squelch any suggestion to the effect that he himself may have had something to do with the millions of deaths caused by Covid-19.
We do not yet know whether the virus had a natural or manmade origin, but if in fact U.S. taxpayer-funded research caused the pandemic and millions of deaths, then this would constitute yet another example of a government-perpetrated Ponzi scheme, rivaling and perhaps even surpassing the War on Terror in its negative consequences. We pay for gain-of-function research (determined by bureaucrats such as Anthony Fauci to be a good idea), and then we suffer the consequences when things go awry. Note that, just as Ponzi scheme perpetrators may begin as regular businesspersons before committing fraud, there is no need in the case of Covid-19 to invoke conspiratorial hypotheses. Many politicians who promoted and thereby helped to realize the 2003 invasion of Iraq may have been convinced that Saddam Hussein posed a grave danger to the world. Similarly, there may not have been a conscious intention on the part of anyone to let loose the SARS-CoV-2 (Covid-19) virus on the world. After all, it’s not as though incompetence among government bureaucrats is a rarity.
Whether accidentally or intentionally caused, disasters invariably pave the way for massive power grabs on the part of select persons advantageously situated. Once Iraq had been invaded, this served as the pretext for sacrificing even more blood and treasure as the quagmire intensified and spread to other countries. When the Covid-19 virus arrived on the scene, it became the pretext for a massive and abrupt transfer of wealth. Not only did much of the commerce of small businesses crushed by lockdowns migrate to companies such as Amazon and Walmart, but billions of taxpayer dollars have been poured into pharmaceutical firms.
The multi-trillion dollar Covid-19 aid packages included provisions for research and development, testing, and hospitals. But the most lucrative venture in all of this frenzy has been a vaccine program with universal aspirations. The U.S. government funded the development of the Covid-19 vaccines, and now that they exist, President Biden has purchased 500 million more doses of the Pfizer product to donate to other countries. The global propaganda campaign to vaccinate everyone everywhere with elixirs touted initially by their developers as having up to 95% efficacy, too, has been paid for by governments. It was unclear from the initial press releases about the spectacular new vaccines what efficacy actually meant, as there was a fair amount of equivocation regarding whether the treatments would confer immunity and prevent transmission of the disease or simply lessen the severity of symptoms. After millions of persons had already been vaccinated, it emerged that the reports of 95% efficacy were at best misleading and at worst fraudulent, for the reported percentages were relative risk reduction (RRR) rates, which reflect outcomes only for the small proportion of the population vulnerable to the disease. When the rates are calculated for the general population, the vast majority of whom are not vulnerable to Covid-19, it turns out (as those who declined the vaccine had already surmised on the basis of the survival statistics) that the absolute risk reduction (ARR) rates of the Pfizer, Moderna, AstraZeneca, and Johnson & Johnson vaccines are quite low, to be precise: 0.84%, 1.2%, 1.3%, and 1.2%, respectively. Nonetheless, aggressive campaigns to require vaccine passports of citizens as a condition of resuming normal life are everywhere on display.
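The arithmetic behind the gap between the two measures is straightforward. The following sketch uses the approximate case counts publicly reported for Pfizer’s phase 3 trial (8 infections among roughly 18,198 vaccinated participants versus 162 among roughly 18,325 placebo recipients); the figures are cited here only to illustrate how a 95% RRR and a sub-1% ARR can describe the very same data:

```python
# Relative vs. absolute risk reduction, illustrated with the approximate
# case counts publicly reported for Pfizer's phase 3 trial.
vaccine_cases, vaccine_n = 8, 18_198      # infections / participants, vaccine arm
placebo_cases, placebo_n = 162, 18_325    # infections / participants, placebo arm

risk_vaccine = vaccine_cases / vaccine_n  # infection risk in the vaccinated arm
risk_placebo = placebo_cases / placebo_n  # infection risk in the placebo arm

# RRR compares the two risks to each other; ARR subtracts them outright.
rrr = (risk_placebo - risk_vaccine) / risk_placebo
arr = risk_placebo - risk_vaccine

print(f"RRR: {rrr:.1%}")  # the headline "efficacy" figure, ~95%
print(f"ARR: {arr:.2%}")  # the absolute reduction, ~0.84%
```

The RRR is large because it measures the reduction relative to the small baseline risk observed in the placebo arm; the ARR stays small because that baseline risk was itself under 1% over the trial period.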
A clue that the well-being of patients is not at the forefront of the minds of those running the “vaccinate everyone” campaign has been the encouragement of pregnant women and children to undergo vaccination, though neither group is at serious risk from the virus and neither group was included in the trials used to secure emergency authorization. Even more remarkably, against all established science on immunology, the idea that persons who have already recovered from the disease must also “get the jab” has been aggressively promoted all around the globe. Judging by the media coverage, the reason for insisting that persons who were already infected with and have recovered from Covid-19 must also be vaccinated is supposed to be that people can become reinfected with the virus. That line of reasoning, however, is refuted by the statistics for reinfection. As of June 2021, out of nearly 180 million cases of Covid-19 worldwide, there were 148 confirmed cases of reinfection. Studies recently published in Nature and by the Cleveland Clinic conclude that vaccination offers no benefit to previously infected persons.
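The reinfection counts cited above imply a vanishingly small rate, which simple arithmetic makes vivid (a back-of-the-envelope sketch using the figures as reported; confirmed reinfections are presumably an undercount, so this illustrates the cited statistic rather than settling the epidemiology):

```python
# Back-of-the-envelope confirmed-reinfection rate from the counts cited above.
total_cases = 180_000_000  # worldwide Covid-19 cases as of June 2021 (approx.)
reinfections = 148         # confirmed reinfection cases as of June 2021

rate = reinfections / total_cases
per_million = rate * 1_000_000

print(f"Confirmed reinfection rate: {rate:.2e} ({per_million:.2f} per million cases)")
# i.e. fewer than one confirmed reinfection per million recorded cases
```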
In the build-up to every new war, many people who do not stand to benefit from the intervention and may even be harmed by it often succumb to the propaganda and enthusiastically take up the cause. In the current crisis, the false dichotomization into two exhaustive and mutually exclusive categories, the enlightened science lovers and the anti-vaxxers, is also a part of a propaganda campaign. The persons who have declined vaccination, either because they already survived Covid-19, or because they prefer to wait for long-term safety data and do not believe that the possible benefits outweigh the unknown risks, are dismissed as crackpots, when in fact they are simply being prudent. Yet the media persists in propagating a misleading depiction of vaccine hesitancy in this specific case as proof of hostility toward science. This sort of polarization of the populace is, needless to say, on display during wartime as well, when anyone who dares to oppose a military intervention is depicted as a supporter of a tyrant abroad or an irrational pacifist or, when all else fails, a simple traitor.
It would be incredibly naïve to fall prey to the idea that pharmaceutical executives are somehow philanthropic, for they command enormous salaries for maximizing their stockholders’ profits. In 2020, Pfizer CEO Albert Bourla enjoyed a 17% increase in compensation, to $21 million, while Moderna’s CEO, Stéphane Bancel, became a billionaire. The pharmaceutical industry and the military industry, despite comprising publicly traded companies, are prime examples of “crony capitalism”, benefiting as they do from large infusions of cash from the government, allocated by bureaucrats, many of whom have career and other financial interests at stake. Moreover, the funding links between the military and the public health and pharmaceutical sectors form a tangled web. Not only did the Department of Defense receive a chunk of the Covid-19 rescue packages, but gain-of-function research has been paid for by military institutions. Indeed, much of the funding provided to Peter Daszak for redistribution by EcoHealth Alliance derived from the U.S. Department of Defense.
Both the for-profit military and for-profit pharmaceutical industry now use the mainstream media as a propaganda outlet to further the interests of their shareholders. Even the independent media have been infiltrated by pro-military and pro-pharma voices, which is why falsehoods such as “Saddam is in cahoots with Bin Laden and has WMDs!” and “Lockdowns save lives!” are able to gain such traction among the populace. That liberty-restricting policies should be lifted only on the condition of vaccination requires people to believe that the mitigation policies were both necessary and effective. But in the United States, the differences in outcomes in various states do not appear to depend on the timing or extent of lockdowns. Nonetheless, just as the mass surveillance and collection of people’s private data was accepted by many as a necessary part of the War on Terror, many persons with no financial interests at stake now rally on behalf of Big Pharma for universal vaccination.
The global propaganda campaign to require people to show health papers or a “vaccine passport” in order to participate in human society—to travel, dine out, shop or even gather together in groups—reveals that the mistakes made by a few actors are being seized upon to exert more and more control over the population. The mass surveillance of Americans was accepted by many as necessary, given the potential dangers of factional terrorism, and now, having spent more than a year whipped up by the media into a paralyzing state of fear of a virus which kills less than 1% of the persons it infects, many citizens appear willing to accept what influential globalists have been insisting must be “the new normal”. This is a grave mistake.
It is too early to know how this unprecedented chapter in human history will end, but the trends are not encouraging. As countries continue their serial lockdowns, travel restrictions, masking, testing, and quarantine requirements, they deepen the divisions already on display, making it more likely that some form of apartheid state with totalitarian qualities will emerge. Does any government have the right to force its citizens to undergo a medical treatment for which, according to all available statistical data, they have no need? Why are universities requiring vaccination as a condition of enrollment and employment? Why are more doctors not rising up to challenge the aggressive push to vaccinate everyone everywhere with an experimental treatment? There is no medical basis whatsoever for requiring previously infected persons to undergo vaccination, which has never been demanded in the case of any other disease.
What is at stake is not merely inconvenience, and the solution is not, as some liberty lovers have suggested (if only facetiously), to acquire a forged vaccine passport. We should reject in the most categorical of terms the very idea that anyone anywhere should be required to prove his health status to anyone else and that anyone anywhere should be compelled to undergo a medical treatment against his own will—whatever his reasons may be. One’s medical choices affect one’s health, well-being and body, which no government can be said to own. To relinquish one’s right to one’s own body is to render oneself the property of a tyrannical state. If citizens permit the government to strip them of their right to make decisions about how to lead their very own lives, then they will have been fleeced far worse than the victims of the most mercenary Ponzi scheme, having paid with their freedom for their future enslavement.
Philosophers tend to divide normative theories of morality into two broad categories: deontological and teleological. Deontological theories prioritize right action over good outcomes. If an action is wrong, then it is intrinsically wrong, regardless of the consequences which may ensue. The Ten Commandments and Kant’s Categorical Imperative are classic examples of deontological theories, and the libertarian non-aggression principle (NAP) is another one: Do not initiate violence against any person or damage or steal his property. Teleological theories, in contrast, define rightness in terms of goodness. One determines what to do in part—if not exclusively—by considering the likely outcomes or consequences of one’s prospective action.
Arguably the most famous teleological theory is utilitarianism, articulated by British thinkers Jeremy Bentham and John Stuart Mill in the late eighteenth and early nineteenth centuries. According to the simplest formulation of utilitarianism, what one should do is always act so as to maximize the good outcomes (happiness or pleasure or something else positive—Bentham and Mill called this “utility”), and minimize the bad outcomes (unhappiness or pain or something else negative) for the greatest number of people. Without delving too deeply into what consistently applied utilitarianism would actually entail, the idea seems prima facie reasonable to many, and is appealing to “social justice warriors” and others who believe that the government does and should play an important role in improving the lot of the citizenry through engineering the society in which they live. This basic outlook informs socialist economic theories according to which wealth should be redistributed so that the goods of society are shared rather than “hoarded” by the small percentage of the population comprising the elites.
The theoretical problem with utilitarianism is that there is no hard limit on what may be done to a few people in the name of the net good of the greater group. Everything is, in principle, permissible, depending only on the context and likely consequences. If torturing or killing one innocent person will save the rest of humanity, then it may in fact be the right thing to do, according to utilitarianism. The hypothetical scenarios used to elicit utilitarian responses tend to be highly simplistic, such as the “trolley problem” discussed in many college ethics courses. One version of the trolley problem involves a conductor who must decide whether to kill five people (say, senior citizens) on one track, or to divert the trolley to another track and thereby kill three other people (say, toddlers). Those who devise such thought experiments are attempting to isolate the variables, rendering it possible to gauge sympathies for or against utilitarianism in spite of the inherent complexities of reality.
Because human beings live in societies, the political realm abounds with utilitarian-esque rationalizations for any- and everything. Currently many of those calling for universal vaccination against COVID-19 are reasoning as utilitarians when they presume that the relatively small number of outlier deaths and severe harm caused to a few of those vaccinated will be vastly outweighed by the lives saved. Those who decline vaccination are denounced in the harshest of terms as “selfish,” when in fact they may simply disagree with either the projected result (that millions of people will be saved from the virus and few killed by the vaccines) or else the risk calculation in their own case, based on the statistical data for COVID-19 vulnerability and the complete absence of data on longterm vaccine side effects. That competent individuals alone should make determinations of which risks to assume is a deontological position, denying as it does that “the greater good” is a sound pretext for stripping persons of their liberty and right to control their own body. Forced vaccination would constitute a flagrant violation of the libertarian’s non-aggression principle, so for libertarians who support universal vaccination, the only consistent approach is to persuade others to join them in rolling up their sleeves.
On the economic front, one occasionally finds people today explicitly asserting that humanity would be much better off, for example, if all of Amazon founder Jeff Bezos’s massive wealth were taken from him and used to put an end to world hunger. The people who make such suggestions (when they are serious), appear to assume that the accumulation of wealth is a zero-sum game, and they reject the “trickle-down” economic theories which may inform a more liberty-forward approach. Supporters of a socialist agenda are wont to ignore the lessons of failed experiments such as that of the former Soviet Union, maintaining that if only socialism were implemented correctly, then the world would be a better place. Needless to say, the persons to be harmed in such hypothetical scenarios tend not to agree with what would be the sacrifice of themselves or their property for the greater good of everyone else. Senators Bernie Sanders and Elizabeth Warren, for example, have been known to take aim at Bezos despite the fact that each owns multiple houses but neither offers them (as far as I know) as shelter to persons worse off than themselves. Critiques of the “failure” of Amazon to pay any taxes are especially odd coming from the very legislators who write and ratify laws which permit companies to take advantage of loopholes in order to avoid paying taxes.
In any case, the same critique, that our society tolerates “obscene” disparities in wealth, can be directed toward anyone whose material conditions are significantly better than anyone else’s—which is arguably everyone in the United States, all of whom are better off than most of the people inhabiting third world countries—and yet chooses not to redistribute his own property. As much as caricatures may abound of libertarians as rich old white men unwilling to share their wealth with the descendants of the victims whom their great-great-grandparents oppressed, no one agitating for the mass redistribution of other people’s wealth need be taken seriously unless they make themselves into the extraordinarily rare example of someone willing to invite everyone worse off than they are into their own home. Until their comportment is modified to match their rhetoric, the shrill virtue-signaling of Bezos haters and others of their ilk can be safely ignored.
Needless to say, such conflicts between moral rhetoric and reality are ubiquitous. People who denounce manmade climate change sometimes fly to global warming conferences in private jets. Nor do those who incessantly warn about global warming typically renounce their private cars, even when they live in cities with efficient public transportation systems. People who express concern about environmental pollution and the ocean life blighted by plastic waste may nonetheless continue to imbibe water from single-use bottles. That moral rhetoric and reality so often diverge illustrates the practical problem with implementing anything even vaguely approaching utilitarianism, a problem metaphorically expressed by George Orwell’s Animal Farm. The truth is that human beings, as a matter of fact, care much more about themselves and their family members and friends than about random compatriots. Moreover, they largely ignore the plight of persons beyond their own borders, even when the taxes levied on their personal income have been used to generate widespread misery abroad. It is utilitarian-esque reasoning when someone claims that wars may harm some people but on balance serve the aims of democracy and peace. Most of the victims of wars over the past century have been unarmed civilians, not soldiers, but their “sacrifice” is nonetheless reimagined by those who support every new war proposed as having contributed to the establishment of a better world.
The prevalence of this type of rhetoric, and its associated pseudo-moral rationalizations for policies which harm or even destroy other people, explains bizarre phenomena such as Speaker of the House of Representatives Nancy Pelosi’s public expression of gratitude to George Floyd for having been killed by police officer Derek Chauvin. Many people found Pelosi’s statement inappropriate and tone-deaf, but she was essentially reciting a version of the same script which is rehearsed every single time soldiers are sacrificed needlessly and so-called collateral damage is “tolerated” in wars perpetrated abroad. Slogans such as “Freedom is not free!” are frequently slung about by military supporters, who assume that, on balance, the comportment of the U.S. Department of Defense has been good, even if mistakes are sometimes made, and even if a few “bad apples” emerge here and there to perpetrate the occasional atrocity, for example at My Lai or in the Abu Ghraib and Bagram prisons. Judging by their docile acceptance of the foreign policy of Bush, Obama, Trump and now Biden, most Americans have yet to acknowledge that the twenty-year “Global War on Terror” (GWOT) has been a colossal failure: politically, economically and, yes, morally. The only people to have benefited from the non-stop bombing of the Middle East are war profiteers. Some people are more equal than others.
The long entrenched dogma that, all things considered, the world is a better place because of U.S. military intervention abroad explains why citizens continue dutifully to pay federal taxes while delegating all policy-making decisions to the legislature, which in the twenty-first century flatly renounced its authority to decide when and where war should be waged. The AUMF (Authorization for Use of Military Force) granted to President George W. Bush in September 2001 has been invoked by every president since then to claim the authority to bomb anyone anywhere in the world where the executive branch of government has deemed such action desirable.
“We are good, and they are evil,” is a time-tested trope which allows government administrators, whether elected or appointed by those elected, to get away with anything, on the pretext that the evil enemy must be defeated, and the perpetrators of mass homicide are acting only and everywhere so as to protect their constituents. Or to spread democracy and save the world from a despicable tyrant, all of which are essentially equivalent, or so the rhetoric goes…In the lead-up to every new war, citizens, having been subjected to vigorous fear-mongering propaganda campaigns according to which their very lives are at stake, tend momentarily to forget that politicians are liars. They listen attentively as quasi-utilitarianism is trotted out yet again to secure popular support for bombing campaigns through soundbites such as: “The war will pay for itself!” “We will be welcomed with flowers as liberators!” “The conflict will be short—in and out—with minimal collateral damage!” When the real consequences prove to be nothing like those projected by hawkish “experts” with financial ties to military industry, the warmakers then revert to defending themselves by appeal to their good intentions.
War advocates are able to sleep at night not because of utilitarianism, according to which the rightness of a war is determined by its outcomes—which any rational and informed person must admit have been catastrophic throughout the Middle East—but because they have another theory to whip out in their defense whenever their “good wars” have infelicitous or even appalling consequences. That framework derives from just war theory, specifically the doctrine of double effect, according to which what really matters, in the grand scheme of things, is the warmakers’ own intentions. “Stuff happens,” explained former Secretary of Defense and sage epistemologist Donald Rumsfeld in assuaging concerns that the conditions on the ground in Iraq were chaotic, with monuments and museums being looted, and persons murdered and maimed, robbed and raped, among other unanticipated results of the 2003 bombing campaign.
Policymakers such as George W. Bush, Dick Cheney, Condoleezza Rice, Paul Wolfowitz, and Tony Blair may assuage their consciences by professing the purity of their own, subjective, intentions: “We meant to do well!” Along these lines, the ancient Greek philosopher Socrates reputedly quipped, “No one knowingly does evil,” by which he may have meant that everyone seeks what they regard as good and avoids what they regard as evil. What, after all, could they base their actions on, if not their own values? In other words, viewed at the level of individual action, “We meant to do well!” may hold true in the case of anyone who does anything, from the thief who steals to feed his family, to the serial killer who derives immense pleasure from destroying other people, to the warhawks and profiteers who persist in perpetuating and even expanding the War on Terror, though it has already destroyed or degraded the lives of thousands of Americans and millions of persons of color abroad.
That some people are more equal than others is assumed by anyone who claims to wish to even the economic playing field at home while altogether ignoring the plight of the millions of people who are not only not earning $15 per hour for their labor but in fact have been killed as the so-called collateral damage of wars supported or condoned by lawmakers with financial interests at stake. The forever war in the Middle East and Africa plods on with little protest, and some of the very people who vociferously demand justice for individual victims of police brutality such as George Floyd turn a blind eye to the plight of the thousands of victims of the bombing campaigns, despite the fact that the former phenomenon can be said to derive in part from the latter. Not only does the federal government set a highly visible example of how to resolve conflict through the continual perpetration of mass homicide, but police departments have been furnished with military equipment and are staffed in many places by veterans of U.S. wars, some of whom apply wartime techniques and tactics in combating crime.
With regard to the killing of persons of color within the United States, we have witnessed former President Barack Obama making public pronouncements on the outcomes of the George Floyd and Trayvon Martin cases, while declining to say anything whatsoever about his very own administration’s targeted killing of sixteen-year-old Abdulrahman al-Awlaki, a U.S. citizen incinerated along with a group of his friends by a missile launched by the U.S. government from a drone flying above Yemen in 2011. If presidents themselves can simply pretend that some of their very own victims never even existed, then it should not be all that surprising when Americans more generally follow their lead.
Self-styled progressives, for example, may agitate for the restriction of firearm possession domestically, while ignoring altogether the exportation of weapons in record numbers (since Obama’s presidency) to regimes and factions in Syria and other places where they are predictably used to harm human beings, primarily persons of color, on a completely different magnitude than occurs within the country where the weapons are produced. It is of course possible consistently to maintain, as do advocates of the right to bear arms, that guns are morally neutral but become implements of murder when wielded by murderers. But anyone who insists that gun possession leads to murder within the United States would seem to be committed, logically speaking, to the position that the many innocent persons killed abroad by U.S. weapons (whether by the U.S. military itself or by governments, factions or individuals armed by them) were, materially speaking, the murder victims of those who furnished the killers with the weapons. And yet, some (not all) of those who dispute citizens’ Constitutional right to bear arms are not only silent on the issue of weapons exportation but in fact complicit in enriching this industry and sowing the seeds for mass homicide abroad through their uninterrupted payment of federal taxes.
A similarly untenable duality would seem to be Senator Bernie Sanders’ outspoken opposition to capital punishment, which he manages to hold within his mind while simultaneously supporting the use of unmanned combat aerial vehicles (UCAVs), or lethal drones, to kill terrorist suspects abroad. One of the most cogent arguments for abolishing the death penalty derives from the indisputable fact that convicted persons are sometimes exonerated posthumously. Mistakes are made, and erroneous executions are irrevocable. An equally compelling argument concerns racial justice. Among all convicted murderers, a disproportionately high percentage of persons of color are sentenced to death, in all likelihood because juries and judges perceive them to be more dangerous than white murderers. But each of these lines of reasoning applies a fortiori to the persons eliminated by missiles launched from drones in countries where nearly everyone is a person of color, and the victims are not even charged with crimes, much less given the opportunity to defend themselves against their killers’ allegation that they are evil terrorists who deserve to die. Why should a suspect have more rights within than outside the arbitrarily drawn borders of a land? If suspects have rights, then does it matter where they happen to stand? And if even convicted murderers should not be executed, as Sanders appears to believe, then how can mere suspects abroad be annihilated on the basis of purely circumstantial evidence such as SIM card data, drone video footage and the bribed testimony of destitute, and therefore corruptible, informants on the ground?
It may be tempting to conclude from examples such as Senator Sanders that lawmakers and the citizens who elect them and pay their salaries are simple hypocrites. It is more charitable, however, and at least as plausible, to suppose that they have been trained effectively to compartmentalize spheres of reality, so that what seems obviously desirable within one domain has no implications whatsoever for anywhere else. Modern people have been effectively conditioned to find nothing wrong with applying completely different standards to different spheres of reality. Their rhetoric may be absolutist, but the moral requirements upon them as individual moral persons are assumed to be a function of the context and circumstances. No less than the politicians who enthusiastically advocate for bombing abroad while decrying police brutality in the homeland, most people appear to hold a motley assortment of arguably contradictory moral beliefs, which they apply to different groups of people capriciously, as determined mostly by what they have been indoctrinated to believe, above all by the media. In effect, modern people have developed split personalities. The innocent victims of Barack Obama’s and Donald Trump’s and now Joe Biden’s perpetual motion bombing campaigns do not exist in the minds of those who ordered or paid for their deaths, and are therefore excluded from all moral calculus.
The smallest sphere of morality, or moral community, comprises one’s self. At this level, morality and prudence coincide. Applying utilitarian reasoning to one’s self alone yields a theory according to which one should maximize one’s own happiness (or pleasure or well-being), even at the expense of others, because they lie beyond the bounds of the sphere under consideration. The next smallest sphere of morality includes one’s family. After that, one’s friends may be included. Then one’s neighbors, one’s compatriots, and finally humanity. No finite person can perform a full and accurate utilitarian projection of the results of his prospective action on all of humanity, and people generally consider only the short-term effects on the persons with whom they interact and of whom they are directly aware. The answer to the question “What should I do?” will vary greatly depending on whether one considers the moral community to comprise one’s self (ethical egoism) or one’s compatriots (nationalism) or humanity (globalism). Utilitarian-esque rhetoric pervades public discourse because it seems reasonable and sounds “moral” (rather than “selfish”), but most people either do not recognize or do not agonize over the manifest inconsistencies between what they say and what they do in the various communities in which they interact.
Avoiding altogether this morass of moral relativism, the libertarian upholds the non-aggression principle (NAP), which is an easily applicable proscription: Do not initiate—or threaten—violence against other human beings. Period. Do not indulge in casuistic rationalization of why it is supposedly right to bomb countries abroad when in fact there is near certainty that persons of unknown identity (and therefore not known to deserve to die) will be destroyed, no matter what the warmakers’ intentions may be. Libertarians have many outspoken, virtue-signaling enemies these days, but in fact their theory is consistent, including as it does all people everywhere. If it is wrong for government agents (such as police officers) to kill suspects in the homeland, then it is equally wrong for government agents (such as drone operators) to kill suspects abroad.
Most of the federal discretionary budget goes to the military, which is why utilitarian-esque defenses of federal taxation are delusive, especially in view of the twenty-year War on Terror fiasco. Their rhetoric notwithstanding, the policymakers who determine how much to tax citizens and where federal funds are to be allocated prioritize the interests of not humanity, nor their compatriots, but the MIC, or military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex, all tentacles of which have teams of lobbyists in Washington, DC. In order to be completely consistent, then, it may be that libertarians should join the ranks of the war tax resisters, which is however easier said than done, given the harsh and coercive measures deployed by the state, again, in the name of “the greater good.”
Being of a naturally skeptical bent, I have harbored doubts from the very beginning about the upheaval of the entire world, rationalized by politicians everywhere, because of a virus which kills less than 1% of the people it infects. I watched in amazement as country after country closed its borders to foreigners, imposed “common sense” quarantines, lockdowns and mask mandates, and shut down entire economies. I was perplexed by the inability of anyone in a position to craft policy to recognize that what really needed to be done was to isolate vulnerable persons and allow everyone else to go about their business, until eventually we achieved herd immunity.
This approach was rejected early on as untenable because, it was claimed, COVID-19 was simply too elusive. In contrast to many other deadly viruses known to mankind since time immemorial, we could not develop herd immunity to COVID-19, because there were documented cases of persons who had become reinfected after having already recovered. To my mind, that was the first red flag that perhaps the virus had not simply leapt from bats to humans when some hapless soul in Wuhan ate a bowl of soup. I started to wonder whether this was not some sort of Frankenstein gain-of-function virus, engineered in a lab by DARPA-funded scientists under the guise of national defense, to figure out what to do in case some other government developed such a virus to wipe out its sworn enemies.
The idea that COVID-19 was developed in a lab and accidentally released by human error was rejected by all of the CNN-certified authorities, so I naturally listened to the science and began focusing on other matters, such as whether the project of inoculating all of the nearly eight billion people on the planet with a vaccine might be a way of ending the pandemic. There were plenty of companies enthusiastic to pursue this project, and within months Pfizer, Moderna, AstraZeneca, and Johnson & Johnson, in addition to a variety of companies in Russia and China, had already developed their vaccines, having been generously funded by governments so obviously keen to save lives.
Fine, I thought to myself. Now everyone who is vulnerable can get the vaccine, and those who are not can go about their business, become infected, and then recover from the virus, whose symptoms in robust people tend to be mild, such as the “blah” feeling reported by Tom Hanks upon landing on Australian shores in March 2020, shortly before that entire country closed its borders seemingly forever. There was no question in my mind that we were on the way to the exit ramp of the highway to a dystopic world where no one is allowed to travel or congregate in groups for fear of transmitting the virus to persons who might die as a result. The situation was easy to comprehend by appeal to Pascal’s wager (mutatis mutandis):
The Question of Efficacy in Preventing Transmission and Infection

| | Take the vaccine | Don’t take the vaccine |
| --- | --- | --- |
| The vaccine prevents transmission and infection | Everyone who takes the vaccine will be protected from everyone else—whether or not they take the vaccine | Those who take the vaccine will be protected; others will remain vulnerable to COVID-19 |
| The vaccine does not prevent transmission and infection | No one who takes the vaccine will be protected from other people—whether or not they take the vaccine | No one will be protected—whether or not they take the vaccine |
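The logic of this table can be enumerated mechanically. The following is a minimal illustrative sketch, not anything from the text itself: the variable names and the paraphrased outcome strings are my own choices, encoding the four cells of the wager as a simple lookup.

```python
# A toy encoding of the 2x2 efficacy wager above. The outcome strings
# paraphrase the table's cells; names and structure are illustrative only.

outcomes = {
    # (vaccine works?, individual takes it?) -> outcome for that individual
    (True, True): "protected, whether or not others take it",
    (True, False): "vulnerable, while those who took it are protected",
    (False, True): "not protected, whether or not others take it",
    (False, False): "not protected, like everyone else",
}

def outcome(vaccine_works: bool, takes_vaccine: bool) -> str:
    """Return the wager-matrix cell for one person under one scenario."""
    return outcomes[(vaccine_works, takes_vaccine)]

# Enumerate all four cells of the matrix.
for works in (True, False):
    for takes in (True, False):
        print(works, takes, "->", outcome(works, takes))
```

Notice that in both rows the outcome for one individual depends only on whether the vaccine works and whether that individual takes it, which is the structure the wager argument relies upon.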
Further doubts, however, began to creep into my mind as I witnessed a variety of zealous public relations efforts to persuade people invulnerable to COVID-19 to get the vaccines. Front and center in luring the public to do what the Centers for Disease Control and Prevention (CDC) has determined must be done have been COVID-19 guru Dr. Anthony Fauci and vaccine entrepreneur Bill Gates, who incidentally has revealed in interviews his fabulous financial success in the vaccine sector. I think that everyone, on both sides of the COVID-19 lockdown divide, can agree that a twenty-fold return on his investment is nothing to scoff at.
Fauci got right to work promoting the Moderna vaccine by pointing out to African Americans that, in fact, the vaccine was developed by a black woman. This struck me as an odd selling point, and I confess to having suspected racism. I looked up Dr. Kizzmekia Corbett on Twitter and found this on her profile: “Virology. Vaccinology. Vagina-ology. Vino-ology.” I am not sure that the latter two count as credentials, but one thing is clear: vaccine hesitancy among African Americans has a well-documented and understandable history, resulting in part from the horrifying Tuskegee experiments, in which black men infected with syphilis were left untreated “just to see what would happen.” That’s right: nonconsensual human experimentation was not the province only of the Nazis. It has happened right here, in the United States, as well. In turning Dr. Corbett into something of a media darling, Fauci’s idea appears to have been that people would be persuaded that a black woman would never dream of acting so as to harm other black people. That line of argumentation is unfortunately impugned by the fact that black nurses were among the perpetrators of the Tuskegee study. Indeed, the program coordinator, Eunice Verdell Rivers Laurie, was an African American woman. Nonetheless, Fauci may have succeeded in convincing some people to roll up their sleeves, to wit, those entirely ignorant of the details of the disturbing Tuskegee saga, which lasted a shocking forty years.
My next concern arose when some “experts” began exhorting pregnant women to “get the jab,” insisting that there was no evidence of harm to pregnant women from the new vaccines. I decided to look into the studies done before the emergency authorizations and discovered that pregnant women were not included in the first round of human trials. This finding naturally reminded me of the disturbing story of Thalidomide. That drug seemed very safe in initial clinical trials, which, however, excluded pregnant women. Ultimately, 40% of the babies of women who had been given Thalidomide as a remedy for morning sickness died around the time of birth. Of those who survived, thousands were born deformed, many with fin-like limbs. As is always the case, it took time for the longterm side effects to be sorted out. That is because each patient is unique, with different biological and environmental factors, including the medical treatment in question, acting upon her body. Approved in 1956, Thalidomide was not pulled from the European market until 1961. Why would anyone be encouraging pregnant women to “get the jab,” given the well-documented history of Thalidomide and the apparent invulnerability of infants and small children to the COVID-19 virus? I puzzled. After all, the word teratogen exists because there are substances which predictably lead to birth defects, and they are discovered when, and only when, pregnant women are exposed to those substances. Thinking about the case of Thalidomide and possible side effects provoked another Pascal’s Wager assessment:
The Question of Unknown Side Effects—Both Short-Term and Long-Term

| | Take the vaccine | Don’t take the vaccine |
| --- | --- | --- |
| The vaccine prevents transmission and infection | Those who take the vaccine will be protected from COVID-19 but may suffer side effects—up to and including death | Those who do not take the vaccine will not be protected from COVID-19 but will not suffer any side effects |
| The vaccine does not prevent transmission and infection | Those who take the vaccine will not be protected from COVID-19 and may also suffer side effects—up to and including death | Those who do not take the vaccine will not be protected from COVID-19 but will also not suffer any side effects |
The worst case scenario would be that the “vaccines” do not actually work and also have devastating side effects. Clearly, then, the rational choice for a given person is going to be a function of how vulnerable he or she is to the disease which the vaccines are intended to protect against. If one has a 99.5+% chance of surviving COVID-19, has no known comorbidities and therefore is unlikely to suffer severe illness, even if infected with the virus, then it is difficult to see why he or she would want to opt for the treatment, given that the risk of longterm side effects is entirely unknown—ranging anywhere from 0% to 100%. Fine, I concluded again. People who want the vaccine can get the vaccine, and everyone else can resume their normal life. Yet Fauci & Co. did not agree. I continued to puzzle over pregnant women being enthusiastically exhorted to “get the jab,” and those concerns were exacerbated when vaccine trials on children began, complete with a social media campaign featuring images of “heroic” pro-science kids rolling up their sleeves.
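The individual risk calculus described here can be made concrete with a toy expected-harm comparison. Every probability below is an illustrative placeholder of my own, not data from the text or elsewhere (the passage stresses that the long-term side-effect risk is simply unknown, anywhere from 0% to 100%); the point is only the shape of the comparison, which flips depending on what one assumes about the unknown.

```python
# Toy expected-harm comparison for one individual. All probabilities are
# illustrative placeholders, NOT real data. Assumes, for simplicity, a
# perfectly effective vaccine, so a vaccinated person risks only side effects.

def expected_harm(p_infection: float, p_death_if_infected: float,
                  p_side_effect_harm: float, vaccinated: bool) -> float:
    """Crude expected harm on a 0-1 scale for one person."""
    if vaccinated:
        return p_side_effect_harm
    return p_infection * p_death_if_infected

p_death = 0.005        # a 99.5% chance of surviving infection, per the passage
p_infect = 0.10        # placeholder guess at the chance of ever being infected

harm_unvaxed = expected_harm(p_infect, p_death, 0.0, vaccinated=False)
harm_vaxed_low = expected_harm(p_infect, p_death, 0.0001, vaccinated=True)
harm_vaxed_high = expected_harm(p_infect, p_death, 0.01, vaccinated=True)

print(f"unvaccinated:            {harm_unvaxed:.4f}")
print(f"vaccinated (low risk):   {harm_vaxed_low:.4f}")
print(f"vaccinated (high risk):  {harm_vaxed_high:.4f}")
```

Under these made-up numbers, vaccination is the better bet only if the unknown side-effect risk is below the baseline expected harm, which is exactly why the passage argues that the rational choice varies from person to person.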
Eventually, after reflecting on this conundrum for quite some time, the firm believer in freedom of choice in me capitulated, concluding that, as in everything else, parents and pregnant women would have to decide what to do for themselves and their offspring. I decided to move on to other matters, as it was obviously futile to engage further with the mobs of people online who have redefined “prudent person” to mean “antivaxxer.” Instead, I turned to the rational grounds for believing that Moderna and its diverse research team have succeeded in producing a COVID-19 vaccine, which is defined by the CDC as follows:
Vaccine: A product that stimulates a person’s immune system to produce immunity to a specific disease, protecting the person from that disease.
Going directly to the source, Moderna’s own website, I learned that the company specializes in gene therapy and has been operational for a grand total of ten years. It received a substantial DARPA grant in 2013 but has no FDA approvals for vaccines or devices to date, aside from the emergency authorization granted in December 2020 for its COVID-19 treatment. All of the COVID-19 therapies, whether mRNA-based (as in Moderna’s case) or vector-based, have been labeled “vaccines” not only in the hope that they may act as vaccines, but also in order to benefit from the legal immunity enjoyed by vaccine manufacturers in the United States, thanks to the PREP (Public Readiness and Emergency Preparedness) Act. Anyone who suffers harm as a result of these government-funded elixirs will have to take it up with the government, not the manufacturer. Unlike normal businesses, which must bear the legal brunt of the negative effects of their products upon human beings, Moderna is like a child being allowed to roam free, its parents prepared to clean up any messes which may result. Perhaps Moderna will get lucky and have produced a miracle cure, but the statistics on new medical treatments are not that encouraging. Of 5,000 new drug candidates, only a tiny fraction (5 out of 5,000, or 0.1%) are judged from the animal trials to be safe enough to be tested on human beings. Of those which are tested on human beings, only 20% eventually achieve (regular) FDA approval and are taken to market (0.02% of the original candidates). Of those pharmaceutical products which make it to market, some are eventually recalled. From January 2017 to September 2019, 195 drugs previously approved by the FDA were recalled because of safety issues.
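The attrition figures just quoted can be verified with a few lines of arithmetic. The counts below come straight from the passage (5,000 candidates, 5 cleared for human trials, 20% of those eventually approved); this is merely a sanity check of the percentages, not new data.

```python
# Sanity-checking the drug-development funnel quoted above.
candidates = 5000           # new drug candidates
enter_human_trials = 5      # judged safe enough from animal trials
approval_rate = 0.20        # share of human-trialed drugs eventually approved

trial_fraction = enter_human_trials / candidates      # fraction reaching trials
approved = enter_human_trials * approval_rate         # drugs reaching market
approved_fraction = approved / candidates             # fraction of all candidates

print(f"{trial_fraction:.1%} of candidates reach human trials")   # 0.1%
print(f"{approved_fraction:.2%} of candidates reach the market")  # 0.02%
```

Both printed figures match the percentages given in the text, confirming that the 0.1% and 0.02% claims are internally consistent.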
Now, many people have died of COVID-19, and no one wishes for that to happen to themselves or anyone they know. It is also true that very ill and vulnerable people are often willing to gamble on experimental treatments. In the case of terminally ill patients, what do they have to lose? It is unclear, however, why any rational person not at risk of death from COVID-19 should want to offer up, without compensation, his healthy body as a Petri dish to a government-subsidized and protected industry with a well-documented history of not only deception and fraud but also what are arguably human rights violations, above all, in third world countries. Moderna, being new, with no products on the market, has a clean slate to date (none of its products, there being none, has had untoward effects on human subjects), but the Pfizer, Johnson & Johnson, and AstraZeneca tallies of criminal fines and settlements are awe-inspiring, to put it mildly. No one ever said that human experimentation was going to be risk free, but the fact that billions of dollars in compensation have been doled out to people harmed by pharmaceutical and other chemical companies underscores a sober truth: it is inherently dangerous to introduce novel foreign substances into human bodies, even in the best of all possible research and development scenarios.
The spec sheets for both the Pfizer and the Moderna shots explicitly state that they “may” prevent one from getting COVID-19 (which implies, of course, that they may not), and that “There is no FDA-approved vaccine to prevent COVID-19.” These information sheets (which hardly anyone rolling up their sleeves appears to have read) also state plainly that “Serious and unexpected side effects may occur,” which should in any case be obvious, since they were developed and tested over a course of months, not years (note: the average time to market for a new drug/device is 12 years). There simply is no long-term data yet—whether positive or negative. The makers of these products themselves rightly express ignorance as to their efficacy in preventing infection and transmission of the disease, touting confidently only their therapeutic effect in reducing severe symptoms and diminishing the likelihood of death, both of which are in any case exceedingly rare for persons under the age of 50, according to all available statistical data. Feeling “blah” does not count, I presume, as a “severe symptom,” so it is unclear whether vaccination would have helped Tom Hanks at all. But who knows? One or more of these companies may succeed in producing a COVID-19 panacea, I mused. Until I remembered the problem of new virus variants.
The current slate of vaccines was developed against the dominant strain of COVID-19 circulating last year, but the many variants, created through mutation and apparently numbering in the thousands, are by now so widespread that there are grounds for believing that even if the current vaccines work against the dominant strain, and even with strong vaccine compliance, vulnerable people will continue to die, sooner or later, while everyone else will be spared, not because of the vaccines, but because they were never vulnerable to the virus and its variants in the first place. As is always the case, given human variability, there have been some outliers, young persons who died or suffered harm from Coronavirus infection. On the other hand, more elderly people than one might surmise, given the media coverage, have survived. COVID-19 does not come close to being a death sentence, although the chances of dying are significantly increased for patients with comorbidities. Still, in some places, the average age of a COVID-19 victim is the same as, or even higher than, the average life expectancy of the general population.
Curiously enough, persons who already survived COVID-19 are also being exhorted to get the vaccines, even though the very fact of their ongoing existence definitively demonstrates that their immune system is hardy enough to combat the virus. For other diseases caused by viruses and for which vaccines exist, the reason for getting the vaccine is to avoid at all costs getting the disease, which in cases such as Ebola and Yellow Fever are very deadly to anyone, regardless of age or comorbidities. But the vast majority of people infected with COVID-19 experience only mild symptoms and do not require medical treatment. Reflecting on these matters, I circled back to my previous concern: Why should any healthy person believe that taking an experimental vaccine is a good idea, particularly if they already survived COVID-19?
As I continued to mull over this question, I marveled at the massive media marketing budget for COVID-19. Consider all of the circular stickers on the ground and all of the signs everywhere relaying important information such as the permitted capacity of persons inside stores, all scientifically calculated to three significant figures to yield numbers such as the 163 shoppers admitted to the local TJMaxx at a time. Even more impressive have been the ads on television and all over the internet encouraging people: “This is our shot. Let’s take it!” among a slate of similarly benevolent-sounding slogans. People may feel better when others hop aboard the vaccine train, and they may attempt to shame those who do not, but does any of this behavior have anything to do with whether or not the treatments will ultimately work? It seems safe to say that neither the virus nor the vaccines have any interest in the hopes and aspirations of human beings. Ironically, the pressure being put on people—the threat that vaccination will be required for travel, work and play, and the lavishing of praise upon those willing blindly to accept as-yet unknown risks—appears to be having the opposite of its intended effect.
If it were so obvious that the vaccines worked and were the only solution to our current predicament, then why would Queen Elizabeth take to the airwaves to denounce people who refuse to get vaccinated as “selfish”? Why would Tony Blair insist that we will not be free again until vaccine passports become available? Why did former Presidents Bill Clinton, George W. Bush and Barack Obama team up to produce a video in which they attempt to persuade people to get the vaccine? (Bush states in the ad, “The science is clear.” He was equally confident about Saddam Hussein’s WMDs.) Why would CNN be admonishing those congresspersons who have declined the vaccines made available to them, including those such as Representative Thomas Massie who have already recovered from the virus and therefore must have developed antibodies and T-cells in response? On its face, all of this propaganda seems vaguely insane, and it is scaring people away who might otherwise have agreed to participate in the experimental trials.
Sowing doubts even more effectively than appeals by confirmed liars in high places, more than twenty countries, including France, Germany, Italy, Norway, Finland, Thailand and, most recently, Canada, halted their distribution of the Oxford/AstraZeneca vaccine in response to a number of blood clot cases. When the cases in Norway were first reported, the trusty mainstream media went into overdrive, dismissing as “baseless” any claims of connections between the blood clots and the vaccine. It seemed strange to me that over the course of the past year, every person who died with COVID-19 was recorded as having died of COVID-19, while no one who died after vaccination was acknowledged to have been killed by the vaccine. That the blood clots, in some cases deadly, were “purely coincidental” was the judgment decreed by journalists onboard the vaccine train (before the matter was even investigated) and echoed by parrots throughout Facebook and Twitter to assuage the fears of persons who might be discouraged by the news from rolling up their sleeves. Even after AstraZeneca vaccination resumed in most of these countries, some of them changed their guidelines. France, for example, having initially claimed that the AstraZeneca vaccine showed no benefit to elderly persons, reversed course to decree that the vaccine should only be used on persons over the age of 55. Canada, for its part, announced that it would be administering the AstraZeneca vaccine only to persons between the ages of 50 and 65. The governments which stopped and then resumed vaccination claimed that they had done so out of “an abundance of caution,” but when some scientists concluded that there was indeed a connection between the blood clots and a rare autoimmune response elicited by the vaccine, they also jubilantly reported that they had found a possible cure for that problem.
By all means, take the AstraZeneca vaccine, and if you develop blood clots in your brain, then we’ll give you some other treatment to save your life! (If you have no Big Pharma stocks in your portfolio, now might be the time to buy.)
Many businesses have joined in on the public relations campaign and are rising to the challenge of convincing their customers that vaccination is the way to go. Qantas, the largest Australian airline, has adopted the punitive approach, alerting everyone everywhere that they will not be boarding any of their planes without first presenting proof of vaccination. But one company has gone above and beyond to offer what may finally be needed to convert the intransigent skeptics: Krispy Kreme. The doughnut giant has announced that anyone presenting proof of vaccination at any of their stores will be entitled to a free doughnut. Mind you, this is not a one-off promotion. Every vaccinated person is being offered a doughnut every single day that they show up at any of the Krispy Kreme locations with their trusty vaccination card in hand. Needless to say, this propitious development necessitates a revision of the Pascal’s Wager assessment:
To Vaccinate or Not to Vaccinate?
| | Take the vaccine | Don’t take the vaccine |
|---|---|---|
| The vaccine prevents transmission and infection | Those who take the vaccine will be protected from COVID-19 and will receive a free doughnut every day. | Those who do not take the vaccine will not be protected from COVID-19 and will not receive a free doughnut every day. |
| The vaccine does not prevent transmission and infection | Those who take the vaccine will not be protected from COVID-19 but will receive a free doughnut every day. | Those who do not take the vaccine will not be protected from COVID-19 and will not receive a free doughnut every day. |
Luckily there are Krispy Kreme doughnut shops dotting the vast landscape of the United States, and, more importantly, there is one down the street from me. My fate, therefore, along with that of thousands, if not millions, of my fellow citizens (including, I presume, Representative Massie) is now sealed. I will be rolling up my sleeve, not because I believe in the novel m-RNA vaccines, nor because I think that it is in my best interests to undergo an experimental treatment for a disease to which I am not vulnerable and from which I have already recovered, nor because George W. Bush and Tony Blair want me to, nor because I care what Queen Elizabeth thinks of me, nor because the only way I can ever travel to Australia again will be to “get the jab.” No, I will be rolling up my sleeve for the sole purpose of receiving a free doughnut every day henceforth. I trust that, in recognition of the Krispy Kreme executive team’s manifest magnanimity, the government will confer upon their company the label “essential business” to protect it from revenue loss in the event of any future lockdowns.
Biden bombs Syria. Mass shooters kill Asians in Atlanta and white people in Boulder. Connect the dots.
America’s Culture of Killing: It Doesn’t Begin at Home
For many years, male U.S. citizens have been required to register with the Selective Service, an independent agency within the Executive Branch of the U.S. Federal Government, so that they can be located in the event that it becomes necessary to reinstate military conscription. The most recent military draft ended after the Vietnam War, in 1973, and ever since then people have proudly pointed to the “voluntary” terms of U.S. military enlistment. That soldiers serve voluntarily is also frequently invoked in passing by cynical civilians who dismiss complaints about the plight of soldiers during wars and their aftermath. Those wont to insist, “They freely chose to enlist!” not-so-slyly suggest that perhaps we should not care so much about the thousands of homeless veterans and the epic levels of suicide among distraught soldiers, who by 2019 were ending their lives at a rate of about twenty per day.
In recent years, the question whether women should be permitted to serve as combatant soldiers has arisen, as more and more other professions have opened up to what historically has been regarded as “the gentler sex.” Until quite recently, the fighting forces of the military were always viewed as the province of men, but times have changed, causing some people to reconsider the longstanding association of the military with masculinity. There are essentially two standard arguments regarding female combatants.
First, according to what might be called the “traditionalist” approach, women are generally smaller and physically weaker than men. Their admission into the ranks alongside the physically strong males who have fought enemy soldiers one-on-one on bloody battlefields throughout history would severely compromise the military’s capacity to win its battles and, ultimately, the government’s wars. A second strand of the traditionalist view focuses on the idea that women should not be sacrificed needlessly. Women have historically been viewed as nurturing and less aggressive than men. If women were deployed evenly among the men fighting on the ground, then they would be more likely to perish than their male counterparts, not only because they are, on average, physically smaller and weaker, but also because they are less violent than men. But if large numbers of women were to perish in war, this would hurt society more generally, as women give birth to and often raise children.
The second approach, which might be termed “feminist,” holds that combatant selection should in no way depend on one’s possession or lack of a Y-chromosome. It may be the case that women on average are weaker and smaller and less aggressive than men, but that does not mean that all of them are. Over millennia, women have far more often filled the role of mother than that of breadwinner, but, again, times have changed. Today a woman can choose whether or not to be a wife and mother. Some women today serve as the CEOs of military weapons companies or even heads of state. What it means to be a liberated woman is to be able to choose between the full range of opportunities available to men. Furthermore, there are certainly examples of extremely powerful women, such as Serena and Venus Williams, who might, if they chose to fight rather than play tennis, do quite well on the battlefield. Accordingly, on this view, women should be permitted to train and compete with men for spots in even the most physically demanding of military roles, up to and including the Marines or special operations teams such as the Delta Force. The way to find out whether a woman qualifies for such a force is precisely the way in which men find out whether they qualify: through basic and advanced training which leads some candidates or their commanders to conclude that they may be better suited for less arduous roles.
In 2015, the Pentagon appeared to adopt the second, more progressive or feminist, approach, announcing that all combatant positions would henceforth be open to women. The reality, I believe, is quite a bit more crass, as evidenced by the fact that not long after women were invited to serve as combat warriors, people began discussing whether women should, along with men, be required to register for the Selective Service, so that they, too, could be called up should another military draft be instituted. This move, from permissibility to obligation, from a triumph of feminism to the severe restriction on liberty and potential enslavement of women, the prospect of their being demanded to serve in the armed forces against their will, is a curious non sequitur which seems to have gone unnoticed by the soi-disant feminists who support Selective Service registration for all. The Pentagon public relations wing naturally claims “woke” creds, but what is really going on here?
I am afraid that the traditionalist approach (which still has its adherents, for example, Fox News host Tucker Carlson) altogether misses the point of the Pentagon’s invitation to women to join the ranks of military killers. For most “combatants” in future wars will not be found on the ground battling enemy soldiers in one-on-one fights to the finish. Instead, unmanned combat aerial vehicles (UCAV), or lethal drones, will continue to be used, as they have been over the course of the twenty-first century so far, to inflict death upon enemy “soldiers” who pose no direct threat to their killers. The risks in having both men and women fight in theaters such as the twenty-year War on Terror throughout the Middle East (which has also seeped into Africa) will become progressively less physical. Because of new technology, the primary harms suffered by future soldiers will be psychological and moral. This follows from the very logic of the use of drones to kill people abroad who cannot be threatening anyone with death because they are unarmed. Least defensible of all is the incineration of persons located in countries where there are no soldiers on the ground said to require force protection. Yet this is what drone operators are trained and required to do.
One of the most significant military discoveries in the twenty-first century, all but ignored by the warmakers themselves, is that Post Traumatic Stress Disorder (PTSD), does not emerge exclusively or always as a result of traumatic experiences on the battlefield, when soldiers are forced daily to face the specter of their possibly imminent deaths as they witness people dying all around them and move through dangerous territories where IEDs (improvised explosive devices) and snipers may be hiding any- and everywhere. Protracted fear and stress can be powerful factors in the onset of PTSD, but what we have learned from its high incidence among drone and laser sensor operators is that moral trauma and conscience also play an important role. Indeed, regret for what one has done is sufficient alone to induce profound PTSD, as evidenced by those drone operators who, in states of psychological and moral despair, have opted to abandon the profession at the termination of their initial contract, even when they have been enticed to stay by the provision of generous bonus offers.
On its face, the job of a drone operator may look like a good deal, and it did to those who later regretted and renounced their vocation: garner creds as a courageous warrior by donning a uniform and showing up to work in a trailer where one “fights” the enemy on a screen from thousands of miles away. No trenches, no IEDs, and no snipers—the drone operator himself remains unscathed, indeed, untouchable by the enemy. The physical job of a drone operator involves manipulating buttons and levers, observing the enemy on the screen and remaining alert, not as a way of saving one’s own life, but to make sure that the enemy does not get away. The images of what these soldiers see on those screens and have done to those people, however, sometimes come to haunt drone operators. Watching targets for hours, days, weeks, even months, before “splashing” them with a missile and witnessing them bleed out before dying, knowing in some cases that they are leaving behind widows and orphans, if not also first-order (physical) collateral damage, exacts a steep psychological toll on some of the push-button killers.
The military will continue to become progressively more lethal to the enemy but less deadly to its own combatant or killing forces because of the manifest rationality of not needlessly risking soldiers’ lives, and the development of technology which makes that possible. If a war can be won without sacrificing a single soldier, as former Secretary of State Hillary Clinton claimed President Barack Obama did when he ordered hundreds of missile strikes on Libya in 2011, then why would any commander choose to do otherwise? This risk-averse approach to war began in earnest with President Bill Clinton, whose combat pilots flew high above their targets in Kosovo in 1999 in order to protect themselves from harm, despite the fact that by doing so they increased the risk of killing civilians on the ground. Presidents, along with the populace, care more about their compatriots than “collateral damage” victims abroad, who, being out of sight, are also out of mind.
The Libya intervention was quite far from being a success story, much less an example of, as Clinton gushed, “smart power at its best,” but it is true that no combatants were killed during the 2011 ousting of then-President Muammar Gaddafi. Ironically, U.S. State Department employees were killed in the post-war mêlée, but that was after the bombing had stopped. The country of Libya is now in shambles, but the Benghazi debacle, along with everything else which ensued subsequent to the bombing campaign, is simply written off by its architects, including former U.S. Ambassador to the United Nations Samantha Power (recently pegged to head up USAID), as unpredictable, unforeseeable consequences of a military intervention with purely humanitarian aims. In attempting to convince Obama to take action, Power compared the situation in Libya to that of Rwanda in 1994. Remarkably, having been initially disinclined to intervene, Obama was persuaded to believe Samantha Power, Susan Rice, Hillary Clinton and Anne-Marie Slaughter, the women who rallied for that war—a veritable case in point for those who claim that women can be just as aggressive as men. But was the post-war scene in Libya completely unforeseeable and unpredictable, as Power glibly maintains in her memoir? We may beg to differ with those armchair warriors who failed to draw appropriate inductive conclusions from the fall of the Taliban in Afghanistan or the removal of Saddam Hussein in Iraq, but, alas, they seem keen to ply their bellicose trade wherever and whenever it becomes possible again.
Thousands of people at the Department of Defense work full time in public relations, producing texts and media to persuade taxpayers that the government’s wars are just and right. One might reasonably wonder why, if all of the ongoing wars were in fact worthwhile and necessary undertakings, there should be any need for public relations campaigns to support them, or to lure young people to enlist. But because the necessity and justice of the nonstop bombing of people in the Middle East is far from self-evident, those paying for the carnage must continually be made to believe, against all evidence, that the soldiers killing people abroad today are just like the courageous men who defeated the Nazis in World War II. Snafus such as the photographs from Abu Ghraib prison must be explained away, and the military’s image re-burnished to ensure that young people will continue to enlist.
The “feminist” turn at the Pentagon, I submit, is just another ploy to address the recruitment crisis at a time in history when the skills required of the latest supply of cannon fodder have become significantly less physical. More drone operators are trained today than regular combatant pilots, and at some point the idea of risking one’s own life for one’s country will be deemed anachronistic and quaint. Robots with “boots on the ground” have been deployed for years, especially to assist troops in landmine-infested territories. “Ground force” robots have also been used to blow up targets, as was done, unbelievably enough, to U.S. military veteran Micah Xavier Johnson, in Dallas, Texas, on July 8, 2016, after he killed five members of the local police force. The replacement of mortal soldiers by robots will be further precipitated by the inexorable production of Lethal Autonomous Weapons Systems (LAWS), which will take human beings completely out of the killing loop once robotic killers have been programmed to gather, sort, and analyze data before selecting targets and launching missiles. Until the military has become entirely automated, however, it will continue to need human operators, and that is why women have been enthusiastically invited to join in on the killing spree.
The invitation to women to serve in combat forces has been billed as progress, evidence of how “woke” the Pentagon is, along the lines of President Biden’s appointment of the first African American Secretary of Defense, General Lloyd J. Austin. But, as in the case of Austin, the admission of women into combat forces has a subtext. The far more relevant factor in the case of Austin is his connection to military industry, the fact that he is a former board member of a company (Raytheon) which stands to profit every time Syria or anywhere else is bombed. The surface “wokeness” is just a patina, a veneer, a bit of public relations polish on what is ultimately an intrinsically pragmatic policy. The fact that Austin is black is an effective distraction from the reality of the ever-more tentacular MIC or, to be precise, the military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex. Military industry, which is funded by the Pentagon, has also gloated over its female CEOs. Meanwhile, the crisis levels of sexual abuse of female enlistees by fellow soldiers and commanding officers have been largely ignored by the military-infiltrated mass media.
The admission of women troops as combatants is not so much an affirmation of the worth of female human beings as it is a recognition that they, too, can be trained to serve as push-button contract killers. There is an ongoing, chronic military recruitment crisis because service in the bungled missions in Afghanistan and Iraq has seemed progressively less honorable as the quagmires have dragged on. Many people were willing to enlist after the terrorist attacks of September 11, 2001, but by now nearly no one (aside from war profiteers) seems convinced of the righteousness of the forever wars in the Middle East. In order for those wars to continue on, new cannon fodder, psychological rather than physical, must be found. Step right up, ladies, we have a splendid job for you, complete with benefits, pension and paid maternity leave!
The issue of maintaining Selective Service registration for men alone is now before the U.S. Supreme Court, and it would seem that, for consistency, the entire program must either be abolished or expanded to include women. Under a faux-feminist guise, some “patriots” in the U.S. Congress (an extremely important limb of the octopoid MIC) will likely rally for the expansion, which would be a severe blow to liberty lovers of all stripes, men and women alike. If it is unconstitutional to require men but not women to register for the Selective Service, now that women are permitted to serve in the armed forces, then the proper remedy can only be to abolish the Selective Service registration requirement, for involuntary service violates every person’s right to life, liberty and the pursuit of happiness, whether or not they possess a Y-chromosome.
The American Civil Liberties Union (ACLU) has raised the issue before the Supreme Court on behalf of a group of men, and it may well be that they favor abolition of the requirement. Nonetheless, should the current law, under the present circumstances, be struck down as unconstitutional, then the fact that the Supreme Court did not previously find the Selective Service registration of males alone to be unconstitutional will be invoked by hawks in the U.S. Congress to push for new legislation mandating universal registration, regardless of biological sex. The question which needs desperately to be debated now, however, is whether the creation of an entire society of push-button contract killers is something which anyone should support.
People often express consternation over how something as awful as the Holocaust could ever have transpired. It seems utterly incomprehensible, until one reflects upon the acquiescence to government authorities of individuals, most of whom served as unwitting cogs in a murderous machine. The vast majority of people in 1930s and 1940s Germany went about their business, agreeing to do what officials and bureaucrats told them to do and brushing aside any questions which may have popped up in their minds about policies preventing Jewish people from holding positions in society and stripping them of their property. For ready identification, Jews were preposterously made to stitch yellow stars onto their clothing. Later, in the concentration camps, they were tattooed with identification numbers. The rest is the most grisly episode in human history.
It is easy to say today, looking back, that we would never have supported the Third Reich and its outrageous laws, but citizens everywhere develop habits of submission to authority from an early age. Many “rule-governed” persons never pause to ask whether the current laws of the land are in fact moral, despite the long history of legislation modified or overturned in the eventual recognition that it was deeply flawed. It is understandable that people should obey the law—they are threatened with punishments, often severe, for failure to comply. But the little things do eventually add up, and one thing leads to another, with the result that the bureaucratic banality of evil diagnosed by Hannah Arendt in her coverage of the 1961 trial of Adolf Eichmann applies every bit as much to our present times as it did to the people going along to get along with the Third Reich. Of course no one is currently sending trainloads of “undesirables” to concentration camps for liquidation, but when one considers the death and degradation of millions of people in the Middle East over the course of the twenty-first century, carnage and misery funded by U.S. taxpayers, one begins to comprehend how the very mentality which permitted the Holocaust to transpire is indeed at work today. The vast majority of Western citizens freely agree to pay their governments to terrorize and attack, even torture, people inhabiting lands far away. The perpetrators call all that they do “national defense,” but from the perspective of the victims, the effects are one and the same.
The banality of evil at work today involves a profound complacency among the general populace toward foreign policy. President Biden bombed Syria about a month after becoming the Commander in Chief of the U.S. military, without even seeking congressional authority, and people barely blinked. The elimination of the persons responsible for the terrorist attacks of September 11, 2001, was achieved long ago. Yet military intervention continues on inexorably, having come to be regarded as the rule rather than the exception. The “collateral damage” victims are essentially fictionalized in the minds of the citizens who pay for all of the harm done to them. Habits of deference to the Pentagon and its associated pundits on matters of foreign policy have as their inevitable consequence that confirmed war criminals are permitted to perpetrate their homicidal programs unabated, provided only that they claim to be defending the country, no matter how disastrous their initiatives proved to be in the past. Indeed, it is difficult to resist the conclusion that the more mistakes a government official makes, the more likely it becomes that he or she will be invited back to serve again, and the more frequently his or her opinion will be sought out by mainstream media outlets.
It requires a type of arrogance to reject the proclamations of the anointed “experts,” and in the age of social media, there are always thousands of shills—both paid and unpaid—standing by to defend the programs of the powerful. Antiwar activists are very familiar with how all of this works. They are denounced as anti-patriotic, ignorant, naïve, and even evil for refusing to promote the company line. During the Cold War, the reigning false dichotomies of “Capitalist or Communist?” and “Patriot or Traitor?” held sway and, sad to say, such false dichotomies abound today. The fact that the pundits and policymakers calling for and applauding military intervention themselves often stand to profit from the campaigns they promote is brushed aside as somehow irrelevant. In contrast, antiwar voices are muted, suppressed, and censored despite the fact that reasons for opposing more war cannot be said to be tainted by mercenary motives because peace, unlike war, does not pay. It costs nothing to not bomb a country, so anyone who speaks out against the idea is not doing so in order to profit. Yet such persons are denounced and marginalized in the harshest of terms as cranks, crackpots, extremists, Russia sympathizers and more. President Obama’s drone killing czar John Brennan famously organized Terror Tuesday meetings at the White House where “suspicious” persons were selected for execution by unmanned combat aerial vehicles (UCAV), aka lethal drones, on the basis of flash-card presentations crafted from bribed intelligence, drone video footage and cellphone SIM card data—all of which is circumstantial evidence of possible future crimes. Brennan recently included libertarians among what he warned is an “unholy alliance” of “domestic extremists” in the wake of the January 6, 2021, protest at the U.S. Capitol. What happens next?
One certainly hopes that educated people are aware that Brennan’s inclusion of libertarians among his list of potentially dangerous domestic enemies betrays his utter ignorance of the very meaning of the word ‘libertarian.’ The non-aggression principle (NAP) embraced by libertarians precludes not only wars of aggression but also individual acts of terrorism. Sadly, it has become abundantly clear that the people still watching television news continue to accept and freely parrot what the mass media networks pump out despite their clearly propagandistic bias in recent years. Accustomed to heeding the prescriptions of “the experts,” people blithely listen to Brennan (and those of his ilk) despite his manifest record of duplicity regarding the drone killing campaigns, and his histrionic, even hysterical, comportment during the three-year Russiagate hunt for a Putin-Trump connection.
Neoliberal and neoconservative powerbrokers naturally wish to quash alternative viewpoints, so perhaps no one should be surprised that Brennan has attempted to discredit libertarians. After all, they pose disturbing questions such as whether all of the mass homicide carried out in the name of the nation actually helps anyone, including those paying for the carnage, or rather harms everyone, with the notable exception of those who stand to profit financially or politically from the wars. What Brennan revealed by lumping libertarians together with “domestic terrorists” is that he is not so much concerned with violent threats to the nation as with dissent from the political and warmaking authorities, a tendency which is becoming more and more marked as the Democratic-controlled Congress attempts to force Big Tech companies such as Facebook and Twitter to “do more” to prevent the dissemination of so-called disinformation. By denouncing some of the most articulate, consistent and persistent opponents of the war machine as “dangerous,” Brennan made it more difficult than it already was for those voices to be heard, much less heeded.
The current complacency of people toward U.S. foreign policy is nothing new. In every era, people any- and everywhere tend to go along to get along, whether or not they are convinced that the policies imposed upon them and their fellow citizens make any sense. In 1930s Germany, anti-Semitism was real, but part of the reason for the efficacy of the nationalist fervor drummed up by Adolf Hitler and used to support his quest for total global domination was the dire economic situation following the loss of World War I. Germany was weak and its people hungry. These conditions made it easier than usual to persuade people to comply, in the hope that their lives would be improved by banding together against what was denounced at the time as the evil enemy.
This perennial Manichean trope of political propaganda has most recently emerged in the abject, overt hatred by about half of the people of the United States of anyone having anything whatsoever to do with Donald Trump. “Trump Derangement Syndrome,” or TDS, is a genuine phenomenon, at least judging by the comportment of people online and sometimes in person as well. As bizarre as this may seem, people actually hate people who do not hate Donald Trump, having failed to understand that contradictions and contraries are not one and the same. It is entirely possible to not hate Trump while also not loving him, but attempting to expose this false dichotomy to anyone who spent the last four years wishing fervently for the former president’s demise will be met with an even more strident repetition of the very dichotomy being debunked. Again, if you happen to believe that the post-presidential impeachment trial was a waste of time and taxpayer money, then you must, according to the anti-Trump mob, love the former president. Even more remarkably, somehow over the course of the past four years a large swath of people have come to believe that seething hatred is a moral virtue, so long as it is directed at appropriate objects of loathing. But the capacity to hate one’s fellow human beings reveals absolutely nothing about the hater beyond his or her ability to hate. It certainly does not mean that they are good by contrast, and it is no mean feat of self-deception to come to believe that because one hates Donald Trump, this alone suffices to establish one’s moral superiority over all of the people who do not.
Once people become convinced of their own moral righteousness in the battle against whoever has been designated the evil and benighted (deplorable!) enemy, then it’s only a few short steps from “The end justifies the means” to “Everything is permitted.” A glaring example has been the more and more prevalent suppression and erasure of so-called disinformation, which of course lies in the eyes of the censors. The necessity of defeating “the enemy” became the basis for such curious developments as the refusal of any of the mass media networks to investigate the pay-for-play connections suggested by the contents of the Hunter Biden laptop made public during the 2020 presidential election cycle. Immediately following election day, when some people pointed out anomalies such as the appearance of vertical lines in the graphs of vote tallies in the middle of the night in multiple states—indicating the sudden addition of troves of votes none of which were for Trump—the mass media immediately, in concert, issued headlines everywhere proclaiming that any and all charges of electoral fraud were “baseless.” The point here is not that the charges had merit; perhaps some of them were indeed baseless, such as those explained away by local election authorities as clerical errors. But no one could know that allegations of electoral fraud were baseless before the matters were investigated.
Once the first step has been taken onto that totalitarian-veering path, the slippery slope of censorship is difficult to resist, and the removal from social media of thousands of conservative and right-wing accounts regarded as sympathetic with Trump and his gallery of rogues is simply not enough, according to Democratic Party elites. Despite having already propagandized much of the mainstream media (as was evident in the election and post-election coverage), the Democrats, giddy with their majority Blue-Blue-Blue capture of Washington, now wish to exert total control over what people may say, write and read. This is of course a violation of the First Amendment of the Constitution of the United States, but by achieving their goal through the indirect manipulation of private companies, which are subject to federal regulation and therefore receptive to “innuendos” on the part of legislators, they are hoping that no one will notice what has transpired—at least not before it is too late to do anything about it.
After Trump’s acquittal in the second Senate impeachment trial, the news coverage claiming that he had incited “insurrection” at the Capitol continued on, as though the facts had already been established and the outcome of the trial was entirely irrelevant. These Associated Press (AP) excerpts are typical:
“The only president to be impeached twice has once again evaded consequences…” (February 13, 2021)
“After [Trump] incited a deadly riot at the U.S. Capitol last month…” (February 14, 2021)
One might with reason wonder whether the wrongness of questioning the outcome of an election does not imply the wrongness of questioning the outcome of a trial. Of course both are perfectly permissible in a society which champions freedom of speech. What this political control of the news reveals is a republic in crisis, for if even supposedly objective news outlets such as the Associated Press reject the outcome of processes intended to ascertain the truth, then the people have no way of being able to determine what actually transpired. Similar examples of journalistic legerdemain abound in every area of importance to neoliberals, above all, in matters of war, and the mainstream media’s refusal even to discuss the plight of Julian Assange is a case in point. Assange made public evidence of war crimes committed by the U.S. government but is now being persecuted as though he were a murderer. So pathological has the mainstream press become that the only time they were able to bring themselves to praise Trump was when he ordered military strikes on the people of the Middle East.
The tech outlets have now also decided to censor alleged disinformation about the experimental mRNA COVID-19 vaccines, conflating the criticisms of persons opposed to all vaccines (the antivaxxers) with those of persons who have read the spec sheets, are aware of the data on disease prognosis, and find that the risk of possible, as-yet-unknown, long-term side effects is not outweighed by the alleged benefits of the novel technology (which, it is worth pointing out, never made it past the animal trials when it was tested in the past). Those who express concern about the Procrustean lockdowns have also been subjected to suppression of their speech. The Facebook page for the Great Barrington Declaration was taken down by censors, and Robert F. Kennedy Jr.’s Children’s Health Defense organization has also been deplatformed. But the criticisms offered by these groups are grounded in scientific literature. Indeed, the authors of the Great Barrington Declaration are in fact epidemiologists and public health scientists, yet they are summarily dismissed as quacks because they disagree with the Fauci-Gates program.
What the vast majority of people want is for the current abnormal situation to be stabilized. If that means embracing what the powers that be are calling “the new normal,” then so be it. Anyone who stands in the way of the needed changes—those who refuse to volunteer as unpaid subjects in the largest experimental trial of a novel medical device in history—is summarily denounced in the usual terms: selfish, deplorable, ignorant, inbred, racist, nutjobs, etc. It does not matter in the least whether any of the epithets are true. They are deployed indiscriminately against anyone who disagrees by the self-styled morally superior types who shill for the reigning political and corporate elites—often also for free.
The present circumstances offer the prerequisites for totalitarianism. We would do well to heed the historical record and look closely at how Nazism and Stalinism became dominant outlooks for entire populations, despite the fact that large numbers of people were destroyed by them. The total control of the mainstream media—with a specific agenda being promoted and all alternatives suppressed—and the extreme polarization of citizens under Manichean false dichotomies are everywhere on display. What’s more, in these COVIDystopic times, we are witnessing people struggling under the same economic hardships as were the people of 1930s Germany. Worse yet, after a full year of nonstop television coverage of death tolls, with nearly no effort by any mainstream pundits to place the tallies into proper context and consider how many people were dying every day before COVID-19 arrived on the scene, many citizens are understandably afraid.
Fear always brings out the worst in groups of people, who may team up against what they all decry as the evil enemy. But fear, hatred and self-deception conjoined produce a toxic soup, and we need not search the annals of the first half of the twentieth century to find evidence of this. Post-9/11, violent crimes against Muslim people (and other brown-skinned persons sometimes mistaken for “Arabs”) were on the rise. We are currently on a trajectory leading to a place where those who read the spec sheets for the “free” vaccines and then, based on that information, decline to roll up their sleeves, will be denigrated as criminals. The divisions being concretized between those healthy, robust people who agree to COVID-19 vaccination and those who demur are being strengthened by virtue-signaling campaigns making everyone who gets the vaccine believe, again, amazingly enough, that they are morally superior to those who do not. Even Britain’s Queen Elizabeth has come out publicly to denounce those who decline to participate in the experimental vaccine trials as “selfish.”
Technocrats the world over have been warning since at least April 2020 that the only way out of our current predicament will be to issue “vaccine passports” through which the healthy can be distinguished from the unhealthy. However, even if the first and second rounds of vaccines together work to prevent transmission and infection—which has yet to be established—those who have received them will not be protected from the new variants, and will need to submit to a third round of so-called booster shots, which in another six months will likely be followed by a fourth, and so on. All of this would seem to imply that the “vaccine passports” being floated by government and corporate leaders will in no way ensure that the persons carrying them are not going to contract or transmit the latest variants of the virus. So what do they really mean?
The idea that those who have accepted COVID-19 vaccines are “fit to fly,” and to work and to socialize, or even to go outside, rests on a truly Orwellian redefinition of “healthy” as “vaccinated,” even as scientists continue to warn that the virus has already transformed enough to check the already questionable efficacy of the current crop of vaccines. Those who support the implementation of vaccine passports are fond of pointing out that people traveling to Africa are required first to be vaccinated against Yellow Fever. But COVID-19 is nothing like Yellow Fever, which kills up to half of those who develop severe disease. The vast majority of persons do not need to introduce foreign substances into their bodies in order to survive COVID-19. Because the vaccines appear to mitigate serious symptoms and increase the odds of survival among vulnerable persons, they should of course be offered the option of vaccination, but it must remain their choice, since they alone will bear the brunt of any untoward side effects, which invariably arise in a small portion of the population with every vaccine.
In the Nuremberg trials, nonconsensual human experimentation was decried and judged to be a crime against humanity. But extortion, too, is a form of coercion and we should not be fooled by the latest Newspeak press releases in which “authorities” attempt both to cajole and to threaten us for defying their will. Former UK Prime Minister (and confirmed war criminal) Tony Blair has determined that vaccine passports will be our ticket to freedom. This is a shocking pronouncement because our freedom is not his or anyone else’s to withhold from us, least of all when our own person and body are at stake. It’s as though we are currently inhabiting an episode of Black Mirror (Netflix), where the dark heart of pharma-technocratic rule is working to bend us to its will, using compliant citoyens as its unwitting tools. Peer pressure, shaming, bribes and threats are nothing new, but in this case the consequences could not be more personal.
History clearly demonstrates that one repressive measure leads to another, and totalitarianism creeps in step by step, unnoticed until it is too late. From the suppression of speech to the lockdown and quarantine of healthy people to coercing or extorting them to participate in experimental trials—none of this bodes well for the future of freedom. Our rights—to speech, liberty, privacy, and the pursuit of happiness—and above all the right not to be treated as the possessions of government-funded corporations, must be defended while this is still possible. When a system is sufficiently infiltrated at every stratum by fanatics convinced of their own moral superiority and monopoly on the truth, then totalitarianism is near. It happened in Nazi Germany and it happened in Stalin’s Soviet Union. We are moving perilously close to that nightmarish reality right here and now as people redefine basic terms such as ‘sickness’ and ‘health’ and insist on exerting total control over information flow.
With the extremely rapid advances in technology made in the twenty-first century, many aspects of human life have transformed irrevocably. One of the most significant changes involves norms regarding the commission of intentional, premeditated homicide by governments. The practice is today termed “targeted killing,” but it differs only in the implement of death from what in centuries past was called “assassination” and deemed illegal. Black-ops involving shady assassins who stalk and eliminate perceived enemies under a cloak of secrecy are no doubt still carried out by governments. But the use of unmanned combat aerial vehicles (UCAV) or lethal drones to stalk and eliminate terrorist suspects in lands far away is openly acknowledged and has been largely accepted by politicians and the populace alike as one of the military’s standard operating procedures.
The use of lethal drones to kill rather than capture suspects began in Israel, but was taken up by the George W. Bush administration in the war on terror waged in response to the attacks of September 11, 2001. President Barack Obama then expanded the practice, electing essentially to eliminate the problem of longterm detention of suspects in facilities such as the prison at Guantánamo Bay by defining them as guilty until proven innocent and then dispatching them using missiles launched from drones. The suspects killed were classified posthumously as Enemy Killed in Action (EKIA) unless specific information demonstrating their innocence was brought to light. But since many of the killings took place in remote parts of the world, such as the Federally Administered Tribal Areas (FATA) of Pakistan, where there were few if any troops or intelligence analysts on the ground to do the sort of due diligence needed to establish the innocence or even the identity of the persons killed, this nearly never happened.
With the ascendance and spread of lethal drones, government officials have effectively permitted the current state of technology to dictate morality, rather than subjecting proposed tools to scrutiny before using them. This is most plausibly a result of the fact that the experts to whom politicians defer on these matters are invariably either military officers or persons with ties to military industry. Indeed, many military officers end up serving on the boards of weapons manufacturing and military logistics firms. The revolving door between government service and industry is evident in cases such as those of Dick Cheney, James Mattis and Lloyd Austin, all of whom served as secretary of defense and also sat on the boards of private military companies with sizable government contracts. From the perspective of military experts, whose focus is upon winning wars through maximizing lethality, the development of remotely piloted aircraft (RPA) has naturally been regarded as a boon, offering the possibility of combating the enemy without risking soldiers’ lives.
Yet in the development and spread of remote-control killing technology, important ethical considerations have been overlooked. First, during regular combat warfare, when troops are placed in dangerous situations, where “kill or be killed” becomes a prudential maxim for survival, many acts of killing can be construed as literal acts of self-defense. Whether or not the troops should be there in the first place, as in Iraq or Vietnam, is another matter altogether, but if a soldier is already in a perilous theater, with enemy combatants lurking around every corner, then the pretext of self-defense becomes reasonable. The same cannot be said for acts of killing perpetrated by soldiers sitting in trailers in Nevada, who are not being directly threatened by their targets.
U.S. combat soldiers on the ground in both Vietnam and Iraq killed many people who might have been insurgents but proved not to be. The veterans of those conflicts suffered enormously as a result, and many ended up permanently wrecked by the experience. Soldiers who use drones to target the enemy are far from the bloody fray and physically safe from the dangers of the “battlefield” on which they fire. Nonetheless, drone and laser sensor operators such as Brandon Bryant abandoned the profession after having become disillusioned with the disparity between what they had signed up to do (defend their country) and what they ended up doing, killing poor tribesmen living out in the middle of nowhere who were not threatening anyone with death at the time when their lives were abruptly ended.
Because drone operators follow and observe their victims for extended periods of time, and witness their anguish in the aftermath of strikes as they bleed out, they have been prone to suffer bouts of regret and develop post-traumatic stress disorder (PTSD) despite never having been directly endangered themselves. Such reflective servicepersons furthermore recognize that collateral damage, said to be unavoidable in the “fog of war,” is truly excusable only in a life or death, do or die, dilemma. Up to now, what the drone and laser operators had to fall back on was the fact that they were not in a position to be able to assess the value of the intelligence used to select targets. Their job was to locate and kill the person(s) said to warrant elimination by officers higher up in the chain of command. Accordingly, when mistakes were made, the blame ultimately rested with the analysts who had built the case for targeting on the basis of evidence gathered by drones, obtained through paid informants, and mined from cellphones. In other words, even if the drone operators themselves regretted having killed persons whom they themselves did not believe deserved to die, based on their own observation of the targets, some among them were still able to assuage their conscience by invoking the tried-and-true “invincible ignorance” line, according to which soldiers are not to blame when negative consequences arise from their having executed what to all appearances were legal orders.
But surely intelligence analysts, too, may suffer regret when obviously (or even possibly) innocent people are destroyed on the basis of the analysts’ marshaling and interpretation of the available data. Why not, then, take the fallible human being out of the loop altogether, thus minimizing the possibility of error and the human vulnerability to emotions which sometimes culminates in PTSD? If it was better for soldiers in trailers in Nevada to kill thousands of terrorist suspects throughout the Global War on Terror, rather than having them fly dangerous combat missions, would it not be even better to relieve all parties involved of the burden of having killed?
Despite the moral dubiousness of killing “enemy soldiers” who are not directly threatening anyone with harm, and a fortiori in countries where there are no allied combat soldiers on the ground said to require force protection from above, remote-control killing technology continues to be refined and extended with the aim of making drones both more efficient and more lethal. Consequently, a world in which robots “decide” whom to kill, as in dystopic films of the twentieth century such as Terminator, Robocop and their sequels, is no longer the mere fantasy of writers of speculative fiction. Lethal Autonomous Weapons Systems, with the proper-sounding acronym “LAWS,” are currently being pursued as the best way both to keep soldiers off the battlefield and also to minimize the errors invariably committed by all-too-human operators in drone warfare. From a purely tactical perspective, an obvious benefit of LAWS is that with this new technology, which takes human beings “out of the loop,” when mistakes are made, there will be no operator who must bear the burden of knowing that he killed people who did not deserve, much less need, to die. Indeed, arguably the most significant benefit to the military in rolling out LAWS will be the elimination of PTSD among drone operators who deeply regret their participation in the serial, mass killing of persons who posed no direct threat to their killers when they were incinerated by missiles launched from drones.
With LAWS, the responsibility for mistakes made can be almost completely diffused, for computers will not only gather and analyze the data, but also select the targets on the basis of that data, and then launch the missiles themselves. The magnitude of the mistakes made will vary from case to case, but so long as human beings are involved in the construction and programming of the machines used to kill, then the potential for error will obviously remain. There may still be a bit of room left for soul searching among those who programmed the computers, but they will always be able to absolve themselves by pointing to the inherent limitations of data collection. Without perfect information, mistakes will continue to be made, but the lengthier the causal chain, the fewer individuals there will be who feel the need to shoulder any blame.
From a tactical perspective, all of this may sound very logical and clearly better than having soldiers risk their lives, and analysts and operators suffer psychological distress upon learning that they contributed to the carnage when innocent persons are erroneously destroyed. The first premise in the inexorable march toward Lethal Autonomous Weapons Systems, however, that the killing will happen, with or without human operators and analysts, needs to be subjected to scrutiny. What has propelled the mad rush to develop and implement LAWS is the false assumption that the killing ever needed to happen in the first place. The governing idea has been that because the persons being targeted have been determined to be potentially dangerous, they might undertake to threaten people at some future time, if they are permitted to live. In other words, the victims are being preemptively eliminated, following the reasoning used to promote the 2003 invasion of Iraq, when the warmakers claimed that Saddam Hussein posed a threat to the world because of his alleged possession of weapons of mass destruction (WMD). That pretext was of course later found to have been false, along with others, including the claim (obtained through torture) that the Iraqi dictator was somehow in cahoots with al Qaeda. Yet the war went on all the same, with some pundits and war supporters filling the justificatory void with the tried-and-true need to spread democracy.
In the maelstrom of the wars on Afghanistan and Iraq, assassination was simply rebranded as targeted killing, when in fact both practices involve the intentional, premeditated elimination of persons deemed potentially dangerous. This criterion is so vague as to permit the targeting of virtually any able-bodied person who happens to be located in a place where terrorists are suspected to be. The only differences between assassination and targeted killing are the nature of the weapon being used and the fact that soldiers wear uniforms, while undercover assassins and hitmen do not. But are these differences morally relevant?
Unfortunately, over the course of the more than twenty-year Global War on Terror, there has been no attempt to reckon with the facts. But if the war on Iraq was a violation of international law, then every person killed in the conflict was the victim of a crime. Because of the shock of the events of September 11, 2001, however, most of the people who pay for the military’s killing campaigns have gone about their business, allowing the government to use their tax dollars to kill people who had nothing to do with the terrorist attacks, and in many cases were protecting their own land from illegal invaders. Twenty years on, the military continues to kill people when and where it pleases under the pretext of the need to fend off the next terrorist attack. That millions of persons have been killed, maimed, widowed, orphaned, reduced to poverty and/or rendered refugees as a result of the ever-expanding missions of the U.S military in the Middle East and North Africa—most of which were caused by overspill of previous missions, beginning in Afghanistan and Iraq—has been largely ignored.
The “killing machine” has been on autopilot for some time now, in the sense that lists of targets continue to be drawn up and dispatched with the killers themselves writing the history of what transpired. The wars on Afghanistan and Iraq gave rise to new terrorist groups such as ISIS, which then spread to countries such as Pakistan, Yemen, Libya, Syria, Mali, and beyond. Subsequent interventions in those lands then led to the spread of factions throughout Africa, where drone bases have been erected in several countries to deal with the problem of radical Islamist terrorism. With LAWS, the perpetual motion targeting of unnamed persons can be expected to be revved up to run even faster, around the clock, for robotic killers suffer neither compunction nor fatigue, and the success of their missions will continue to be measured by the number of “dead terrorists”, who in fact are suspects. In other words, the ethical problem with LAWS will remain precisely the same as the ethical problem with the drone program through which human operators have pressed the buttons to launch the deadly missiles.
The debate over LAWS should not be over how to make robots act as human beings might. Rather, we must pause and back up to ask why anyone would ever have thought that this rebranding of assassination as the military practice of “targeted killing” should be permitted in the first place. The fallacy in thinking that lethal drones and LAWS “protect the troops” derives from the assumption that the people being killed would have been killed had this technology never been developed. The truth, however, is that the many drone bases now peppering the earth have served as a pretext for launching missile attacks which would otherwise never have occurred. With such tools at their disposal, military and political administrators are apt to use them without thinking through the moral implications of what they are doing, specifically ignoring the long-fought advances in criminal justice made over millennia, above all, the presumption of innocence upheld in free societies the world over.
Drones were originally deployed for surveillance purposes, but it did not take long before they were equipped with missiles to provide a dual-function machine capable of both collecting data and taking out enemy soldiers based on that data. Most of the individuals eliminated have not been identified by name, but in some cases specific persons have been hunted down and killed, as in President Barack Obama’s targeting of U.S. citizens Anwar al-Awlaki and Samir Khan in Yemen in October 2011, and Prime Minister David Cameron’s killing of British nationals Reyaad Khan and Ruhul Amin in Syria in August 2015. More recently, on January 3, 2020, President Donald Trump targeted top Iranian commander Qasem Soleimani, who was located in Baghdad at the time. Trump openly avowed that the act of killing was intentional and premeditated. According to the president, the major general was responsible for past and future attacks against the United States. All of these eliminations of specific, named individuals would have been considered illegal acts of assassination in centuries past but are today accepted by many as “acts of war” for the simple reason that they are carried out by military drones rather than spies.
The ethical problems with lethal drones have been raised many times by activists, who have protested the killing of persons in countries such as Pakistan, with which the United States is not even at war, and also by successive U.N. Special Rapporteurs on Extrajudicial, Summary or Arbitrary Executions (Philip Alston, Christof Heyns, et al.). The rapporteurs have repeatedly cautioned that the extension of the right to kill anyone anywhere at the caprice of the killers, which has been assumed by the U.S. government in its wide-ranging drone-killing program, can only sabotage the prospects for democracy in lands where leaders opt to eliminate their political rivals, facilely denouncing them as “terrorists” while pointing to the precedent set by the United States, the United Kingdom, and Israel. Needless to say, the literal self-defense pretext does not hold when leaders choose to use remote-control technology to hunt down and assassinate specific persons rather than charging them with crimes and allowing them to be judged by a jury of their peers. But, just as in the case of unnamed targets, when the victims of drone strikes are identified by name, they are invariably labeled terrorists, with no provision of evidence for that claim.
With LAWS comes the specter of fully normalized political assassination with no territorial boundaries whatsoever. The question, then, is not “how do we devise the best algorithms with which to program robotic killers?” Instead, we must ask why homicide should be used in cases where the decision to kill is clearly not a last resort, as it never is in drone killing outside areas of active hostilities, because no human being will perish if the missile is not launched. In expanding the drone program, the Obama administration carried out many “signature strikes,” where the precise identity of the targets was not known but their behavior was said to be typical of known terrorists. In addition, cellphone SIM card data was used to identify persons who had been in contact with other persons already believed to be terrorists or found to have connections to known terrorist groups. To execute persons on the basis of such circumstantial evidence of the possibility of complicity in future terrorist acts is a stunning denial of the human rights of the suspects, and flies in the face of the democratic procedures forged over millennia precisely in order to protect individual persons from being killed at the caprice of those in positions of power. This drone-killing procedure in fact exemplifies the very sort of tyranny which finally led Western people to abolish monarchic rule and establish republican constitutions protective of all citizens’ rights. As the mass collection of citizens’ data continues, such moral concerns are more pressing than ever before, for political leaders may decide to use their trove of intelligence to eliminate not only citizen suspects located abroad, but also those in the homeland.
What needs to be done is manifestly not to make machines more efficient and lethal killers. Instead, we need to revisit the first premises which were brushed aside in all of the excitement over the latest and greatest homicide technologies deployed in the twenty-first century, when the U.S. government was given free rein to pursue the perpetrators of the crimes of September 11, 2001. That license to kill with impunity was never revoked, and to this day the drone killing machine continues to be used to destroy people who had nothing whatsoever to do with what happened on that day. With the diffusion of responsibility inherent to LAWS, a truly dystopic future awaits, as the criteria for killing become ever more vague and accountability dissolves altogether.
There has been a lot of discussion and some action on the question whether statues portraying or representing men currently regarded as scoundrels by self-styled “good people” should be permitted to stand. On its face, such a view would seem to imply that many of the public squares and buildings of the great cities of the world must be razed, which strikes me as a reductio ad absurdum. Pick any leader you like: Churchill, Truman, De Gaulle, in any country, at any time, and look closely enough at his record and you will find dubious decisions made with deplorable consequences. The leaders who saved the world from the Nazis may be considered heroes today, but that does not imply that they were somehow flawless, as is nowhere more obvious than in what happened in the Soviet Union after World War II, when millions of Russians became the victims of a regime which had worked with the governments of the United Kingdom and the United States to halt Hitler’s mad quest to conquer the world.
Looking at more recent leaders, there are a good number of libraries, institutions, and buildings dedicated to men such as George H. W. Bush, George W. Bush, and Tony Blair, who together wrecked Iraq after concocting bogus pretexts for the invasion of a sovereign nation. For his part, Barack Obama attacked Libya before leaving it in shambles and dramatically increased the use of lethal drones to kill suspects abroad, including U.S. citizen persons of color who were executed without indictment or trial. Obama also dropped bombs throughout his two-term presidency (an average of seventy-two per day in 2016), targeting seven different countries across the Middle East and Africa. The military policies of each of these men have caused untold human misery, yet buildings and foundations continue to be named after them.
Those who wish to raze statues and rename buildings are for some reason not talking about their contemporaries, and the idea of prosecuting men such as Bush, Blair and Obama at the International Criminal Court (ICC) for crimes against humanity does not seem to cross their minds. Indeed, we find celebrities such as Ellen DeGeneres and former first lady Michelle Obama entirely willing to overlook the war crimes of their buddy George W. Bush. President Obama himself opted not to prosecute those responsible for the Bush administration’s widely decried torture of human beings at Abu Ghraib, Bagram, and Guantánamo Bay prisons, among other places. Obama claimed, “That’s not who we are,” but effectively left torture as an option on the table for other administrations, including his own. He also “solved” the problem of the extended detention and mistreatment of terror suspects never charged with crimes by defining them as guilty until proven innocent and incinerating them with missiles launched from drones.
Remarkably, despite the horrors perpetrated under their watch, the esteemed opinions of George W. Bush, Tony Blair, and Barack Obama continue to be sought out. As far as I can tell, many people are entirely ignorant of the foreign policy record of Barack Obama, whose reputation seems to have received a big boost from the brash and boisterous demeanor of his successor. Mention Libya to a fan of Obama, and you are likely to receive a puzzled look in response. It was not, of course, Obama’s intention to catalyze a resurgence of black African slave markets in Libya through his ousting of Moammar Gaddafi in 2011, but that was nonetheless one of the consequences. When it comes to relatively mild-mannered men such as George W. Bush and Barack Obama, the prevailing prioritization of intentions over consequences translates smoothly into a willingness to forgive the perpetrators of catastrophic campaigns of mass homicide along the lines of the tried-and-true just war line: They meant to do good. So powerfully does the assumption of good intentions among compatriots hold sway over people that even Henry Kissinger, despite his role in perpetrating and perpetuating the Vietnam debacle, which resulted in millions of deaths, has managed somehow to continue to be revered, at least in some circles.
The same charitable interpretation is not, however, extended to the men whose effigies have been damaged or destroyed all over the United States in something of a mad frenzy to decry them as evil, while highlighting the protesters’ goodness by contrast—if only to themselves. Dozens of statues and monuments have been vandalized—spanning the time period from Christopher Columbus to Ronald Reagan—but the “cancel culture” crowd has focused especially on what have been interpreted to be the racist overtones of effigies of Confederate soldiers and officers from the Civil War, as a result of which slavery was finally abolished. As educated people know, the Civil War did not commence as a simple one-issue battle over whether slavery should be permitted, any more than the United States entered into the mêlée of World War II “in order to save the Jews.” (What went on at the concentration camps was discovered upon, not before, the liberation.) At the end of a conflict, when history is written by the victors, moral motivations are invariably emphasized over what were originally political reasons for taking up arms. In the case of the Civil War, economic objectives among secessionists and federalists, including President Abraham Lincoln, were what gave rise to the war. Nonetheless, the abolition of slavery is naturally viewed as a felicitous consequence of the loss of the war by the Confederate army.
I am not interested in debating the virtues and vices of the many men throughout history who held slaves, as did some of the founding fathers of the United States, but would like to suggest, rather, that calls for the destruction of statues and the metaphorical burning of texts, better known as censorship, are misguided. This is, first, because such works have always and everywhere been the result of intelligent human beings’ acts of creation. It is true that nearly no one knows anymore who created the vast majority of public squares and statues. In the case of structures erected in ancient and medieval times, it is plausible that they were produced under duress by persons enslaved, because that’s how things were done back then. But the fact that the Roman Colosseum was built with the blood, sweat and tears of slaves made to realize the vision of non-slaves (Emperors Vespasian and Titus) does not imply that the structure should be erased from the face of the earth. To do so would accomplish nothing beyond depriving the world of memory traces of centuries past.
What we know about the more recent structures being defaced is that their production involved the industry and creativity of artists who were not slaves. There is a reasonable sense in which the statues can be viewed as works of art rather than effigies to bad men who no longer exist or insane calls to make slavery legal again. The idea that destroying such works will somehow diminish racism rests on the entirely false view according to which an artifact has a single, definitive, immutable interpretation. This is a profound misunderstanding of the nature of art. A statue of a Confederate soldier can be as much a tribute to all soldiers who fight and lose as it is to any individual racist’s beliefs. Because at most one side in a war can be in the right (though both can be wrong), at least half of all soldiers are fighting for an unjust cause; the concept of “invincible ignorance” has throughout history been invoked to protect the soldiers on the losing side who followed what to all appearances were legal orders in promoting a mistaken leader’s cause. Again, a fine-grained study of history reveals that many men who fought on the side of the Confederacy were not doing so out of hatred for African Americans and were not in fact slave owners. Moreover, there were racists among the Union army’s ranks, and some among them did own slaves. It is true that, had the Confederate army prevailed, then slavery would likely have lasted longer in North America than it did. It is also true, however, that many other countries managed to abolish slavery without such a bloody and protracted war.
When we allow one person or small group’s interpretation alone, the least charitable of all possible interpretations, to dictate the meaning of a work, we are allowing them to delimit reality precisely in the manner of a tyrant. Works of art, like written texts, are by their very nature subject to multiple interpretations, and their meaning to an individual comes finally to be a function of that person’s background beliefs. Non-didactic artists and writers may not even know what “message” they were attempting to convey when they created their works, and they often discover meanings only after the fact, while considering the object from the perspective of a reader/interpreter. The fact that slavery was widespread throughout the ancient world does not imply that all ancient texts should be erased and artifacts destroyed. The fact that Aristotle may have believed that women were only partial persons does not imply that we must interpret his every word as expressive of that idea. Nor should the fact that a few pseudo-intellectual Nazis took the work of philosopher Friedrich Nietzsche as supportive of their ridiculous Weltanschauung lead us to burn all of Nietzsche’s books.
In a truly liberal society, such as that championed by nineteenth-century English philosopher John Stuart Mill, according to whom the “marketplace of ideas” is a cornerstone of democracy, the possibility of learning and correcting the errors in our ways mandates that we remain receptive to other people’s points of view. Even if we find the views of others despicable, this does not imply that they should be effaced. Practically speaking, it would be highly imprudent to prevent everyone who disagrees from speaking, because then we would never know who our true enemies are. But the primary reason for opposing censorship is that no censor has a monopoly on the truth. We are all wallowing in beliefs forged through a random assortment of arbitrary interactions, processing information as we encounter it, accepting some claims while rejecting others, based, ultimately, on how they cohere with our current worldview.
In Ray Bradbury’s classic novel of speculative fiction, Fahrenheit 451, the government is run by leaders who believe that they are right and anyone who disagrees is wrong, and that books which promote other, “dangerous” worldviews must be destroyed in order to prevent them from poisoning people’s minds. The most famous works of dystopic fiction, including George Orwell’s 1984 and Aldous Huxley’s Brave New World, also caution against the perils of permitting small committees of fallible people to delimit the contours of reality as they please. Reading such works, in which censorship is a key component of governance, it may seem obvious that the worlds depicted are anti-humanist. These are worlds where intelligent persons are treated as criminals for having ideas which differ from what has been deemed the proper way of viewing things. Despite the popularity of such works of fiction in recent times, an ever-increasing number of people have come to believe that we should topple the statues of “bad men” and remove classic works of literature from school curricula. The next step down a slippery slope should have been predicted when all of this began: some people should not be permitted to speak in the public square, for their words may be dangerous.
The recent removal of President Donald Trump from both Twitter and Facebook, on the grounds that he allegedly used the platforms to foment violence, specifically, the raucous protest at the U.S. Capitol on January 6, 2021, has been applauded by self-proclaimed “liberals” all over the planet, despite the fact that Trump explicitly dissociated himself from those who perpetrated the violence, and publicly denounced it as wrong. Note that no one held Democratic Party leaders or candidates responsible for the many violent protests and widespread looting in cities all over the United States throughout 2020. Yet Michelle Obama, who in befriending George W. Bush let war crime bygones be bygones, wrote a two-page letter exhorting the Big Tech social media giants to permanently bar “infantile” Trump from sharing his words with the world, claiming that he was responsible for what happened on January 6. Her sentiments were echoed and amplified throughout the mainstream media, with many people parroting the refrain that it is the prerogative of private companies to conduct their businesses as they please, and offering a pithy response to those who disagree: “If you don’t like it, leave.”
It is true that the Big Tech companies are not bound by the First Amendment to the United States Constitution, which protects the free speech of citizens. Twitter and Facebook have been censoring, “curating” content, and banning users for years now. In recent times, and most aggressively during the 2020 election cycle, they have adopted the measure of attaching “child safety” warnings to posts, alerting users that content is dubious according to the company’s fact checkers. What happened in this case, however, was startling because one of the alternatives to Twitter, Parler, where people banned from Twitter or annoyed by its censorship or “child safety” warnings sometimes migrated, was removed nearly immediately from the Google, Amazon, and Apple app stores, effectively suppressing the speech of everyone who did not like Twitter and left (as they had been told to do). The symbiosis between the government and Big Tech became undeniable when incoming President Biden tapped Facebook executives to be a part of his cabinet, and one hopes that lawsuits charging monopolization will put an end to the squelching of alternative views and ensure that history, though continuously rewritten by the victors, will not be entirely erased.
Donald Trump’s tweets were certainly a bizarre phenomenon in U.S. political history and surely one of the reasons for his polarizing presence the world over, with people revering and reviling him in close to equal numbers—at least judging by the outcome of the 2020 presidential election. For Big Tech to effectively uphold Hillary Clinton’s “deplorable” trope in favoring one half of the country over the other can only exacerbate the deep divisions already on display. Michelle Obama is entitled to her opinion, but so are the more than 74 million people who voted for Trump in the 2020 election. Donald Trump was the president of the United States for four years, and his often emotive tweets are historical texts, for better or for worse, which should be accessible to all.
Unfortunately, the so-called liberal people who demand and applaud the silencing of Trump do not appear to understand what they are advocating. They do not appreciate that humankind has progressed over millennia as a result of a lengthy experiment in testing out various ways of looking at the world. To claim, for example, that the works of Mark Twain should be censored because they contain the word ‘nigger’ is to forget that societies have changed in part as a result of people’s having read such books. It is in fact arguable that to remove a statue of Robert E. Lee would do no more than to hasten the process of forgetting how wrong human beings were, for most of history, in upholding slavery, not only in the United States, but all over the world. Likewise, from an antiwar perspective, every monument to World War I (and they exist in many different countries) stands as a testament to the sheer insanity of sending millions of young men to their deaths for essentially nothing. Again, one look at the Vietnam Veterans Memorial in Washington, D.C., where the names of the many men who died in a misguided war are inscribed, serves as a reminder not to make the same mistake all over again.
To oppose the razing of historical structures on the grounds that they were commissioned by or represent “bad” people is not to deny that communities have the right to decide how to decorate their public spaces. Sometimes sculptures in parks and squares are removed or replaced, but to pretend that unilateral acts of vandalism will reduce racial tensions is delusive in the extreme. It is both puerile and false to assume that a single interpretation exhausts the value of any work of art or text. The truth is not a function of whatever arbitrary group of people happen to have gained access to the corridors of power or whatever angry protesters have taken it upon themselves to wreck what they happen not to like, based on their own, necessarily limited, interpretations. Rational adults are aware that all people are fallible and that no one can pass strict “purity” tests.
Should the Roman Colosseum, which was not only built by slaves but also used as an arena for gladiator battles between slaves forced to fight to the death for the entertainment of the upper classes, be pulverized, and the land on which it stands be turned into a parking lot? That is the direction in which this childish movement is going. To cancel culture is to cancel the extant evidence of the process human beings have gone through to get where they are today. The Nazis attempted such a cancel culture project, denouncing modern forms of art as “degenerate” and permitting only aesthetic views in conformity with their delusive project of furthering the Aryan race. Some may take offense at my comparison of these two cases, but they differ only in details, not in approach. Both define cultural history as having effectively ended with the current view of those who would erase the past and dictate all that may exist in the future. Both are obtuse, shortsighted and small-minded.
Despite their foibles and deficiencies and myopia and biases, some human beings nonetheless make the effort to contribute their intellectual energy to produce new works and texts for our consideration. We need not and will not like all of them, but to say that we should destroy them is tantamount to the tyrant’s decree “Off with their heads!” in response to the annoying dissenters who may emerge among his ranks. The censors of tomorrow may not agree with your interpretations and may decide that you need to be canceled at their caprice. The grandest irony of all is that if such a view had prevailed in the pre-abolition United States, with the suppression of the texts and speech of all those who disagreed with the laws of the land at that time, then slavery would never have been abolished.
Imagine a world where you were required to cover your mouth and nose whenever you stepped out of your home or when anyone came to your door. No one would know when you smiled or frowned and the difficulty of communicating the ideas you attempted to share would eventually deter you from saying much of anything at all, frustrated as you would be by the annoyance of always having to repeat yourself. There would be no point in asking anyone questions requiring more than a “yes” or “no” answer, because more complicated replies would be muffled by their masks and not worth the time and effort needed to decipher.
Imagine a world where all inhabitants of a city, state or country were told where they could go and what they could do, not only in public, but also in their homes and in privately owned businesses. Healthy persons would be quarantined to prevent other people from becoming ill. Good citizens would be enlisted and surveillance and tracking apps used to identify anyone who refused to abide by emergency lockdown and curfew orders or the required hygiene measures. Small business owners would be fined for attempting to run their businesses or neglecting to enforce emergency laws. Employees would be arrested for attempting to go to work.
Imagine a world where private tech companies collaborated with government bureaucrats to censor your written speech. You would not be permitted to share texts which conflicted with the official story of whatever the authorities claimed that they had done and were doing and wanted you to believe. You could still write texts, on your own computer, but there would be nearly no one around to read what you had to say. The censors could not suppress texts faster than they could be written and shared, however, so some would slip through. This would necessitate visits from the state police to the homes of those who had attempted to incite violations of any emergency laws which happen to have been enacted by administrators to protect their constituents. Whether or not the measures actually helped anyone would be entirely beside the point, because everyone knows (from all of the “just wars” throughout history) that all that matter are the lawmakers’ publicly professed intentions to do good. The perpetrators of what were deemed dangerous texts would be arrested and taken away, if necessary, by force.
Imagine a world where journalists were required to promote the official government line in order to keep their jobs. No text or report which reflected poorly on the military-industrial-congressional-media-academic-pharmaceutical-logistics-banking complex would be allowed. A few of those who “indulgently” refused to comply might then impudently begin their own publications, such as The Intercept, issuing their interpretation of what was going on in the few sequestered places (made difficult to find by Google) where independent journalism was still possible.
Imagine a world where the independent media, too, had been infiltrated by persons keen to hold the line, to defend what they had been persuaded to believe (by hook or by crook, carrots or sticks) must be upheld as the truth. Anyone who attempted to share inconvenient “disinformation” or “fake news”, as it would be denounced, would then have their work edited to conform with “the official story”. The “traitors” (as they would be characterized) who disagreed would have two choices: either to stop writing or to flee to another place with even fewer readers than before, such as Substack.
Imagine a world where publishers who revealed crimes committed by governments would be subject to criminalization: arrest, incarceration, isolation, extradition and more. Those who exposed murderous crimes would themselves be treated as though they were violent criminals, even when they had never in their lives wielded any implement of dissent beyond a pen.
Imagine a world where oppressive lockdown and curfew policies were said to be necessitated by case surges of “infections” in persons many of whom, while testing positive, manifested no symptoms at all. Suppose that the tests being used were revealed to be notoriously inaccurate, by some estimates, 90% inaccurate. Yet the testing continued on, ever faster and more furiously, and the case surges would serve as the basis for preventing healthy people from living their lives. When vaccines emerged, everyone who tested positive before but survived would still need to be inoculated, because, the “Listen to the Science” crowd would insist, it might be possible to become reinfected. People who had already survived the dreaded disease would only know that they were safe and not a menace to public health if they took the new vaccines, whatever they were, and whether or not they had been demonstrated to prevent infection and transmission, and no matter what the unknown side effects might be. Because, obviously: Science.
Imagine a world where people with life-threatening diseases were required to postpone their treatment because another disease, 95% of whose victims were octogenarians or older, had been designated by select “expert” epidemiologists as more dangerous and life-threatening than cancer, heart disease, stroke, and the other top killers of human beings. Imagine a world where distraught and desperate people reduced to poverty and rendered homeless through not being permitted to work began turning to deadly drugs such as heroin, sometimes fentanyl-laced, with the result that, in some cities (such as San Francisco), more persons died of overdoses than of the disease serving as the pretext for the laws forbidding those people from working.
Imagine a world where citizens were required to undergo medical treatments not known to prevent disease but believed to alleviate the symptoms of a disease from which the vast majority of humanity suffers only minor harm. This would be undertaken in the name of public health, but the effect would be to harm some of those who were not vulnerable to the disease and essentially had been tricked or coerced (since uninformed “consent” is not really consent) into volunteering as subjects in an enormous experimental trial with the aim of determining the outcome of introducing into human bodies certain foreign substances deemed potentially profitable by the companies which produced them. Most people would line up enthusiastically for such vaccines on the basis of widely disseminated claims of 90% and 95% efficacy lauded by well-respected experts, with details about the “known unknowns” and “unknown unknowns” available only in the fine print of a few more nuanced articles which nearly no one read.
Imagine a world where people who had already survived the dreaded disease and also had been vaccinated were nonetheless required to abide by all of the ongoing hygiene measures, from wearing a mask, to staying home, to taking more vaccines on a schedule determined by their government. There would be no need to explain what any of this was for because all good citizens would already be accustomed to reciting the mumbled refrains (behind their masks): “Extraordinary times call for extraordinary measures!” and “We’re all in this together!” Everyone would have to comply, everyone would need to be quiet, everyone would be required by law to roll up their sleeves for the clearly compelling reason that a global pandemic was tearing through the world like a tsunami, wiping out everyone in its path. Except that most of the victims were dying at the same rate and age as actuarial tables would have predicted even if the culprit virus had never arrived on the scene. And the death toll over the course of the year would be about the same as for any other year, but with a slightly different distribution in causes of death.
Imagine a world where you were required to present your health record on demand and you would not be permitted to enter stores, restaurants, schools, to work or to travel without first proving that you had agreed to participate in an experimental vaccine trial for a disease from which you were at minimal risk of harm. The local health authorities would determine when you needed to present yourself again, for a new treatment, as the virus in question could morph unpredictably over short intervals of time into something else, thus necessitating that you and everyone else on the planet prepare your bodies once again, just in case this time around it might be more dangerous to you and those around you.
Imagine a world where a healthy person’s refusal to undergo medical treatment for a possible future disease to which he was not vulnerable, according to all available statistical indicators, was taken as proof of his suffering from another disease, Oppositional Defiant Disorder (ODD), as clearly indicated in the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM). The person thus diagnosed would be required by law to submit to whatever medication would make him more amenable to the other forms of medical treatment to which he was opposed because obviously there would be something very wrong with him, constituting as he would a grave danger to public health.
Imagine a world in which children were taught from an early age that it was unsafe to touch other human beings or to be touched by them. They would be required to wear masks and full-face plastic shields and to wash their hands frequently and to attend school by video conference because, they would be sternly instructed, otherwise they might kill somebody else’s parents or grandparents, even though they themselves were not ill. If the children found any of this a source of anxiety, they would be prescribed psychiatric medications to transform their view of the world, so that they would accept rather than reject what they were told were “the new normal” contours of reality.
Imagine that everyone around you embraced all of the above and undertook public shaming campaigns against anyone who disagreed. Their faces would turn red and they would shriek in righteous indignation, “Listen to The Science!” whenever anyone attempted to point out the manifest absurdity of what was going on. They would denounce as degenerate, anti-science, anti-vax ignoramuses anyone who pointed out that pharmaceutical firms are profit-driven, publicly traded companies whose success depends on their ability to develop, produce, and market new wares.