‘Serve and Protect’? Eighty Percent of Criminal Charges Are for Misdemeanors

A recent meeting by a North Carolina state government task force underscored that the mission today of American police forces may well be less to “serve and protect” and more to “harass and extract.”

“Of North Carolina’s 1.9 million criminal charges, 1.6 million of those are misdemeanors,” reported the N.C. Insider (subscription required). This statistic was revealed by Jessica Smith, a professor of public law and government at the UNC School of Government, to members of the N.C. Task Force on Racial Equity at an August 20 meeting.

Smith told the work group that only 6.7% of those misdemeanors were considered violent. “I would say that the justice system is largely a non-violent misdemeanor system,” Smith added.

According to the news account of the task force meeting, Smith said that “the majority of the nonviolent misdemeanor charges are traffic, including speeding, driving with a revoked license, expired registration or not having an operator’s license.” Moreover, the article continued, Smith noted that “outside traffic violations, the most charged misdemeanors are larceny, possession of drug paraphernalia, possession of a half-ounce of marijuana and possession of marijuana paraphernalia.”

Clogging up the state’s court system, according to Smith, are cases of minor, victimless offenses.

Smith pointed out some of the most absurd misdemeanors consuming the state court system’s time. These included “not having a city dog tag, leash law violations or having tinted windows,” according to the news report.

North Carolina’s trends mirror the national data.

In this 2019 Equal Justice Initiative article, former federal public defender and legal scholar Alexandra Natapoff “estimates that misdemeanors comprise approximately 80 percent of all arrests and 80 percent of state dockets, based on arrest data from the FBI and other statistical reports.”

Natapoff concludes from her research that, “Misdemeanors are moneymakers for local jurisdictions,” adding that “Because they fund courts, probation offices, public defender and prosecutor offices, and even the general budget in some jurisdictions…misdemeanors function as a regressive tax policy that shifts costs for basic services to the poorest citizens.”

Legislators create more and more violations, making it virtually impossible for the average citizen to make it through the day without breaking one of them. This is on top of laws, like those against drug possession, that prohibit “unapproved” behavior in which there is no actual victim. The criminal justice system has been turned into more of a cash cow extracting fines and penalties from peaceful citizens than an institution protecting them from the aggression of others.

Overcriminalization has led to overpolicing. It’s become so ludicrous that, according to a 2019 report by the Vera Institute of Justice, an arrest is made every three seconds in America.

The report notes that “fewer than 5 percent” of the arrests are for serious violent crimes, and furthermore that “the authors of the study suggested that arresting large numbers of people for nonviolent or comparatively minor offenses can effectively undermine the trust and legitimacy that effective law enforcement requires.”

This mass of arrests and police interactions with citizens comes, remarkably, at a time when violent crime has been decreasing. This 2019 Reason article noted that the violent crime rate fell another 3.3 percent from 2017 to 2018, after a “reduction of violent crime by roughly half since 1993.”

Because of the rising trend of overcriminalization, Reason reported that “about 6.4 percent of Americans born before 1949 have been arrested, compared to about 23 percent of those born between 1979 and 1988.”

Unsurprisingly, Reason noted that “Drug arrests have grown increasingly common, now representing 9 percent of arrests for men and 8 percent for women,” and further that “11 percent of arrests of women and 16 percent of those of men are for underage drinking.”

Being arrested for even such petty, nonviolent transgressions can cause long-lasting damage to the lives of those charged. Stiff fines can put low-income people into debt that takes years to climb out of, and adding a misdemeanor to one’s record can create significant barriers to employment.

And of course, having so many interactions between citizens and police increases the odds of more interactions turning violent or deadly.

We are taught in elementary school that our government exists to secure our rights to “life, liberty, and the pursuit of happiness.” Police are to be deployed as a means to protect us from those who would violate such rights.

Sadly, we are way beyond that point. Legislators create countless laws to restrict or mandate behaviors having nothing to do with protecting our basic rights. Police are dispatched to enforce these rules, making criminals out of peaceful people who never aggressed against anyone.

The criminal justice system has turned into a money-making machine, punishing millions of victimless misdemeanors to collect fines to pay the people running and enforcing the system. Like everything else it touches, the state has turned the criminal justice system into a means to enrich itself at citizens’ expense.

Bradley Thomas is creator of the website Erasethestate.com and author of the book “Tweeting Liberty: Libertarian Tweets to Smash Statists and Socialists.” He is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter @erasestate.

Debunking Marx’s ‘Iron Law of Wages’

Does a competitive, free market capitalist system drive down wages for the common man?

That’s the question I was confronted with in a recent exchange I had with a Marxist on Twitter.

My original post stated that “Free, competitive markets don’t drive down worker wages, as Marx argued.”

“Instead,” the post continued, “markets drive up wages because entrepreneurs must bid against one another to acquire and retain the workers they need.” This led me to my ultimate point that “government intervention that limits competition will repress wages.”

The take-home point, of course, was to illustrate that when the state interferes with the market, it harms the working class, in contrast to the claims of statist interventionists who insist their interventions are designed to help the common man.

That conclusion was unacceptable for our friendly social media Marxist, however. He replied with the following comment:

This is a common objection raised by Marxists, and a pillar of Marx’s economic analysis. Marx claimed that competitive, capitalist market economies would drive down workers’ wages for the reasons stated above: namely, that workers outnumber the available jobs, putting them at a grave disadvantage and forcing them to accept ever-dwindling wages if they are to be hired. Employers have no reason to pay anything more than “starvation wages” because the supply of willing and desperate workers far outstrips the supply of jobs.

Employers can drop wages as far as they want and still find willing takers, according to the theory, because if one person refuses to work for such little pay, there will be a significant pool of desperately unemployed people willing to accept the crumbs. This tendency of competitive capitalist markets to drive wages down to bare subsistence levels is often referred to as the “Iron Law of Wages.”

This is an argument still relevant to policy discussions today, so it’s important to address why this argument is wrong.

What Determines Wages?

For clarity, it is important to first gain an understanding of what determines wages. A highly useful insight can be gained from political scientist David Osterfeld’s essay in the 1993 book Requiem for Marx, edited by Yuri Maltsev:

Wage rates on the unhampered market do not depend on the individual worker’s ‘productivity’ but on the marginal productivity of labor. And the marginal productivity of labor is a product of savings, on the one hand, which creates additional capital and, on the other, entrepreneurial activity which directs this additional capital into the production of those goods and services most urgently desired by consumers.

The marginal productivity of labor, in short, is the value added to the production process by the next additional worker. And, as Osterfeld notes, the value added by that next worker will be determined largely by the amount and type of capital goods, i.e., machines, tools, and technology, provided by the entrepreneur to aid the worker.

More specifically, wages will tend toward the present value of the worker’s marginal productivity. Say, for instance, an hour’s work adds $20 toward the value of a finished product that will likely be sold one year from now. That worker’s hourly wage today will tend toward the present value of that $20 received one year from now.
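
As a rough numerical sketch of that discounting, consider the following minimal calculation; the 5 percent interest rate is purely illustrative and not a figure from the article:

```python
# Minimal sketch of the present-value reasoning above.
# Assumption (not from the article): a 5% annual interest (discount) rate.
marginal_product = 20.00  # dollars of value added per hour, realized one year from now
interest_rate = 0.05      # hypothetical annual discount rate

present_value = marginal_product / (1 + interest_rate)
print(f"Hourly wage tends toward roughly ${present_value:.2f}")  # ~$19.05
```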

Importantly, the marginal productivity of labor in one line of industry heavily influences wage rates in other industries. In an unhampered, competitive market, labor will flow toward the highest-paying work for which workers are qualified.

This means, as Ludwig von Mises pointed out in his 1956 book The Anti-Capitalist Mentality, that improvements in the marginal productivity of labor in some lines of industry that drive up wages will spur wage increases in seemingly unrelated industries.

Mises noted that there are many jobs in which the productivity of the worker has remained unchanged for centuries. For the barber, the butler, and basic agriculture work, for instance, worker output has remained roughly the same for generations, but wages have nevertheless risen.

Mises observed, “the wage rates earned by such workers are today much higher than they were in the past. They are higher because they are determined by the marginal productivity of labor. The employer of a butler withholds this man from employment in a factory and must therefore pay the equivalent of the increase in output which the additional employment of one man in a factory would bring about.”

In short, because marginal productivity in other lines of work has increased and driven up those wages, employers must increase their pay in order to keep workers from seeking the alternative lines of work that pay more. As the opportunity cost to workers rises, i.e., the wages they pass up by staying in their current job, so too do their actual wages.

As a result, the competitive market has a tendency toward driving up wages, even for lower-skilled sectors of the labor market.

Lump of Labor Fallacy

A fallacy many fall prey to when believing that the wages of the average worker will be driven down to subsistence levels is the “lump of labor fallacy.” This fallacy is based on the faulty premise that the number of jobs in the economy is fixed.

Indeed, viewing the labor market through the lens of supply and demand, while constrained by the lump of labor fallacy, may lend the “iron law of wages” theory surface plausibility. Outside of those with skills highly demanded by the marketplace, the argument goes, the majority of average Joes and Janes would be forced to accept low and dwindling wages because jobs are scarcer than workers.

Of course, in a competitive market entrepreneurs are constantly looking to either expand or start new ventures, creating new job opportunities. Employers must compete with each other for the needed labor for their endeavors.

As Osterfeld observed, “Wages rise because, in order to take advantage of new profit opportunities provided by additional capital, entrepreneurs must bid workers away from their current positions.”

What those limited by the lump of labor fallacy further overlook is that demand for labor is heightened not only by business expansion and new market entrants, but also by the threat of new entrants in the market who may come along and bid away workers.

Reality Shows That Wages Continue to Rise

It would of course be naïve to claim that employers increase wages out of the goodness of their hearts. As Mises observed in his book Human Action, “Each entrepreneur is eager to buy all the kinds of specific labor he needs for the realization of his plans at the cheapest price.”

These wages, however, “must be high enough to take the workers away from competing entrepreneurs,” he added. In other words, the competition of an unhampered market more than offsets the employers’ desires to drive wages lower.

As evidence, consider that even with an extremely hampered economy in the U.S., wages for the average worker have steadily risen.

According to this recent Bureau of Labor Statistics report, average employer costs per hour worked (wages plus benefits) for all private industry workers rose from $23.29 to $35.34 between 2004 and 2020, a period that included one of our nation’s harshest recessions (see p. 191 of the report; figures are taken from March of each year). That rise marked an increase of 51.7%, well ahead of the 37.2% inflation rate over the same period.
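
As a quick check of that arithmetic, using only the figures cited above:

```python
# Verifying the percentage increase cited above from the BLS figures in the text.
cost_per_hour_2004 = 23.29  # March 2004, dollars per hour worked (wages plus benefits)
cost_per_hour_2020 = 35.34  # March 2020, dollars per hour worked (wages plus benefits)

pct_increase = (cost_per_hour_2020 - cost_per_hour_2004) / cost_per_hour_2004 * 100
print(f"Nominal increase: {pct_increase:.1f}%")  # ~51.7%, vs. 37.2% inflation over the same period
```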

Clearly, the ‘iron law of wages’ is being broken.

Minimum Wage

If my Marxist critic were correct, and the average worker has such little bargaining power, then wouldn’t a significant share of workers be working for the minimum wage?

As Marx wrote in the Communist Manifesto, “The average price of wage labor is the minimum wage, i.e., that quantum of the means of subsistence which is absolutely required to keep the laborer in bare existence as a laborer.”

Today, instead of a measure of “subsistence” to which average wages would fall, there is a government-mandated legal minimum wage.

Aside from those with highly valued skills who will see their wages bid up, the majority of workers will have to compete for the remaining jobs. Thus, according to Marxian theory, there’d be no reason for employers to pay any more than the legally mandated minimum for a vast number of jobs, because the number of those competing for such jobs would far outstrip the jobs available.

Of course, reality once again serves to dispel this faulty notion. This 2019 Bureau of Labor Statistics report shows that just 2.1 percent of hourly workers age 16 and over earn “wages at or below the federal minimum.” Of those, 47 percent were ages 16 to 24. So indeed, only a very small fraction of workers is compelled to work for the minimum wage, and of those, nearly half are young people likely in their first job.

Conclusion

One of the key criticisms of free market capitalism is that workers have little to no bargaining power, because the supply of labor exceeds the demand. As such, employers can leverage this advantage to steadily grind wages down to a bare subsistence minimum.

This theory, however, suffers from some major shortcomings. First is the fact that workers have opportunity costs, and employers need to pay wages sufficient to keep them from seeking work in alternative fields. So the increase in marginal productivity even in only select industries can still provide overall upward pressure on wages.

Secondly, the argument suffers from the lump of labor fallacy, which falsely assumes the number of jobs in the economy is fixed. Expansion of current businesses and new business startups is a regular feature of a market economy, however, which grows the number of jobs available.

Contrary to Marx and modern-day Leftists, it is not competition of the market that drives down wages, but restrictions on competition. For instance, barriers to entry for new entrants and other market interference that protects incumbent firms from competition would make it easier for them to pay lower wages.

An unhampered, competitive market economy is the working man’s best friend, and government interference his enemy.

Bradley Thomas is creator of the website Erasethestate.com and author of the book “Tweeting Liberty: Libertarian Tweets to Smash Statists and Socialists.” He is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter: @erasestate.

Reboot Government? No, Dismantle It

Imagine my intrigue at seeing this subtitle kicking off a recent op-ed in USA Today:

Why do Americans hate Washington? One reason is that it makes us feel powerless.

Americans hating Washington? An article exploring how the state makes its citizens feel powerless?

Sign me up.

It didn’t take long, however, for me to be disappointed.

The article started out promising, by citing “discontent” toward government coming from “both sides.”

“A survey in 2018, for example, found that almost two-thirds of Americans favored ‘very major reform’ of government, almost double from 20 years ago,” the article noted.

“Political leaders sow division,” it continued.

Bingo.

The article next itemized some obstacles to solving some of society’s pressing problems. Police unions stand in the way of firing bad cops. Red tape gummed up the response to the spread of the coronavirus.

Disappointingly, however, the article quickly squandered an opportunity to educate readers on how a significant reduction in state power and influence would be the best recipe for healing much of society’s division. Instead, it hit readers with this line:

“Americans need to feel that government can make things better.”

This turn for the worse should have come as no surprise, given that the author of the piece, Philip K. Howard, is the head of an organization called the “Campaign for the Common Good.”

Any group or person claiming to be working toward the “common good” immediately should raise a red flag for libertarians. Of course, we know there is no such thing as the “common good,” but rather an extremely diverse set of individuals with varying wants and differing plans on how to achieve happiness.

So, how do we achieve this common good and overcome citizens’ sense of powerlessness, according to Mr. Howard?

“Let people take responsibility again. Give officials and citizens alike goals and guiding principles, and then let other people hold them accountable,” he recommends.

Who should “give” officials and citizens their goals and guiding principles goes unanswered.

Moreover, Howard states that overcoming powerlessness involves “letting” government officials do their job, including suggestions such as “Let local public health officials respond immediately to the pandemic,” “(L)et designated officials issue infrastructure permits after reasonable review,” and “(L)et teachers maintain order in the classroom.”

Such suggestions may be great for government officials, but they don’t do much for citizens. Indeed, his suggestions further entrench the state as society’s problem-solver, removing opportunities for free citizens to responsibly solve problems through voluntary cooperation.

To his credit, Howard criticizes the government’s overly-complex law books that allow for little discretion on the part of public officials or citizens, and calls for a process of simplifying and stripping them down.

He supports this notion, however, in part because it “reactivates our link to government.”

The last thing we need is a greater link to the oppressive and divisive leviathan government.

“Governing isn’t this hard,” Howard assures us. “America needs a new public operating system that re-empowers people with responsibility to deal sensibly with the situation before them.”

But a “new public operating system” will still carry with it the immoral baggage of the old one. It will be funded by taxes stolen from citizens. Its decrees will still be enforced by threats of force by an organization with a monopoly on violence.

Howard is focused largely on making government more efficient in carrying out its functions, and is uninterested in limiting its size and scope. This won’t reduce the division he expressed concern over. The state sows division because it forces some to fund others through welfare and wealth redistribution schemes, and it compels people with vastly different preferences to live under the same arbitrary rules having nothing to do with protecting people’s person and property.

Amazingly, Howard concludes with the statement, “The best cure for alienation is ownership.”

But that starts with self-ownership, not reducing red tape to allow government agents to more swiftly enforce their decrees or spend stolen taxpayer money.

Citizens feel powerless because the state is empowered to initiate force against them, with no repercussions. Powerlessness is not felt because government contractors are slowed from starting their tax-funded projects due to red tape.

Mr. Howard largely gets the diagnosis right. More people are getting frustrated with government and recognizing it as a source of division. Disappointingly, he gets the cure wrong.

Instead of a “reboot” of government, society needs a radical rollback of its power.

Bradley Thomas is creator of the website Erasethestate.com and author of the book “Tweeting Liberty: Libertarian Tweets to Smash Statists and Socialists.” He is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter: @erasestate.

What Paul Krugman Gets Wrong About The $600 Unemployment Bonus

The federal government’s program of supplemental unemployment benefits of up to $600 per week, as provided for in the CARES Act, is set to expire at the end of July.

Whether or not to extend this program is setting up to become a contentious political battle mere months before this fall’s national election.

But what of the economic debate?

Keynesians like Paul Krugman who support the extension of the benefits focus on getting money in the hands of people most likely to spend it—boosting ‘aggregate demand.’

On Twitter, Krugman insisted the economic shutdown was “annoying but sustainable,” and added there are “no financial constraints” on government borrowing money to plug holes in the safety net, presumably including a continuation of the supplemental unemployment benefits.

[Embedded tweet from Paul Krugman]

To Krugman, a significant and extended period of diminished production (due to the shutdown) is sustainable via enhanced government benefits to maintain sufficient levels of consumer demand.

But as economist Per Bylund quickly noted, “You cannot eat money. And you cannot buy what isn’t being produced. Production precedes consumption.”

[Embedded tweet from Per Bylund]

Indeed, the economic argument for the perceived benefits of extending supplemental unemployment assistance to stimulate aggregate demand stands on very shaky ground.

Even granting the assumption that the unemployed will spend all or most of the supplemental benefits on consumer goods, consumption unbacked by previous production merely represents capital consumption.

To illustrate, take the example of the food and agriculture industries. Let us assume that the unemployed spend their enhanced benefits on groceries. For simplicity’s sake, let’s further assume all the groceries come from agriculture.

But where would the money come from that’s dispensed to the unemployed?

The financing of the enhanced unemployment benefits, as encouraged by Krugman, would come from funds borrowed by the government. The money lent to the government would necessarily come out of the economy’s pool of savings. The unemployment benefits, therefore, represent a shift in resources from savings to consumption.

In this case, the shift in wealth from savers to consumers means less saving is available to finance farmers’ investment in capital equipment like tractors and irrigation systems. As productive capital goods wear out, capital consumption ensues. The farmers’ productive capacity is diminished. Now multiply this agriculture example across the entire economy.

Sustainable employment and economic growth rely on steady investment in capital goods. By directly financing consumer spending with borrowed funds, the government is financing the bidding away of scarce resources from the capital goods sector and, in turn, funding the consumption of capital.

As the late economic historian John Chamberlain stated, “There is no political alchemy which can transmute diminished production into increased consumption.”

Without the productive capacity to meet increases in consumer demand, price inflation results as more consumer dollars chase an output of finished goods that can’t keep up. Rapidly rising prices for household staples and a diminishing stock of capital goods slowing down output are not “sustainable,” contrary to what Krugman would like you to believe.

And what about when the timing is right to fully reopen the economy, end the supplemental unemployment benefits and try to get people back to work?

Unfortunately, because of the capital consumption encouraged by Krugman, recovery will be severely hampered and jobs hard to come by.

A diminished stock of worn-out capital goods is not the foundation upon which economic recovery is built. And the savings needed to replenish and expand the economy’s structure of production will have been depleted by the massive shift from savings to consumption brought about by the government’s supplemental benefits.

One may support extending the supplemental unemployment benefits on the grounds of providing temporary aid to those impacted by the shutdown. But economic arguments presented by the likes of Krugman claiming a prolonged shutdown and indefinite extension of benefits are sustainable because there are “no financial constraints” on the government are pure nonsense.

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter: @erasestate.

Exposing Jerome Powell’s Lies About the Fed and Inequality

If the heads of the Federal Reserve are to be believed, Fed policies do not make wealth inequality worse.

When asked recently if the Fed’s policies widen inequality, San Francisco Federal Reserve President Mary Daly stated without reservation: “Not in my judgment.”

Previously, at the end of May, Fed Chairman Jay Powell was less forceful in his response but nevertheless danced around the question of whether Fed policy increases inequality. “Everything we do is focused on creating an environment in which those people will have their best chance to keep their job or maybe get a new job,” was his response.

Of course, we know Powell and Daly are lying.

How the Fed Benefits the Investor Class

Austrian school investor Jesse Colombo writes at his site explainingcapitalism.org, “the Fed and the ‘paper’ dollar are the main reasons for America’s growing economic inequality.”

Why is this so?

“In simple terms,” Colombo explains, “inflation benefits the rich while hurting the middle class and poor due to the way each group’s finances are structured.”

In short, the rich receive a significant share of their income from investments, while the middle class primarily relies on their income from labor, and the poor a combination of labor income and government welfare payments.

When the Fed creates new fiat money out of thin air, it isn’t distributed evenly throughout the economy. Instead, it is inserted at specific points, typically via credit to business investors. As the Fed inflates a bubble, speculation with the new money also increases—which inflates the stock market and other major asset classes like housing, benefitting the investor class.

To see just how acute the rise in asset values for the investor class has been, consider that massive fiat money printing has helped the S&P 500 balloon by more than 360% over the last 30 years, a nearly five-fold increase, with the index more than doubling in the last ten years alone.

Moreover, median home values have nearly tripled over the last 30 years, far surpassing the rate of inflation.

The overwhelming majority of these benefits accrue to a small group of investors.

This June 2 article on quartz.com reported that the “wealthiest 10% of U.S. households owned about 83%” of stock market wealth, according to a 2016 Federal Reserve Bank of St. Louis report.

“The richest one percent of Americans now account for more than half the value of equities owned by U.S. households, according to Goldman Sachs,” reported this February 2020 Financial Post article. Conversely, the bottom 90 percent of households owned just 12 percent of stock market wealth.

Additionally, rapidly rising home prices put homeownership out of reach for more and more people. “Homeownership is increasingly out of reach for the typical American,” Redfin Chief Economist Daryl Fairweather said in this 2019 HousingWire.com article. “Over the last few years builders have focused on luxury homes, and there hasn’t been enough construction of affordable starter homes.”

After peaking in 2006 before the Great Recession, the overall homeownership rate fell from a high of 69 percent to 63 percent in 2016. Ownership rates have been climbing again in recent years, but the gains from rising home values nevertheless accrue only to those who can afford a home, and most acutely to those in more expensive houses. Meanwhile, non-homeowners and those with lower-priced homes fall further behind.

Racial Wealth Gap

With a sharper and more critical eye being focused on racial issues—and the racial wealth gap in particular—due to recent events, the Fed is due its fair share of blame in this realm as well.

For starters, the benefits of rising home prices fueled by easy Fed money can only benefit actual homeowners. And, according to this February 2020 Urban Institute paper, “the gap between the black and white homeownership rates in the United States has increased to its highest level in 50 years” in 2017.

The white homeownership rate stood at 71.9 percent, compared to just 41.8 percent for blacks.

Furthermore, Federal Reserve data analyzed at capitalist.com shows that 61 percent of white households own publicly traded stock compared to just 31 percent of black households.

Even in middle and upper class households, the discrepancy persists. A March 2019 Investor’s Business Daily article reported that “A 2015 survey by Ariel asked Americans with household income of at least $50,000 whether they owned stocks or stock mutual funds. Eighty-six percent of whites said they did. For African-Americans, the number was 67%.”

In short, as Fed easy money policies benefit stockholders and homeowners, a disproportionate amount of those benefits are going to white households, further exacerbating the racial wealth gap.

Conclusion

There’s little doubt that the Federal Reserve not only increases wealth inequality overall but also deepens the racial wealth gap. The easy money policies of the last decade, as the nation attempted to recover from the Great Recession, provide a prime example.

As this 2019 MarketWatch.com article noted, “the Fed lowered interest rates, which had the knock-on effect of pushing easy money into the hands of the already-wealthy.”

As Torsten Sløk, chief economist at Deutsche Bank Securities, said, “The response to the financial crisis was for the Fed to lower interest rates which in turn pushed home prices and stock prices steadily higher over the past decade.”

Like the old state lottery ads used to say, “Lotto: You’ve got to be in it to win it.”

Similarly, to “win” benefits from Federal Reserve easy money policies, you’ve got to already be in the stock and homeownership game, i.e. the investor class.

It’s beyond disingenuous for the likes of Powell and Daly to claim the Federal Reserve doesn’t increase inequality. Any discussion of wealth inequality—be it overall or the racial wealth gap—is incomplete without a discussion of the Fed.

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter: @erasestate

If I Were a Racist…

Protests across the nation following the murder of George Floyd have inspired discussions beyond just police brutality, shining a spotlight on issues like “social justice” and “systemic racism.”

But the divisive rhetoric on racism serves to distract from the deeper problem: statism.

If I were a racist, I would support policies that negatively impact minorities. Anything that winds up making their lives and socioeconomic condition worse off would get my approval. On that score, big government could serve as a shining example with a record of harming minorities any racist would envy.

The Welfare State

For starters, if I were a racist I would look at the results of the huge expansion of the welfare state with glee. It’s rumored that President Lyndon Johnson bragged, “I’ll have those niggers voting Democratic for the next 200 years” as a result of his passage not only of civil rights legislation but his massive ratcheting up of the welfare state known as the “Great Society.”

Whether or not Johnson uttered those actual words is immaterial; they were consistent with his racist tendencies. Much more important, however, is that the results have reflected the sentiment behind the alleged quote: a growing dependency of the black community on government programs, leading to a devastating breakdown of the black family and, in turn, a deepening cycle of poverty.

Poor and dependent people will reliably vote for the party promising to continue and increase the flow of benefits.

Welfare programs championed by Johnson and progressives break up families by replacing a father’s paycheck with a government check and benefits. Nationally, since LBJ’s Great Society ratcheted up government welfare programs in the mid-1960s, the rate of unmarried births has tripled.

This effect has been especially acute in black families, as more than 70 percent of all black children today are born to an unmarried mother, a three-fold increase.

According to 2017 American Community Survey data produced by the U.S. Census Bureau, only 5.3% of families headed by a married couple live in poverty nationally, compared to 28.8% of households with a “female householder, no husband present.”

In other words, single-mother households are more than five times as likely to be in poverty as households with both parents. Largely as a result of the breakdown of the black family, 20 percent of blacks live in poverty, more than twice the rate of whites (8 percent).

As economist Thomas Sowell once wrote, “The black family survived centuries of slavery and generations of Jim Crow, but it has disintegrated in the wake of the liberals’ expansion of the welfare state.”

Indeed, if a group of racist Klan members had conspired to develop a plan to impoverish black households, they could not have done much better than the exploding welfare state.

The Minimum Wage

If I were a racist, I would want to see to it that young black people coming from broken, low-income homes have a harder time entering the workforce, making it more difficult to escape poverty.

The minimum wage accomplishes that.

The economic lesson is obvious: artificially increasing the wage employers must pay reduces the demand for low-skilled workers while drawing more prospective workers to compete for the remaining positions. Low-skilled labor is priced out of the workforce as a result.

History has shown that black teenagers are hit the hardest by minimum wage hikes.

Research by Sowell underscores this point: “Unemployment among 16 and 17-year-old black males was no higher than among white males of the same age in 1948. It was only after a series of minimum wage escalations began that black male teenage unemployment rates not only skyrocketed but became more than double the unemployment rates among white male teenagers.”

Indeed, there is ample research showing that the minimum wage’s origin was inspired by racism. Such historic facts led economist Walter E. Williams to label the minimum wage “one of the most effective tools in the arsenal of racists everywhere in the world.”

Putting the first rung of the career ladder out of reach to young blacks is a great way to frustrate them and push them towards either a life of crime or government dependency. Far too many end up hopeless in prison or in the ghetto—right where racists want them.

Gun Control

Seeing to it that more blacks are stuck in a cycle of government dependency and hopelessness, and packed in close quarters in inner cities, I’d be pretty confident that those inner cities would have high rates of violent crime.

So if I were a racist, I’d want to take away the right to legally defend oneself by imposing strict gun control laws. This way, the honest citizens living in the violent inner cities would have no way to defend themselves against the criminals.

As Maj Toure of Black Guns Matter says, “All gun control is racist.”

Research on the history of gun control laws strongly suggests racist motives compelling these restrictions for hundreds of years. According to the website firearmsandliberty.com, “The historical record provides compelling evidence that racism underlies gun control laws—and not in any subtle way. Throughout much of American history, gun control was openly stated as a method for keeping blacks and Hispanics ‘in their place,’ and to quiet the racial fears of whites.”

One of the top priorities of the Ku Klux Klan after the Civil War was to enact laws barring gun ownership by the freedmen, making it all the easier to terrorize them.

Today, however, there’s no need to put on a white hood and lynch anybody, just see to it that blacks are defenseless and let the criminals handle the rest.

School Choice

If I were a racist, I’d want to block any attempt to make better educational opportunities available for minorities. The government indoctrination centers known as public schools are not only systemically incapable of providing high quality education for children, they have especially failed minority kids.

As Walter E. Williams has written, “the average black 12th-grader has the academic achievement level of the average white seventh- or eighth-grader. In some cities, there’s an even larger achievement gap.”

The ultimate goal, of course, is to separate school and state (and eliminate the state altogether). But short of that, we need to shift more control over educational choices out of the hands of politicians and bureaucrats and into the hands of parents and families.

Such policies are highly popular among minority families. Indeed, a 2018 national survey by Education Next found that Hispanic (62%) and black (56%) respondents expressed far higher support for school choice initiatives targeted to low-income families than whites (35%).

Results like this suggest that low-income, minority families recognize the status quo is not working, and they are craving policies that would enable them to access other educational options.

Those opposing policies that would provide minorities such options may not be motivated by racism, but one would be hard pressed to say how their actions would be different if they were.

War on Drugs

If I were a racist, I would no doubt enjoy the results of the government’s failed “war on drugs.” The war on drugs has put thousands of minorities in prison for crimes emerging from the government’s attempt to dictate to citizens what they can or cannot put in their own bodies. According to the Drug Policy Alliance, “Nearly 80% of people in federal prison and almost 60% of people in state prison for drug offenses are black or Latino.”

Moreover, the Drug Policy Alliance notes “2.7 million children are growing up in U.S. households in which one or more parents are incarcerated. Two-thirds of these parents are incarcerated for nonviolent offenses, including a substantial proportion who are incarcerated for drug law violations.” The drug war, like the war on poverty, is a major factor in fatherless homes in the black community.

The war on drugs has devastated minority communities, and its enforcement has greatly increased the number of confrontations between police and minorities, which in turn increases the opportunity for police brutality.

Conclusion

Big government has arguably been the biggest enemy of minorities. Indeed, an examination of the results of government control and intervention looks an awful lot like something racists would support.

Instead of pitting white against black to divide us, to achieve more justice for minorities we must instead focus our energy on dismantling the state.

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter, @erasestate.

The State’s Priority Is Protecting Itself, Not You

Murray Rothbard pointed out in his book Anatomy of the State that the state is far more punitive against those who threaten the comfort and authority of government institutions and workers than it is against those who commit crimes against private citizens.

This, according to Rothbard, exposed as a myth the notion that the state exists to protect its citizens.

“We may test the hypothesis that the State is largely interested in protecting itself rather than its subjects by asking: which category of crimes does the State pursue and punish most intensely—those against private citizens or those against itself?” Rothbard wrote.

“The gravest crimes in the State’s lexicon are almost invariably not invasions of private person or property, but dangers to its own contentment, for example, treason, desertion of a soldier to the enemy, failure to register for the draft, subversion and subversive conspiracy, assassination of rulers and such economic crimes against the State as counterfeiting its money or evasion of its income tax.”

Boy how recent events have proven Rothbard right.

For weeks, we saw police aggressively pursuing and punishing peaceful people merely violating arbitrary lockdown orders to go surfing, cut hair, or host a child’s play date.

But in the first nights of the George Floyd protests, police allowed rioters to run amok destroying property, with political leaders dismissing the damage as unimportant.

This stark contrast in police responses dramatically underscores Rothbard’s point.

Take the first nights of rioting in Minneapolis. As reported by the Manhattan Institute’s City Journal, Minneapolis Mayor Jacob Frey, the source of the “police stand-down order that allowed his own city to burn,” merely “shrugged off responsibility and minimized the damage.” Moreover, according to the report, “Frey kept repeating that the destruction was ‘just brick and mortar.’”

And consider the example of Raleigh, North Carolina Police Chief Cassandra Deck-Brown, who said:

When the greater risk is of injury to the officer, and I had five injured last night – a building? A window? A door? The property that was in it can easily be replaced. But for a person who has had officers shot. And more recently than not, I will not put an officer in harm’s way to protect the property inside of a building. Because insurance is most likely going to cover that as well but that officer’s safety is of the utmost importance.

Got that? The officer’s safety is the primary concern, not the property of citizens. Agents of the state whose sole job is supposedly to protect the people and their property instead refuse to do their job at the first hint of danger.

Worse still, as Ryan McMaken pointed out in a recent article at Mises.org, “A failure to protect taxpaying citizens from violence and crime in a wide variety of situations is standard operating procedure for police departments that are under no legal obligation to protect anyone, and where ‘officer safety’ is the number one priority.”

McMaken further notes that it is “now a well-established legal principle in the United States that police officers and police departments are not legally responsible for refusing to intervene in cases where private citizens are in imminent danger or even in the process of being victimized.”

Police absence during riots is nothing new. As McMaken wrote: “During the 2014 riots that followed the police killing of Michael Brown, for example, shopkeepers were forced to hire private security, and many had to rely on armed volunteers for protection from looters. ‘There’s no police,’ one Ferguson shopkeeper told Fox News at the time. ‘We trusted the police to keep it peaceful; they didn’t do their job.’”

As the violence of the riots intensified, mayors instructed police forces in cities across the nation to step up their presence.

But their initial reactions are the most telling.

The contrast between police actions against peaceful lockdown “violators” and against the rioters is striking. The instinct of the political class was to haul mothers in parks and hair stylists away in handcuffs while standing down and allowing private property owned by citizens to burn.

The former involved disobeying a government order, an act which would threaten the perceived authority, no matter how arbitrary, of the state. The latter involved violation and destruction of citizens’ property.

As Rothbard would have predicted, the state was far more interested in preserving the illusion of its authority than the property of its citizens.

Putting a tragic but fine point on Rothbard’s argument: George Floyd was killed by a police officer who knelt on his neck while detaining him for the “crime” of allegedly using a counterfeit $20 bill to buy cigarettes.

The state is not us. It does not exist to protect our person or property. It exists first and foremost for its own benefit and to exert power and control over its subjects.

Events of the past several weeks should make this crystal clear.

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on twitter, @erasestate

The Statist Origins of Modern Health Insurance

With roughly 36 million people across the country having filed for unemployment in the last two months in the wake of the coronavirus shutdown, one topic receiving more scrutiny from some quarters is employer-based health insurance.

With so many laid off temporarily or permanently out of work, there is increasing concern about how many will end up uninsured, because in losing their jobs they also lost their source of health insurance.

About half of Americans receive their health insurance through an employer-sponsored plan, which means the recent layoffs could potentially swell the ranks of the uninsured by 18 million.

Concern over this trend has prompted a growing chorus of opposition to America’s heavy reliance on employer-sponsored insurance. For example, Rep. Ilhan Omar’s tweet below had garnered more than 76,000 likes at the time of this writing.

[Embedded tweet from Rep. Ilhan Omar]

This raises the question, however: Why is health insurance tied so closely to employment in the first place?

The answer should come as no surprise to readers of this site: government intervention.

As this 2017 New York Times article describes, when we look back a hundred years ago, “Most insurance in the first half of the 20th century was bought privately, but few people wanted it.”

Few people wanted insurance because there was not much medical care that needed to be insured.

The medical treatment and technology available at the time were very limited. But as doctors learned to treat more illnesses and medical technology advanced, the healthcare industry likewise began to expand, bringing with it a growing number of procedures and treatments to be paid for.

In response, hospitals formed Blue Cross in 1939 as an insurance pool to help patients pay for treatment, and physicians formed Blue Shield at about the same time.

A gradual increase in insurance coverage followed.

Then, as the Times reported, “Things changed during World War II.”

“In 1942, with so many eligible workers diverted to military service, the nation was facing a severe labor shortage. Economists feared that businesses would keep raising salaries to compete for workers, and that inflation would spiral out of control as the country came out of the Depression.”

In response, President Roosevelt signed Executive Order 9250, establishing the Office of Economic Stabilization. This order, among other things, froze wages. “Businesses were not allowed to raise pay to attract workers,” the Times noted.

Progressives and anti-capitalists would have you believe that this situation would be perfect for greedy business owners. The executive order would give them cover for what they supposedly want to do anyway: exploit workers and pay them slave wages.

But reality has a way of bursting progressives’ ideological bubbles.

Instead of gleefully colluding to keep worker compensation suppressed, businesses instead “began to use benefits to compete. Specifically, to offer more, and more generous, health care insurance,” the Times reported.

“Then, in 1943, the Internal Revenue Service decided that employer-based health insurance should be exempt from taxation. This made it cheaper to get health insurance through a job than by other means,” the Times continued.

If an employer could choose between paying a worker, say, an additional $10,000 in salary or paying $10,000 toward that worker’s health insurance premiums tax free, there is a significant incentive for the employer to opt for the insurance coverage.

And even if the employer simply provides the option of enrollment in an employer-provided plan, and requires the worker to pay for those premiums, the employee gets to do so with pre-tax income. There is still strong incentive for the employer and employee to accept insurance coverage in lieu of higher salary.

As University of Alabama-Birmingham health economist Michael A. Morrisey explains, “employers and their employees have a strong incentive to substitute broader and deeper health insurance coverage for money wages. Someone in the 27 percent federal income tax bracket, paying 5 percent state income tax and 7.65 percent in Social Security and Medicare taxes, would find that an extra dollar of employer-sponsored health insurance effectively costs him less than sixty-one cents.”
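
To make the arithmetic behind that “sixty-one cents” figure explicit, here is a minimal sketch using only the tax rates Morrisey quotes (it ignores refinements such as the deductibility of state taxes):

```python
# Effective after-tax cost of one dollar of employer-sponsored insurance,
# using the rates quoted by Morrisey above.
federal_income_tax = 0.27  # federal income tax bracket
state_income_tax = 0.05    # state income tax
payroll_tax = 0.0765       # Social Security and Medicare (employee share)

effective_cost = 1 - (federal_income_tax + state_income_tax + payroll_tax)
print(f"Effective cost per $1 of insurance: ${effective_cost:.4f}")  # ~$0.60, i.e. less than 61 cents
```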

Roosevelt’s order let the employer-sponsored health insurance genie out of the bottle. And employer-sponsored insurance coverage growth was the driving force in a major increase in overall insurance coverage. As the Times reported, “In 1940, about 9 percent of Americans had some form of health insurance. By 1950, more than 50 percent did. By 1960, more than two-thirds did.”

The stronger the tie between employment and health insurance, the more significant becomes the issue of “job lock.”

This is a situation in which workers fear losing or leaving their job because it also means losing their health insurance coverage, which in turn could mean losing access to their preferred doctor.

Which brings us back to the current situation.

The concern about “job lock” and the close connection between employment and health insurance is legitimate, and has certainly been highlighted by the current crisis.

It’s surely not a stretch of the imagination, however, to conclude that the solution favored by progressives like Omar is to transition to a government-run, single-payer scheme like Medicare for All.

But as this debate heats up as more lose their jobs, it is important to understand why health insurance is so closely tied to employment in the first place. The fact that leftists desire to ‘fix’ a problem created by government intervention with still further government control is an irony apparently overlooked.

Ludwig von Mises warned us nearly a hundred years ago that government intervention begets more intervention.

As he wrote in 1929, “isolated intervention fails to achieve what its sponsors hoped to achieve. From their point of view, intervention is not only useless, but wholly unsuitable because it aggravates the ‘evil’ it meant to alleviate.”

Once the interventionists realize their interventions made things worse, Mises argued, they are “not inclined” to remove the initial intervention, but rather seek to address the new problems with still more interventions. The new interventions create new problems, and the cycle repeats, ad nauseam.

His words ring true now more than ever. They should not be ignored.

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on twitter, @erasestate

Coronavirus Lockdown: The Political Versus The Voluntary

The government lockdown is mandatory. Re-opening is not.

The contrast is stark and worth exploring as it underscores the difference between political and voluntary means of organizing society.

Most states have imposed mandatory “stay at home” or “shelter in place” orders, confining citizens to house arrest, save for ‘essential’ purposes like buying food or prescription medicine. This represents the political means of organizing society. These are typically one-size-fits-all orders from state governors that allow for no leeway based upon each state’s widely diverse population demographics and densities.

Rural, sparsely populated counties are treated the same as highly concentrated big cities. Children, who face virtually no risk of serious illness from the coronavirus, are forced to abide by the same rules as older, much more vulnerable populations with underlying conditions.

Disobedience is punished harshly.

Police, who are trained to “just follow orders” rather than exercise rational discretion, have been caught on video enforcing these rules in ways that would be comical if they weren’t so tragic. Social media has been filled with viral examples like police chasing down an individual surfing at a California beach, a mother being handcuffed for taking her children to a public park, and most recently Wisconsin police harassing a mother for having the gall to allow her child to play outside with a friend at her neighbor’s house.

Bear in mind, in this “democracy” that we live in, nobody had the chance to vote on whether we would all be involuntarily confined to our homes and universally assumed to be a threat to the health of others, all without any due process. Our rulers once again unilaterally changed the terms of the alleged “social contract” without so much as the appearance of getting the consent of the governed.

Conversely, those states that are now lifting their restrictions are allowing certain businesses the option of opening back up, and likewise allowing consumers the option of frequenting certain establishments. Granted, certain social distancing guidelines and other restrictions remain in place, but the choice is left to the business owners and consumers – the citizens – to evaluate their risk in so doing.

This represents, albeit far from perfectly, a glimmer of how a society based on voluntary means is organized. Businesses are not forced to re-open. They can make that decision themselves.

Indeed, in the state of Georgia, where the governor has once again allowed restaurants to re-open for dine-in eating, a group of more than 50 restaurant owners in Atlanta and Savannah have publicly announced their decision to remain closed to dine-in customers.

“We agree that it’s in the best interest of our employees, our guest, our community, and our industry to keep our dining room closed at this time,” their statement reads.

Fine for them. It is their property, and they are well within their rights to keep their dining rooms closed to the public. But it is their choice.

They didn’t have a choice in closing up their dining rooms in the first place, however. That choice was forcibly taken from them by the governor.

Dramatically symbolizing the ideological difference between the political and voluntary means of organizing society was a widely-viewed interview between CNN’s Anderson Cooper and Las Vegas Mayor Carolyn Goodman.

This is certainly not to defend everything Goodman said in the interview, but rather to focus in on one aspect in particular. When asked by Cooper about her desire to re-open her city, and what rules she would impose on the casinos, Goodman responded “That’s up to them to figure out. I don’t run a casino.”

The reaction of Cooper and so many others to the interview was quite telling.

That a politician would publicly proclaim her humility and declare that property owners know how to safely run their own property better than politicians do was beyond the pale to the masses of statist worshippers of so-called “experts.”

Cooper was so stunned he did an on-air double face palm. Social media users and others universally condemned Goodman, calling the interview “bizarre” and “lunacy” and declaring that Goodman had “embarrassed herself.”

Of course, no politician or cable news host knows how to best, and most safely, utilize property better than the property owners themselves.

But those who subscribe to the centralized, top-down political means of organizing society simply could not mentally process such a thought, and anybody straying from their doctrine must be ostracized.

The coronavirus health scare and the government’s reaction have helped to highlight the stark contrast between competing ideologies: those who favor the political means of organizing society versus those who favor the voluntary means.

The political means involves forcible compliance with mandatory, centralized, one-size-fits-all orders, while the voluntary means involves decentralized choices made by the very individuals best positioned to weigh the risks and rewards of their freely chosen actions.

The growing number and size of public protests indicate that more and more people are beginning to recognize the ugly reality of organizing society by political means and to demand instead a free society.

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics. Follow him on Twitter, @erasestate.

Pandemic Hospital Layoffs Reveal the Prevalence of Wasteful Healthcare Spending

Aside from a few hotspots like New York City or Detroit, hospitals across the country are at such low capacities that many are laying off staff and seeing their bottom lines threatened during the current coronavirus pandemic.

For instance, in my home state of North Carolina it was reported: “After hospitals and doctor’s offices across North Carolina canceled nonessential procedures and in-person appointments because of the coronavirus pandemic, many nurses and medical staff were laid off or had their hours reduced.”

“It’s definitely not the situation you might think would happen during a pandemic,” said North Carolina Nurses Association CEO Tina Gordon.

According to an April 14 article in The Guardian, “43,000 healthcare jobs were lost in March 2020” throughout the U.S., and “the HealthLandscape and American Academy of Family Physicians issued a report estimating by June 2020, 60,000 family medical practices will close or scale back, affecting 800,000 workers.”

An April 1 McClatchy article reported that hospitals “are now losing 40% to 50% of their revenue.” Meanwhile, the American Hospital Association, according to the Business Insider, “has sounded the alarm about the industry’s financial difficulties and said that quickly distributing funding from the CARES Act would help facilities keep their doors open.”

It is of course welcome news that hospitals have not been universally overwhelmed during the pandemic.

Some of the downturn in hospital finances is attributable to routine visits being cancelled amid stay-at-home orders along with delays in non-emergent procedures like hip or knee replacement.

The current situation does, however, help underscore a little-discussed cause of the nation’s rise in healthcare costs: unnecessary treatment.

Unnecessary procedures make up one-fourth of healthcare spending

According to the Institute of Medicine, “unnecessary tests, prescription drugs and medication, and extraneous procedures are one of the drivers of healthcare price inflation.”

Just how significant a factor are unnecessary procedures and tests in the healthcare industry?

More than you may think.

According to a February 2018 report by ProPublica, “The waste is widespread – estimated at $765 billion a year by the National Academy of Medicine, about a fourth of all the money spent each year on health care.”

ProPublica described the waste as “one of the intractable financial boondoggles of the U.S. health care system,” in which tons of patients “get lots and lots of tests and procedures that they don’t need.”

“Women still get annual cervical cancer testing even when it’s recommended every three to five years for most women. Healthy patients are subjected to slates of unnecessary lab work before elective procedures. Doctors routinely order annual electrocardiograms and other heart tests for people who don’t need them,” the article continued.

The ProPublica report referenced a study by the Washington Health Alliance, a nonprofit dedicated to making care safer and more affordable, which found “almost half the care examined was wasteful.”

Shockingly, as reported in this Greenimaging.net article, “85 percent of doctors admitted calling for too many tests, 97 percent agreed to personally ordering unnecessary tests.”

The American healthcare system wastes $200 billion per year on unnecessary medical tests alone, according to the Lown Institute.

The Washington Health Alliance study further noted that much of the waste “comprised the sort of low-cost, ubiquitous tests and treatments that don’t garner a second look.” But as Susie Dade, deputy director of the alliance and primary author of the report, noted, “little things add up. It’s easy for a single doctor and patient to say, ‘Why not do this test? What difference does it make?’”

Indeed, this question helps inform us why unneeded procedures and tests are so prevalent.

Rise of third-party payments

To the patient, such unnecessary tests and procedures typically make little or no difference financially.

Steadily rising government intervention into the healthcare industry over the past several decades has created a system in which roughly 90 cents of every healthcare dollar is paid for by a third party.

As noted in this 2017 study by the Mercatus Center at George Mason University, “In 1960, patients controlled how almost 50 cents of each dollar spent on health care was paid. That number is now down to just over 10 cents, with the rest controlled by third-party payers.”

Third-party payers include Medicare, Medicaid, and private insurance coverage. Government programs, of course, require little to nothing financially from enrollees. Meanwhile, government mandates requiring that health insurance plans cover an ever-expanding list of providers and procedures drive up premiums while driving down the cost to patients at the point of care. Patients come to view their insurance as a sort of pre-paid medical card: if I am paying $1,000 a month in premiums, I want to get my money’s worth from my doctor. And if additional unnecessary procedures and tests cost very little or nothing on the margin, why not go ahead with them?

Moreover, because employer-provided health insurance benefits receive favorable tax treatment, the majority of people receive their health insurance through their job. Rather than individuals negotiating the benefits and premiums of their coverage directly with the insurer, employees are covered by plans negotiated between their company’s HR department and the insurance provider.

As a result of government intervention, patients are largely insulated from bearing any cost for wasteful unnecessary procedures and tests. At most, their out-of-pocket charge will be a nominal co-pay, while many procedures – especially if covered by Medicare or Medicaid – will cost them nothing at all.

And such a situation goes beyond a simple “better safe than sorry” scenario, in which the unnecessary procedures at least provide peace of mind with no downside. As unnecessary procedures inflate healthcare costs, more people are priced out of the insurance market while others forgo more urgent care out of fear of unaffordable bills.

Conclusion

The third-party payer system incentivizes massive amounts of wasteful and unnecessary healthcare spending. It costs the patient very little or nothing, while doctors and hospitals can pad their bottom lines by billing government programs or insurance providers.

The current pandemic has put a halt to many of these unnecessary procedures as hospitals focus on coronavirus preparedness. And now hospitals are seeing their finances suffer significantly, revealing just how substantial a factor unnecessary procedures are in the rising cost of healthcare. The best way to significantly reduce such wasteful spending is to peel back the layers of government intervention that encourage it.

How the CARES Act Will Delay Economic Recovery

The economic fallout of the government’s shutdown in response to the coronavirus pandemic has been unprecedented.

Nearly ten million people have filed for unemployment benefits in just two weeks. The 6.6 million claims filed in the last week of March doubled the previous week’s total, and both weeks smashed the previous one-week record of 700,000 claims, set in 1982.

The federal “stimulus” bill, the Coronavirus Aid, Relief, and Economic Security Act (CARES Act), was meant to mitigate the damage of this mass unemployment. But two of its key provisions will serve to prolong the negative economic impact of the shutdown: bailouts for big businesses and a $600-a-week unemployment benefit added on top of state-level benefits for eligible recipients.

The bailout payments to big businesses, like the airlines, not only reward risky behavior but will merely delay the inevitable restructuring that will need to take place.

For instance, American Airlines and Boeing, rather than building up cash reserves during the past ten years of flush economic times, instead leveraged low interest rates (courtesy of mad Fed money printing) to engage in billions of dollars’ worth of stock buybacks to benefit from the stock market bubble. Now, rather than selling stock to raise liquidity as prices tumble, they will rely once again on a taxpayer-funded bailout.

Furthermore, the bailouts will largely just enable big businesses to stay afloat during the remainder of the shutdown, delaying layoffs that will likely be necessary as the travel industry will be slow to recover due to a public remaining uncertain about the health risks of travel. 

So at a time when the economy is attempting to “re-open,” the businesses that had been propped up during the shutdown will need to engage in another round of layoffs, prolonging any recovery efforts. 

Also damaging to the labor market as the economy attempts to re-start will be the enhanced unemployment benefits. 

 “The $600 weekly unemployment compensation boost included in the CARES Act will provide valuable support to American workers and their families during this challenging time,” said Secretary of Labor Eugene Scalia.

Indeed, the financial support will be critical for those laid off through no fault of their own.

Such benefits, however, will significantly hamper any effort to “re-open” the economy once the pandemic fears erode, and may prove to be very difficult to eliminate.

A cursory look at the data shows that many of those out of work will be getting paid more not to work than they did to work.

Examining Bureau of Labor Statistics data, this article in The Street found “the median income for a full-time wage or salary worker on a weekly basis was $936. For a 40-hour work week, this translates to a yearly income of approximately $48,672.”

Comparatively, a 2019 USA Today article evaluating 2018 state unemployment benefits data reported an average national unemployment payout of $347 a week. Add to this the $600 a week from the CARES Act, and that comes to $947 per week, or $49,244 on an annualized basis.

In other words, the average unemployed person receiving benefits due to the coronavirus shutdown would be receiving more income than the national median income from working. Granted, these figures are broad aggregates, but still illustrate the point that many will be receiving more income being unemployed than they would if they chose to return to work.
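To make the back-of-the-envelope arithmetic explicit, here is a minimal Python sketch that simply reproduces the comparison using the figures cited above; the variable names are illustrative, not drawn from any official dataset.

# Reproduces the comparison above.
# Figures: $936 median full-time weekly wage (BLS), $347 average weekly
# state unemployment payout, and the $600 CARES Act federal supplement.

WEEKS_PER_YEAR = 52

median_weekly_wage = 936     # median full-time weekly earnings
avg_state_benefit = 347      # average weekly state unemployment payout
cares_supplement = 600       # CARES Act federal supplement per week

weekly_unemployment = avg_state_benefit + cares_supplement   # $947
annual_wage = median_weekly_wage * WEEKS_PER_YEAR            # $48,672
annual_unemployment = weekly_unemployment * WEEKS_PER_YEAR   # $49,244

print(f"Annualized median wage:           ${annual_wage:,}")
print(f"Annualized unemployment income:   ${annual_unemployment:,}")
print(f"Difference (benefits minus wage): ${annual_unemployment - annual_wage:,}")

On these cited figures, the gap comes to about $572 a year in favor of staying home.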

The federal supplements are currently scheduled to last four months – roughly to the end of July.

Now imagine, under an optimistic scenario, that most of the nation begins to wind down its economic shutdowns by mid-May or early June, meaning many workers would still have four to six weeks of eligibility to receive the generous unemployment benefits.

Many of the businesses seeking to re-hire workers to ramp production and services back up will find it difficult to do so. Unemployed workers who can receive more income staying at home than returning to work will choose to stay at home as long as the unemployment checks continue to roll in. Most states have waived the requirement that recipients be actively seeking work, so there would be no pressure to return.

For many, returning to work would make them financially worse off. Some employers would also offer benefits like health insurance, but many jobs in the hospitality industry – where the majority of jobs have been lost – do not. While many would be eager to return to work to regain a sense of purpose, many others would make the economically rational choice to continue receiving the higher level of income while avoiding the disutility of work.

And this effect would reach beyond just those who could receive more income staying at home. For some, even the opportunity to earn more money working rather than remaining unemployed would not be deemed worth it, once the marginal benefits and costs are taken into consideration.

Say someone receiving $947 per week in unemployment benefits has an opportunity to return to a job paying $1,000 a week. Obvious choice, right?

Maybe not.

The choice isn’t simply between receiving $947 a week versus $1,000 a week, but also between working 40 hours a week and working zero. On the margin, this person would receive $53 more a week but would have to work 40 hours to earn it, meaning that returning to work would pay about $1.33 per hour. Many would find this unappealing.
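The same marginal calculation can be written as a short Python sketch; the function name and figures are purely illustrative.

# Marginal hourly pay from returning to work, net of the forgone benefits.
def marginal_hourly_pay(weekly_wage, weekly_benefit, hours_per_week=40):
    """Extra pay per hour worked once forgone weekly benefits are subtracted."""
    return (weekly_wage - weekly_benefit) / hours_per_week

# Using the figures from the example above: a $1,000 wage vs. $947 in benefits.
print(marginal_hourly_pay(1000, 947))   # prints 1.325, i.e. about $1.33 per hour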

These additional federal benefits will surely provide a much-needed financial lifeline to millions forced out of work. But it’s also important to acknowledge that they will make it far more difficult to get the economy going again: many businesses will struggle to staff their operations while the benefits continue.

The notion that generous unemployment benefits discourage work is not some right-wing or free-market ideological talking point. Even the New York Times’ resident left-wing economist Paul Krugman acknowledges that extended unemployment benefits will likewise extend higher levels of unemployment. In his 2010 economics textbook, Krugman stated, “Public policy designed to help workers who lose their jobs can lead to structural unemployment as an unintended side effect.” He explains that granting more generous benefits “reduces a worker’s incentive to quickly find a new job. Generous unemployment benefits in some European countries are widely believed to be one of the main causes of ‘Eurosclerosis,’ the persistent high unemployment that affects a number of European countries.”

Moreover, these benefits will likely prove to be very politically difficult to end. Indeed, before the first checks have even been cut, Nancy Pelosi has been promoting the idea of extending the benefits through September. 

Imagine if unemployment remains high, perhaps in or near double digits, and Congress finds itself debating whether to cut millions of out-of-work Americans off from these federal benefits just two months before a national election.

Any guesses how that turns out?

The government has shut down the economy, forcing millions out of work. It’s understandable for it to also take measures to cushion the financial blow dealt to those made unemployed by its decision.

What’s also important is to understand that these actions will most likely prolong any desired “re-start” of the economy, and these supposedly temporary unemployment benefits will prove to be very difficult to eliminate in an election year. 

Panic Buying, Medical Rationing Underscore Importance of Free Markets

The recent coronavirus panic has provided a stark reminder about the scarcity of economic goods. From people hoarding and stockpiling common household items like toilet paper and hand sanitizer to the downright morbid reports of doctors in Italy and Spain having to pick and choose who should receive medical care, the issue of resource scarcity has been thrust front and center.

To be clear, when economists speak of scarcity, they don’t just mean empty shelves or a general lack of supply of something. Rather, scarce goods are objects of choice: a good’s use for one purpose or user precludes its use for another purpose or user.

A bottle of hand sanitizer is scarce because when one person uses it for his hands, it is not available for another person’s use. Ventilators and hospital beds are also scarce; if Jane is using a bed and ventilator, it is not available for John’s use.

This leads us to a key economic truth: all goods must be rationed. How a society overcomes this issue of scarcity and the method of rationing scarce goods determine that society’s well-being and standard of living.

 When the method of rationing facilitates efficient allocation of resources toward society’s most urgent needs, while encouraging productive behavior, the economy will flourish. If an inefficient means of resource allocation is used, poverty and shortages follow. 

Moreover, the issue of scarcity gives rise to the dilemma of multiple people desiring to lay claim to the same resource. Therefore, the method by which scarce goods are allocated will determine how people compete to obtain that good. 

So, what are some methods by which scarce goods are allocated, and what does the current crisis reveal about each one?

First come, first served: Under this method, whoever is first to claim or physically obtain the good gets to keep it. Time becomes a currency of sorts in this method, as those willing to forego other uses of their time in order to be among the first in line will be rewarded. It may also involve a little luck as well, with those who happen to be closest to some valuable good having the greatest ease of getting to it first. 

We’ve witnessed this method emerge with the panic buying of toilet paper and hand sanitizer, because prices have not been allowed to adjust due to anti-price-gouging laws. Those willing and able to get to the front of the line clear out the shelves, leaving nothing for everyone else.

When freely adjusting prices aren’t allowed to work, and instead a method of first come, first served emerges, the cost to consumers is time. Those willing to pay the highest cost in terms of time (i.e. spend hours waiting for a store to open so they are first in line) acquire the most goods.

Unfortunately, this method does not allow prices to reflect relative demand and scarcity, blocking the valuable signals that guide producers to direct goods where they are most urgently needed. Nor does this method encourage productive behavior, as it rewards those consumers who spend more time waiting in lines rather than working.

Critics claim that allowing prices to rise rapidly during emergencies may price some people completely out of the market for a much-needed product during a time of distress. But empty shelves created by shortages also force many to go without. And the only way to bring prices back down without causing shortages and imposing heavy time costs on consumers (via long lines) is to allow prices to signal producers to direct current supplies to where they are scarcest and to incentivize them to produce more of the good in question. Freely adjusting prices can rapidly enable supply to surge to meet demand, bringing prices back down.

An authority distributes goods based on “need”: Under this method an authority figure decides who gets what, by determining who is in most desperate need. Concentrating so much power over scarce goods into the hands of a single person or committee invites corruption. As such, people are incentivized to bribe or threaten the decision-makers to obtain what they desire. Lobbying becomes more rewarding than investments in productivity.

Moreover, attempting to distribute by “need” subjects distribution to the arbitrary definition of “need” by the authority figure. Potential consumers are incentivized to remain “needy” according to the definition of the authorities in order to gain access to goods and services. Think of the poverty trap created by the welfare state.

This also gives rise to the rationing of medical care we’ve seen emerge in countries like Italy and Spain, where the authorities are determining that young people are more worthy of scarce medical care during the coronavirus pandemic than older people who have fewer quality years of life left. 

This method also removes crucial price signals that would both incentivize increased production of those goods and services in most urgent demand, and the distribution of these goods to where they are most urgently needed. The costs can be fatal.

People have little incentive to be productive out of fear of losing access to goods and services because the authority may not deem them “needy” enough. 

Neither of those options seems like a particularly efficient (or fair) means by which to allocate scarce resources. Which brings us to:

Exchange of private property with freely adjusting prices: Private property implies that goods have an owner, and that owner is the one with just and legal authority to determine how that good is used. The owner can consume it, use it for productive purposes, stockpile it or trade it. One acquires rights over (already owned) property through voluntary exchange, whether those exchanges involve goods for goods, goods for money, or money for labor.

Under such a system, in order to compete for desired goods, one must offer something of value in exchange, unlike the other previously mentioned methods. This incentivizes greater productivity – the key to improving the standard of living for a society.

Furthermore, not only does this system create a greater abundance of goods and services desired by society, but it more efficiently allocates them to their most urgent uses. 

Price signals provide valuable information and incentives to market participants. High prices of relatively scarce goods incentivize consumers to economize on the more expensive goods, while also encouraging producers to create more of that good in pursuit of higher revenue and profits. Shortages vanish.

Low prices encourage consumers to buy more, while telling producers that their productive resources are more urgently needed elsewhere. Surpluses are eliminated. 

The method a society chooses for allocating scarce resources will generate very different types of behavior, and very different results.

The coronavirus panic has revealed that when government interferes with market prices and the exchange of private property, other means of distribution will emerge. These other methods, however, are far less efficient and far less fair.

A system based on private property rights and free exchange based on freely adjusting prices provides the framework for the most efficient and fair allocation of scarce resources, while also encouraging more productive activity. The result is a more prosperous society, one far better equipped to meet society’s most urgent needs, especially so during times of emergency.

 

Bradley Thomas is creator of the website Erasethestate.com and is a libertarian activist who enjoys researching and writing on the freedom philosophy and Austrian economics.

Follow him on Twitter: Bradley Thomas @erasestate

 

 
