The gold standard argument for laissez faire advocates (besides arguments for an actual gold standard) has always been that regulations hurt businesses. By intruding on businesses and forcing them to follow certain rules, they say, government hurts profits, impedes investment, and costs jobs, burdening the economy unnecessarily. Just let businesses use their own discretion, they say, and the market's self-regulatory power will sort things out, as consumers will ultimately avoid those businesses which are negligent or cause more harm than good. Economic data confirms this, they say. But is this really true? Let's analyze.
Regulations Do In Fact Cause Inefficiency
The laissez faire advocates do make some good points in their anti-regulatory arguments. No one can deny that regulations often increase the costs of doing business. By raising those costs, they can reduce the marginal return on capital, which in turn makes investment less profitable. This can reduce capital investment, which ultimately translates into fewer jobs and higher unemployment. Regulations can also impede innovation by slowing the time it takes for new products and services to enter the market.
For example, in the pharmaceutical industry a potentially life-saving drug may be delayed from entering the market for several years due to stringent regulations, and in the meantime people could die from illnesses which could have been treated had those drugs been available. Regulation can also reduce the incentive to put money into developing such products in the first place. Anyone familiar with the pharmaceutical industry knows that drug development is a crapshoot in many ways. Pipelines send capital to highly expensive research facilities, which may or may not develop an effective drug. Even if the research is successful, the company still needs to pass the stringent FDA approval process to get the drug on the market. This may require thousands of hours and billions of dollars spent on things like animal and petri dish testing, controlled trials, long-term studies designed to look for latent side effects, and rigorous, painstaking procedures to uncover potential contraindications and negative drug interactions. Companies can invest billions into a drug over many years, only to go bankrupt right before receiving FDA approval.
The stakes are certainly high, and this often causes pharmaceutical companies to avoid investing in new and cutting-edge drugs from the start, in favor of safer alternatives. For example, opiates have long been the workhorse of pain relief in medicine; however, they carry the notorious side effect of addiction. For decades companies have wanted to develop a holy grail of medicine: a painkiller which is just as effective as opiates but does not carry the risk of addiction and abuse. Such a drug would undoubtedly be revolutionary, and the person or company which discovers it could make billions, possibly even trillions, over the long run. But this is not so simple. To find such a drug, one would have to look outside the opiate family and experiment with new and untested compounds, becoming the pharmacological equivalent of Christopher Columbus or the Apollo space program: boldly venturing into a great unknown whose risks, dangers, and contingencies are simply immeasurable. Many companies have tried and failed to develop such a drug, as terrible side effects inevitably emerge, and many have gone bankrupt searching for this holy grail. Because of the huge financial risks involved, many companies seeking to develop painkillers opt to play it safe and simply develop new kinds of opiates. Humans have been using opiates for thousands of years, and we have known about their terrible side effects for just as long. Today we have hundreds of varieties of opiate drugs, each one possessing the power to turn you into a hopeless junkie. And while we have more varieties of opiates than we know what to do with, we still don't have that magical holy grail of pain medicine that so many have sought and prayed for.
Free market advocates say that the regulatory burdens of the FDA only hurt efforts like the search for alternative painkillers, as the expensive and burdensome approval process raises the cost of cutting-edge research even further, and that this is why so many companies decide to develop only new forms of heroin rather than try to create opiate alternatives. Milton Friedman famously accused the FDA of frustrating drug advancement because of its burden on the process. Let the market decide, say people like Friedman: allow companies to develop drugs and put them on the market without an approval process, and we will see faster innovation and a fast track to panaceas like the elusive non-opiate painkiller.
But would the suggestions of those like Friedman really be the answer? Are agencies like the FDA really just unnecessary burdens on the market which slow down drug development and cost lives in the meantime? Would the free market truly help society achieve important goals such as ending medicine's dependence on opiates to treat pain?
The Laws of Physics and the Existence of Risk
First we must ask ourselves: why are certain industries more heavily regulated than others? Surely pharmaceuticals, airlines, energy, and finance are regulated far more heavily than industries like lawn mowing, retail, information processing, and tourism. Why does Wall Street have more government officials meddling about than Silicon Valley? Does the government just have it out for certain people based on unjustified bias? Is society simply prejudiced against these industries? The answer, of course, lies in economics.
Certain industries are more prone to risk, just as certain activities carry inherent risks. It is far more acceptable for me to go bowling while drunk than to drive a car while drunk. This is because driving, by virtue of the laws of physics, poses a greater likelihood of serious bodily harm should something go wrong. If I make a mistake while driving 72 mph on the highway, the consequences could be far graver than if I make a mistake while bowling. The former carries the risk of death to myself and others in the worst-case scenario; the latter, perhaps a crushed foot, or simple property damage to the bowling alley should my ball roll in the wrong direction.
One does not need a background in economics to realize these simple truths. Not all activities are created equal; some carry greater risks than others. And since all economic activity is simply human activity, this simple truth of the laws of physics applies to industry as well.
It is easy to see how the laws of physics make selling pharmaceuticals or operating a natural gas pipeline inherently risky. A poorly maintained natural gas pipeline can lead to explosions, and indeed such terrible events have occurred throughout history when pipelines were not properly designed or maintained. When you buy ibuprofen or Tylenol from the drug store, you are placing your very life in the hands of people you have never met, trusting that the chemical you are ingesting is safe. Unless you have a degree in chemistry and significant time on your hands, chances are you will never truly verify with certainty that the drug you are ingesting is actually Tylenol. You are trusting that workers at a drug plant you have probably never seen have properly made and packaged the drug, and that it will be safe to ingest. You truly are putting your life in the hands of someone you have never met. And should something go wrong, you could easily die, as was seen in the tragic case of the people who died from ingesting pills they thought were Tylenol but which had in fact been maliciously laced with cyanide solely to hurt people. And while malicious poisonings may be extreme rarities, many people have been killed or injured by drugs which, although made in good faith, contained some impurity or latent side effect that escaped detection due to a mistake. While most of us view taking Tylenol as among the most mundane of activities, it is, under the laws of physics, perhaps one of the riskiest things we ever do.
And while the laws of physics clearly reveal the risks in things like drugs and natural gas, the risks are less obvious in heavily regulated industries such as finance or law. Yet these industries carry inherent risks as well. How do you know whether your retirement manager is truly acting in your best interest? How can you tell whether someone claiming to be a lawyer truly is one? All one needs is a suit and a good act, and a talented deceiver can fool even the brightest of us. Bernie Madoff destroyed the finances of hundreds of people, yet for decades he seemed to be nothing more than a simple financial manager, and a highly respected one at that. The truth is that it is very difficult to discern the frauds from the genuine article in such industries, which is one reason confidence artists are especially fond of posing as attorneys or financial advisors. Simply put on a good act, and people will give you their money. And when you give someone your entire life savings, you are engaging in an act which requires just as much trust and confidence as buying Tylenol from the drug store.
Not to worry, say the laissez faire advocates, for consumers will be able to decide whom to trust and whom not to trust. In fact, the free marketeers go so far as to say that those who support regulation must take all people for fools. Surely consumers are not so stupid as to offer themselves up as prey to the deceivers and negligent individuals of the world. And perhaps the free market advocates are correct: people, though imperfect, are not stupid, and most rational adults are perfectly capable of assessing risk and protecting themselves and their families from these dangers. But would consumers react in a way that would help the markets? What would truly be the economic effects of a caveat emptor standard in the 21st century, with all its complexities and risks?
How Consumers Respond to Risk
Consumers are not stupid; humans are naturally risk averse. We will not expose ourselves to danger unless there is a good reason for it. Consumers, as economic actors, engage in the cost-benefit analysis that all economic actors do. I could, in theory, strap myself to a rocket and shoot myself into the air to get to work each day. But strapping myself to a rocket would be an extraordinarily risky endeavor, by virtue of the laws of physics. And while it might actually get me to work faster than taking the metro or driving my car, my chances of dying or incurring serious injury en route are much higher. Thus, most rational people will easily conclude that taking the metro or driving is the more favorable commute. This cost-benefit analysis is deeply rooted in our psychology and our evolutionarily endowed survival mechanisms. The fact that my mere mention of taking a rocket to work seems ridiculously absurd to most of my readers is evidence of just how automatic and ingrained this sense is in our minds. The ability to engage in such calculations need not be taught; it is second nature to all sane people.
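The rocket-versus-metro intuition can be sketched as a toy expected-cost comparison. Every number below is invented purely for illustration: the probabilities, the dollar stand-in for catastrophic harm, and the "base cost" of each commute are assumptions, not data.

```python
# Toy expected-cost comparison for a risky vs. safe commute.
# All numbers are hypothetical and chosen only to illustrate the reasoning.

def expected_cost(p_harm: float, cost_of_harm: float, base_cost: float) -> float:
    """Expected cost of an activity = routine cost + probability-weighted harm."""
    return base_cost + p_harm * cost_of_harm

# Riding a rocket to work: cheap and fast day-to-day (low base cost),
# but catastrophic if it fails, and failure is not unlikely.
rocket = expected_cost(p_harm=0.01, cost_of_harm=10_000_000, base_cost=50)

# Taking the metro: slower and costlier day-to-day (higher base cost),
# but the probability-weighted harm term is negligible.
metro = expected_cost(p_harm=0.000_000_1, cost_of_harm=10_000_000, base_cost=200)

print(f"rocket: {rocket:,.2f}")  # 100,050.00
print(f"metro:  {metro:,.2f}")   # 201.00
```

Even though the rocket "wins" on the routine cost, the harm term dominates, which is exactly the calculation the essay claims our survival instincts perform automatically.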
And while deciding whether to buy a product may not be as obvious as my example of taking a rocket to work (which is admittedly and purposely an absurd comparison), this same automatic risk aversion occurs in economic transactions all the time. If a consumer is wary of a certain product to the degree that its costs outweigh its benefits, they will forgo that choice in favor of a more sensible alternative. These automatic calculations are made with the information and signals we receive. Some information is all but self-evident, such as the danger of strapping yourself to a rocket: one does not need to be told that it is dangerous to know that it is, as anyone with enough life experience can immediately see the risks (and thus potential costs) involved. This is risk assessment which comes purely from first-hand knowledge, wisdom, and intuition. In other instances, however, we rely on secondhand information: what we see and hear of what happens to others who engage in the activity. In many circumstances this secondhand information can be so powerful that it trumps our primal intuitions about the risks of an activity.
Think for a second about why buying pills at the drug store is seen as so mundane by most people despite the incredible risks. When you buy a bottle of pills, the powder or liquid contained in those pills could be anything; it could be something deadly, and you have essentially no way of knowing until you consume them. Why then do we do it? Shouldn't our survival mechanisms make us more fearful? This can be answered by simply asking yourself: "Why do I trust the pills I buy at the drug store?" Most likely, it is because most of us have never known anyone who died from taking negligently made pills. Most of us have taken thousands of pills in our lifetimes, each one carrying those same risks, yet chances are nothing bad has happened (outside the usual risks of a drug like Tylenol, which can kill you if misused, taken in high doses, or combined with things like alcohol). Most of us have seen friends and family members take pills on a regular basis without dying. The overall consensus we absorb from our culture, our media, and our interpersonal communications is that pills from the drug store are safe to use, and that as long as you follow the directions on the bottle and don't take too much, you will be absolutely fine. Someone pulled out of a deep rainforest, or who has lived in an underground bomb shelter her entire life, might immediately see the risks in taking pills and fear them. But those who are exposed to our culture, and to the attitudes of others, think nothing of it at all.
The information we receive from what others tell us and from what we see others doing is the most important factor in consumer risk assessment. We trust our senses, we trust what we see happening, and we act on information from such trustworthy sources. However, this trust can be shaken or suspended when we perceive great risk or a great unknown. And when we are confronted with such risk, we tend to avoid it.
Take Bitcoin, for example. As an asset which can be purchased, it is perhaps no riskier than other investments and assets available on the market. But Bitcoin is new and different: a supposed currency which is nothing more than a programming script. We have no idea how impervious it is to hackers, and Bitcoin's volatile market has shown that we have no idea how much it will be worth in the future, or whether it will be worth anything at all. As a result, the vast majority of the population has shied away from putting their wealth in Bitcoin. Yet is Bitcoin any riskier than an online bank account? Bank accounts can surely be hacked; you are surely taking a risk when you put your finances into the dangerous world of the internet. But while the first-hand information may indicate that Bitcoin and bank accounts carry similar risks, the secondhand information presents vast differences between the two. The fact is that people trust online banks more, our culture trusts them more, and we have more information about what they are and what could possibly happen, while Bitcoin is surrounded by great unknowns. Even if the risks are the same, we feel much safer with online banking than with Bitcoin. Information is a powerful force.
So what happens when an industry releases a product which does cause harm? We act on our information, and consumers choose what they perceive as the safest path based on that information. Sales of Tylenol dropped sharply when news got out that people had been poisoned by pills maliciously laced with cyanide. This was in spite of the fact that only a handful of people died, that the overwhelming majority of Tylenol bottles on shelves across America were perfectly safe to consume, and that your chances of buying a cyanide-laced bottle were virtually nil. Consumers acted on this powerful and terrifying information, and in their minds the cost-benefit analysis indicated that it was better to play it safe.
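The same toy arithmetic can show why the reaction was, in its own way, rational. Suppose (purely hypothetically) that the statistical chance of buying a tampered bottle stays near nil, but frightening news inflates the probability consumers *perceive* by several orders of magnitude. The decision flips even though nothing about the actual risk changed; all of the numbers below are invented assumptions.

```python
# Illustrative only: how terrifying news can flip a purchase decision even
# when the statistical risk stays tiny. All numbers are invented assumptions.

def expected_loss(p_harm: float, loss_if_harm: float) -> float:
    """Probability-weighted loss from a single purchase."""
    return p_harm * loss_if_harm

BENEFIT = 5.0       # assumed value of the pain relief from one bottle
LOSS = 5_000_000.0  # assumed stand-in dollar cost of a catastrophic outcome

before_news = expected_loss(1e-8, LOSS)  # 0.05  -> far below the benefit
after_news = expected_loss(1e-4, LOSS)   # 500.0 -> dwarfs the benefit

print("buy" if before_news < BENEFIT else "avoid")  # buy
print("buy" if after_news < BENEFIT else "avoid")   # avoid
```

The loss term and the benefit never change; only the perceived probability does, and that is the only input the consumer's cost-benefit calculation responds to.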
When consumers' trust in a product is shaken, they will avoid it, even if the first-hand information indicates that such risk aversion is excessive and perhaps even irrational. Consumers can be so rational that they almost appear irrational. We have all met people who are terrified of flying despite the overwhelming evidence that it is even safer than driving; those people have perhaps never gotten over the feeling in their gut about what could happen on a plane. We were all terrified of flying after 9/11. We were all terrified of investing after the 2008 crisis. We are all rationally irrational when it comes to such things, and when our trust in a product or an industry is shaken, we will avoid that industry, even if such fears are irrational in light of the data.
Why Regulations are Good for Business
The truth is that industries which carry more risk often benefit from regulations, despite the fact that regulations may impede capital and innovation. The laissez faire advocates display a fundamental misunderstanding of human economic behavior when they assume that getting rid of agencies such as the FDA would only benefit industries. Regulatory safeguards convey information to consumers: information that a product or service is under trustworthy supervision. Information which tells us that when we buy a bottle of pills or invest money with a finance manager, we are not merely relying on the seller's word; a third party has our back and is working to ensure that these things are safe.
Without these safeguards, consumers have less information and are subject to more unknowns. And when they are subject to more unknowns, they will engage in the rationally irrational behavior of avoidance at far higher levels than if those safeguards existed. This translates into decreased sales for the companies in those industries, and decreased sales translate into decreased profits. Decreased profits, in turn, should make us question whether the inefficiencies of regulation are truly hurting businesses in the way the free market fundamentalists tell us. Perhaps the pharmaceutical industry would not benefit should the Milton Friedmans of the world have their way. For if we repeal the regulatory safeguards, we remove the trust a third-party overseer creates, which will undoubtedly generate greater uncertainty among consumers. Furthermore, by removing such safeguards, we greatly increase the risk of negligence occurring and of more people dying; even free market advocates acknowledge that such tragedies will occur more often should regulations be repealed. And these events will generate still more uncertainty among consumers, and still lower sales and profits for these businesses.
This is not mere conjecture; it is evidenced by real-world economic data. There was a time when drugs were not regulated by the FDA, and consumers knew that drugs carried risks. Throughout the first half of the 20th century there were tragedies of poorly tested drugs killing consumers. It was the Elixir Sulfanilamide tragedy of 1937, in which over 100 people died from taking an under-tested drug, which finally prompted Congress to pass the Federal Food, Drug, and Cosmetic Act. The law gave the federal government sweeping authority over the marketing of new drugs and was the birth of our modern FDA system. It certainly raised costs for businesses and made it slower and harder for companies to develop and release new drugs. Yet rather than retracting and stagnating, the pharmaceutical industry blossomed in the years and decades which followed into the incredible powerhouse that it is today. By the 1950s hundreds of new drugs had been developed and were flourishing on the market. By the 1970s the number was in the thousands. In the years that followed, the pharmaceutical industry absolutely exploded, creating a revolution of new drugs which changed our society, saved millions of lives, and improved the lives of billions. And of course, the pharmaceutical industry reaped billions in profits.
So why did this happen? If regulations hurt the pharmaceutical industry, why has almost all of the major progress in pharmaceuticals happened after 1938? Why has the industry grown from a small business of dubious trustworthiness into one of the most powerful and profitable industries in the world economy? For those of us who understand economics, the answer is quite simple: consumers had greater reason to trust the industry and bought more drugs; the increased profits were reinvested into making new drugs; and society as a whole benefited. The miracle of the 20th-century pharmaceutical industry is one of the great achievements of mankind, and a classic example of how business, profits, and capitalism can truly benefit society. And this miracle of capitalism is owed largely to the Federal Food, Drug, and Cosmetic Act and the greater trust and certainty it gave to consumers. Had this law not been passed, the miracle might never have occurred.
Now, there are those who claim that our tort law system can pick up the slack and make up for a lack of regulation. And it is true that litigation, combined with liability insurance, does act as a regulatory measure. But the economic data clearly shows that court systems are not enough: they only deal with tragedies after they happen, and they do little to act as a prophylactic against potential harm. The mere fact that a company has liability insurance which dictates certain procedures does not carry as much weight in consumers' minds as the existence of a third-party, financially disinterested overseer. Furthermore, many of the people who call for an end to regulation also call for damage caps on tort claims, meaning that liability insurers will have even less incentive to demand safe behavior. In essence, damage caps merely make negligence cheaper, and when combined with a lack of regulatory oversight, they cause even more harm and increase the chances of accidents occurring. And consumers will, as they did in the days before regulation, respond to this increased risk by avoiding certain industries. Not to mention the inherent unfairness of such damage caps, which allow companies to get away with negligent activities without having to pay for them.
Laissez faire advocates forgo economic science in favor of ideology when they claim that regulations are wholly bad for business. The free market fundamentalists expect consumers to behave in a way our psychology simply does not allow: they are asking consumers to incur greater risk while continuing to purchase these products and services at the same rate. Laissez faire advocates should revisit the old adage that "there's no such thing as a free lunch," because when one looks at the situation through an economic lens, it becomes abundantly clear that a free lunch is exactly what people like Friedman are promising us. To take such a position reflects poorly on their understanding of human nature and economic behavior. The more likely answer, however, is that they are simply ignoring relevant data in favor of ideological and rhetorical goals.
The truth is that certain industries, by the nature of the laws of physics, will carry more risk than others. And consumers are smart enough to recognize those risks and adjust their decisions accordingly. When we remove the safeguards, we are not merely removing approval processes and increasing the likelihood of accidents and tragedies; we are removing an important source of information which consumers rely on. And the lack of information is perhaps even more damaging than the chance of accidents, for as I have mentioned, consumers will be rationally irrational and avert risk when faced with unknowns, even if statistically their chances of incurring harm are small. It is the existence of information which makes markets possible, and when information decreases, people avoid certain activities. By increasing information through regulation, many of these industries have seen a substantial rise in sales and profits. And by increasing profits while simultaneously increasing innovation and the development of new products and services, we create a better situation for all parties involved, businessman and consumer alike. Regulations are good for business.
Those of you following the news have probably heard that implementation of the portion of the Affordable Care Act requiring large employers to provide health insurance has been postponed by a year so that its effects and feasibility can be further assessed. For those of us who follow constitutional news, this should come as no surprise. This action is just the latest of many which reflect a new and perhaps disturbing trend in constitutional law whereby the executive branch exercises more and more control over which laws it chooses to enforce.
In a perfect world, laws are created by the legislature, and the executive branch proceeds to enforce those laws. The executive branch is really not supposed to exercise any discretion in enforcement beyond the powers and duties delegated to it by the legislature. But one of the trends that has developed under Obama's administration is the executive branch exercising greater discretion over which laws it enforces and how. I see this as probably the most troubling development for the constitution under Obama, more so than the activities associated with drones or the NSA. Even more troubling is that it is hard to blame this entirely on Obama, as a dynamic does seem to be developing in which this is becoming a new norm.
Obama certainly has no trouble picking and choosing among laws. When he took office, he notified the Justice Department that, regarding medical marijuana, the federal government would not arrest individuals who, though in violation of federal drug laws, were acting legally under state law (he subsequently changed his mind, then changed it again, and now, with several states having outright legalized marijuana, there is a degree of ambiguity coming from the administration about what it plans to do). However, the federal drug laws enacted by the legislature clearly required the federal government to enforce them against medical marijuana facilities, and under any interpretation of the constitution the executive branch has not only the ability to do so, but the obligation. Certainly the executive branch has discretion regarding funds and the feasibility of enforcement: if enforcing a certain law (like the prohibition of marijuana) carries high costs and logistical impracticability which take away from the government's ability to enforce other, more important laws, the president can order executive officers to give priority to the enforcement of those more important laws. But for a president to refuse to enforce a law purely for discretionary, non-logistical reasons is really an act of the executive branch legislating, and hence constitutes an abuse of discretion by the executive branch.
But when Obama did this, nobody cried foul. Liberals are sympathetic to medical marijuana because they generally do not support harsh drug laws; conservatives are sympathetic because they generally support states' rights. Because of this, nobody actually challenged Obama's decision, since most people believed it was, on the whole, good policy. But the responsibility for creating good policy is supposed to be the legislature's. In other words, in a perfect world, if our federal marijuana laws are bad policy, Congress should go back and change the laws to correct those imperfections.
Then, of course, a series of laws were passed under Obama, including the banking regulations of Dodd-Frank and the Affordable Care Act. Both laws were passed rather hastily, with the understanding that Congress would come back and amend them after committees could fully explore their effects. Dodd-Frank, in many ways, was regulation for the sake of regulation: after the 2008 crisis, everyone agreed that something had to be done to keep such a collapse from happening again, and both conservatives and liberals agreed that reform of the banking laws was the best way to go (even if they disagreed as to which reforms were needed). Though Dodd-Frank contains some good regulations, other parts of it are contradictory or unclear and certainly needed to be addressed before enforcement. Obamacare, too, contained many complicated, sometimes contradictory provisions. The Senate passed it to get these reforms in place, but with the understanding that it would obviously have to be amended in the future to iron out its problematic features.
But then, in 2010, a wave of congressional elections brought a new crew of tea party legislators to Washington. These individuals had a radical agenda and sought to bring about radical reform. Their new attitude was not merely to disagree with policies they didn't like, but to do everything in their power to repeal them. As a result, the filibuster has been used more under Obama than under all other presidents in history combined. Washington came to a complete gridlock, barely able to pass a budget without it turning into some historic showdown.
This created a problem for Dodd-Frank and Obamacare, because the gridlocked Congress has been unable to make any amendments to them. Instead, the tea party legislators have called for their outright repeal, and rather than see the laws corrected to function better, they have simply refused to compromise to any degree, holding up the entire process and demanding that they get their way. The behavior of these obstructionists has been quite unreasonable; the Democrats, as predicted, have not caved in to their radical demands, and instead both groups have been locked in a perpetual staring contest to see who blinks first.
We are now dealing with a quasi-suicidal legislature, as members of Congress have shown their willingness to plunge the nation into great harm in the name of demanding reform. Things like the sequester, which had for decades been used as a simple incentive to encourage agreement (on the understanding that Congress would never be so foolish as to let it actually take effect), have now actually gone into effect. Budget showdowns have pushed the government to the brink of an unnecessary default. For whatever reason, the tea party believes there is a level of urgency which justifies obstructionist behavior that harms the nation in the short term, perhaps in the belief that in the long term it will be good for the country and the constitution.
But there is very good reason to believe that in the long term such behavior will be bad for the country and the constitution. Unable to get Dodd-Frank amended, the executive branch has been withholding enforcement of nearly two-thirds of its provisions, because it realizes that many of those provisions would harm the economy if enforced and should be amended to be more functional. In a perfect world, the legislature would set up committees, call in experts and economists, and figure out the best ways to amend the law so that it does not cause harm. But this is not a perfect world; the legislature has refused to do any of this and has instead made sweeping demands on the country. Thus something as simple as amending banking regulations is used as collateral for wholly unrelated goals, like getting rid of the EPA. This is because the obstructionists see themselves as under a duty to bring about wide-sweeping change: they are not merely against individual laws they think are bad, but against an entire idea of what the government should be doing. Financial reform and environmental law, two wholly separate things, are thus treated as the same in their minds, because both are part of what the obstructionists view as the enemy.
Now the ideas of the obstructionists are in fact absurd, and dangerous, and the executive branch realizes this. In the meantime, however, it must deal with a Congress that actually wants to see bad laws enforced as written, laws which will hurt the economy, because the obstructionists unreasonably believe that this will somehow change the tide and bring about a radical new era in American history. In reality, the most likely outcome is that it will only make things worse.
So, stuck between a rock and a hard place, the Obama Administration has chosen to take matters into its own hands and simply refuse to enforce laws which it knows will hurt the country. It has done this with a number of laws, including Dodd-Frank and now with the latest decision regarding Obamacare. At the same time, the Administration is using its regulatory powers, and the power of executive order, to the fullest extent it can, since the gridlock prevents the legislature from doing its normal job of reacting to changed conditions and altering government practices to ensure that laws are well made.
Just like Obama’s decision not to enforce federal marijuana laws, the actions regarding Dodd-Frank and Obamacare have not been met with scorn by either party, despite their questionable legality. Conservatives are happy because they simply don’t want Dodd-Frank or Obamacare to be enforced, and so any delay in enforcement is seen by them as a vindication that these laws are bad. Liberals don’t want to see the country harmed needlessly because the government is forced to enforce bad policies. And it is hard for any reasonable person to disagree; after all, nobody wants bad policies to be enforced.
But this doesn’t mean that this dynamic is a good thing for the country. The executive branch has the important job of enforcing the laws made by Congress, but as commander in chief, the president also has the job of protecting the country from harm. If we find ourselves in a position where the legislature is actively trying to harm the country, the president is in a constitutional conundrum: one can argue that whether he enforces harmful laws and arbitrarily hurts the country, or refuses to enforce certain parts of certain laws, he may be acting unconstitutionally either way. Certainly, enacting purposely harmful laws sets a terrible and dangerous precedent, but refusing to enforce laws is a dangerous precedent as well.
What is happening now is that Congress is undermining its own authority and giving more authority to the presidency. A future president could decide to radically ramp up enforcement of federal drug laws, or refuse to enforce them at all, so that citizens could have their legal duties and situations changed purely by presidential decree rather than by legislation. A future president could decide to enforce all of the provisions of Obamacare or Dodd-Frank, or perhaps none of them. More alarmingly, a future president might decide to halt Planned Parenthood, or the EPA, or even the military, all without any congressional approval. In effect, you would have the president legislating solely from his own discretion, which would destroy the balance of powers that constitutional governance is meant to bring about.
And finally, the day may come when the president actually changes the substantive portions of a law solely through his presidential powers. A time may come when Obama decides that it is simply not worth waiting for Congress to change the terms of Obamacare or Dodd-Frank, and he may simply expand his regulatory powers to change the actual rules of the acts themselves. This would be devastating for democracy and would set us on a very serious and dire trajectory.
The obstructionists see themselves as trying to restore constitutionalism and correct what they see as a deviation from the principles and rule of law on which this country was based. Thus far, however, they have only succeeded in creating a dynamic whereby we move further away from that ideal. If the legislature is obstructionist and refuses to participate in its role in governance, the executive branch faces the decision of either letting the whole thing fall to ruin or taking control and maintaining stability. The executive branch will always choose the latter, and indeed the latter is probably the best and most constitutional decision given the situation it is placed in. For the long-term health of our Republic, however, this is a dangerous road to go down, and I fear it may constitute the end of American democracy. Rather than turning into the free market, decentralized paradise that the obstructionists hope for, in all likelihood the country will move in a dictatorial, centralized direction, with even more power consolidated into the executive branch, not less. This coincides with a deepening divide among the public, a recipe for disaster, as we face the possibility that a presidential election could radically shift policy from one extreme to another depending on who wins. The genius of the balance of power created in the constitution is that no matter who is elected president, there are enough checks on power that the result will not be a radical shift in either direction. If that balance is lost, we face the possibility that a radical president could decree his way into transforming us from a Republic into a much riskier form of government.
If the legislature can come to its senses and once again work together to make and amend laws, then perhaps this will be only a blip in American history and not a long-term trend. But if a gridlocked and dysfunctional Congress becomes the new norm, then I predict we will see new norms develop for the presidency as well, and future generations may live under a government far different, and far more dangerous, than the one we grew up with. Either way, this is something all citizens should be concerned about.
Ahh, the widget. Anyone educated in the English-speaking world has probably heard of widgets: the fictional, abstract every-good which is to commodities what Blackacre is to real property. For many decades widgets have been used to teach students about mathematics, economics, contract law, and perhaps a variety of other concepts too.
Widgets are an important teaching tool, allowing educators to convey often complex concepts in terms that are easy for the student to understand. However, there comes a time in everyone’s life when he or she must break free from the widget economy. The problem with widgets is excellently summarized on Wikipedia’s widget page, where it references a scene from the movie “Back to School” in which Rodney Dangerfield’s character, Thornton Melon, sits in an economics class as a classic widget example is being given. The normally crude Melon offers the professor an important insight, criticizing the hypothetical business model for making unrealistic assumptions about business, including the use of widgets. Dangerfield points out that when using widgets one is not taking into account things like whether they are fungible, and that every industry has certain characteristics which need to be taken into account. By the way, here is a link to a YouTube clip of the scene, which I think is a superb criticism of how academic economics can be completely out of touch with real-world business practices.
I like that scene from “Back to School” a lot because it excellently captures the problem of widget economics: the professor, speaking in an arrogant, pedantic, aristocratic British accent, confidently outlines his model of a fictional firm, while the unsophisticated yet business-savvy character played by Dangerfield tears into the lecture, pointing out all of the unique costs and considerations that go into any real business model which the professor seems to leave out entirely. This points to the very real fact that many of the so-called experts in business, sitting in their academic halls and think tanks, have no actual experience running a business themselves and can be profoundly ignorant of many of the real-life pressures involved in doing so. I believe there is something to be said for the fact that businessmen, living and working in the real economy on a daily basis, can have real insights into how economies work, insights which are often lost on the highly educated yet experientially deficient academics. And the problem of the widget economy is one such example.
Now, when I use the phrases “widget economy” or “widget economist”, I am not referring to any one person or any one thing, but rather to the attitude, seen in so many people, that the real economy functions the way widgets do. One becomes a widget economist when one gives the impression that the real economy is, in fact, a widget economy. The widget economy is a world where every industry functions the same way: the laws of supply and demand affect them all equally, and the strategy for success is the same as well. Widget consumers are all the same too, always pushing toward the market-clearing price, never missing a beat. Goods are all fungible to the same degree, and elasticity is uniform unless the professor purposely changes it. The widget economy creates the impression that all industries work the same way. Now, I know what you are thinking: surely no one actually believes that all industries work the same way and face the same pressures. And that may be true; even the dullest students can probably figure out that there are real-world pressures which can affect the functioning of an industry in various ways.
But the problem of widget economics is not that it causes students to literally believe that all industries are the same. The problem is that it creates a presumption that the normative model of a business should be the one seen in widget economics until proven otherwise. Thus, when one is confronted with an industry that features heavy public involvement or social control, the presumption is that this is the result of meddling public officials who are ignorant of economics, and that if you simply left it up to the supply and demand of the market, it would figure itself out. This presumption is not only wrong, but dangerously wrong. I am a proponent of the belief that all industries are unique, and that there are always real-world considerations involved like those mentioned by Dangerfield’s character. And while the widget-model firm is a helpful tool for teaching general concepts, our first step in looking at an industry should be a descriptive stance, rather than a normative one.
Ultimately, economics is a behavioral science, and in the behavioral sciences, when you see a behavior or practice which is nearly universal among a species, the presumption should be that it has some functional purpose behind it. To say that things like the welfare state, the regulation of finance, or public involvement in healthcare are simply the result of foolishness with no real-world benefit or purpose would be to say, essentially, that they are random flukes. It is true that governments are capable of making mistakes and crafting bad policies, just as all living entities are capable of behaviors which are simply mistakes. However, the practices I just listed are nearly universal in advanced market economies around the globe, which would put them absurdly outside the margin of error if they were merely flukes which harm society, put into place by the very institutions whose job it is to protect society. Even the excuses put forth by Public Choice theorists, blaming these things on some flaw in democracy, do not seem adequate: for one, many of these practices existed in non-democracies, and moreover, democracies have shown the ability to self-correct and change laws over time if they are truly causing harm and bringing no benefits. Now, I suppose a reasonable person, after thorough investigation, could conclude that these are all just flukes; still, I believe the most prudent and scientific approach is to assume that there is some underlying functional explanation for a near-universal policy, an assumption that should be rebutted only after clear evidence is presented.
Widget economics, on the other hand, goes in the reverse direction, saying that industries should function like the theoretical firms presented to us in class rather than as observed in the real world. I find this approach horribly unscientific. There are some industries which simply shouldn’t be left to the traditional widget paradigm. I will now delve into two industries which are constantly criticized by widget economists for not following the rules: healthcare and education. The widget economists proclaim that if we simply left these up to supply and demand, everything would work its course. Hopefully I can show why that presumption is utter nonsense.
Healthcare is an industry with heavy public involvement nearly all over the world. As I pointed out, this near-universal practice of social control of healthcare should put it far outside the margin of error of a mere fluke which violates the laws of economics. And the overall trend of the past 100 years has been more public involvement in healthcare, not less; even America, that bastion of the free market, recently enacted more control over it. So what is it about healthcare that makes it so likely to be controlled?
For one, demand for healthcare is nearly completely inelastic, for obvious reasons. When the alternative to getting a certain procedure done is death or severe suffering, most people and their families see skimping on their own survival as a non-option. Because of this, most people will do whatever it takes to pay for vital expenses, including draining their entire life savings or incurring massive loads of debt. This makes it difficult for the traditional rules of neoclassical economics and marginalism to work. Under marginalist theory, the price of a good can eventually rise to a point where many potential buyers decide that it simply is not worth the opportunity cost and opt out, which brings the price to an acceptable equilibrium. But healthcare is not your average widget: its price can become enormous, and demand does not seem to become any less. True, there are many impoverished people in America for whom healthcare is simply too expensive, and they simply cannot afford it. But that is a matter of physical impossibility: an individual who does not possess the money needed, or the means to borrow it, is simply unable to purchase care. Virtually everyone who can buy healthcare within the realm of physical possibility does so.
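To make the contrast concrete, here is a small Python sketch of the point. All of the numbers and elasticity values are hypothetical, chosen purely for illustration; it compares a constant-elasticity demand curve for an ordinary widget-like good with one for a nearly inelastic good like healthcare:

```python
# Toy constant-elasticity demand model: Q = scale * P^(-e).
# All numbers are hypothetical and chosen only to illustrate the argument.

def quantity_demanded(price, elasticity, scale=1000.0):
    """Quantity demanded under a constant price-elasticity model."""
    return scale * price ** (-elasticity)

def spending(price, elasticity):
    """Total spending on the good at a given price."""
    return price * quantity_demanded(price, elasticity)

# An ordinary widget-like good (elastic, e = 1.5) vs. a healthcare-like
# good (nearly inelastic, e = 0.1): see what doubling the price does.
for label, e in [("widget-like (e=1.5)", 1.5), ("healthcare-like (e=0.1)", 0.1)]:
    q1, q2 = quantity_demanded(100, e), quantity_demanded(200, e)
    print(f"{label}: doubling price cuts demand by {100 * (1 - q2 / q1):.0f}%, "
          f"and changes total spending by {100 * (spending(200, e) / spending(100, e) - 1):+.0f}%")
```

Under these assumptions, doubling the price of the elastic good cuts demand sharply and total spending falls, while for the nearly inelastic good demand barely moves and total spending rises, which is exactly the pattern the paragraph above describes.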
Now, one can look at this and say that it is simply a problem of supply and demand: healthcare purchasers outnumber healthcare providers to a degree which causes these high prices, and in a free market you would simply get more doctors entering the system to profit from this disparity until an equilibrium is reached. Perhaps, in the long run, more healthcare providers and innovations in technology could do this, but I believe it could take many years. And for reasons I will describe later regarding education, the price of medical school has risen to a degree that it actually acts as a prohibitive measure. Medical schools are incredibly expensive to operate, much more so than almost any other field, because of the equipment, drugs, and subjects needed to teach medicine and the time it takes to train a doctor to competence. Many universities are hesitant to open medical colleges because the upfront costs are so high that even if the long-term operation would be profitable down the road, the profits are so far down the line that raising the needed capital is all but impossible for many schools. Of course, one possible solution could be to simply eliminate medical school as a requirement for licensing and allow anyone to be a doctor. Even though this solution is enthusiastically supported by many widget economists, its practicality and desirability are questionable. Much of the regulation in industries like medicine or law has come from within the industry itself, through things like medical associations or the Inns of Court. If consumers had to navigate a market where many practitioners are outright charlatans, it would hurt public trust in the industry itself, discouraging innovation and entry into the field.
Another possibility is that consumers would eventually learn to go only to people with the letters “M.D.” or “D.O.” after their names, and that hospitals would hire only such individuals as doctors, in which case you are simply left with a market situation that hardly differs from the one we have now.
Now, one could say that this situation of chronically high costs is simply a result of price fixing by governments: by enacting legislation which caps the charges for procedures, the government raises the opportunity cost for those seeking to profit by becoming healthcare providers, thereby creating chronic shortages and high costs which are never abated. For one, if that were the case, we would expect prices to inflate even more in the absence of price restrictions, which could make even more people unable to afford healthcare. Or perhaps, given its seemingly ceilingless inelasticity, society would just end up spending more on healthcare in total (and this higher ratio of healthcare costs to GDP does seem to be observed in countries with a more “free market” healthcare system). However, while I do think there may be some truth to the notion that price fixing plays a role in all of this, I believe the practice of fixing prices actually emerges from market mechanisms themselves, and not from governments. And here’s why.
Most healthcare patients end up purchasing insurance to help pay for their healthcare costs, and indeed, health insurance plays a huge role in the US healthcare system. But health insurance is not your average insurance. Insurance is generally an industry whereby the insurer agrees to pay whatever unexpected future costs the policyholder incurs that the parties agree to cover. Car insurance covers possible auto accidents, casualty insurance covers the costs of property damage, life insurance pays out upon unexpected death (unless of course it is whole life insurance, but let’s not get into that). The insurer makes a bet with destiny that the person covered will not incur these costs, and so long as a large enough share of policyholders do not incur them, the insurer walks away with a profit. The insurer makes his money, and the policyholder gets the peace of mind that comes with owning insurance: a win-win situation which even a widget economist can understand.
But health insurance is different. Though it does cover unexpected emergency costs like illness or accidents (you can rest assured the policy is worded to get out of as many of these as legally possible), health insurance also covers routine medical procedures like check-ups and prescriptions. If the health insurer is making a bet with destiny, it is one he will lose, because for routine care he is essentially acting as a payor for expected expenses rather than an actual insurer of unexpected future costs. So how does the health insurer make money? And more importantly, why do people even bother to purchase health insurance to pay for things like routine check-ups? The answer lies in the relationship between healthcare providers and insurers. One would expect covering routine costs to bring no profitability at all, and yet health insurance is a gigantic and profitable industry. This is because healthcare providers and insurers make deals regarding policy coverage, and in these deals they do a variety of things, including agreeing on the costs of certain procedures. For any given procedure, such as a colonoscopy, the insurer and provider agree on a fixed price which the provider will charge for each covered patient. These agreements cover a wide range of procedures and thus create an example of endogenous price fixing within a free market, with all of the problems that price fixing can bring. The rates agreed upon are usually favorable to the insurer, so that he pays lower costs, and to make up for this below-market-clearing price, healthcare providers charge higher rates to the non-insured. Thus, the uninsured routinely pay higher prices for healthcare and effectively subsidize the insured.
And because of this price disparity, the incentive to purchase insurance is incredibly high for the consumer, because otherwise they are needlessly paying a higher price.
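A toy calculation can illustrate this cost-shifting dynamic. The numbers below are entirely hypothetical: assume a provider must average a fixed revenue per procedure, the insurer negotiates a discounted rate for its members, and the uninsured list price is set to make up the shortfall:

```python
# Toy cost-shifting model with hypothetical numbers: the provider needs a
# fixed average revenue per procedure, the insurer locks in a discounted
# rate for its members, and the uninsured list price rises to cover the gap.

def uninsured_price(avg_revenue_needed, insured_rate, insured_share):
    """List price the uninsured must pay so the provider still averages
    `avg_revenue_needed` per procedure across all patients."""
    uninsured_share = 1.0 - insured_share
    return (avg_revenue_needed - insured_rate * insured_share) / uninsured_share

# Suppose the provider needs to average $1,000 per procedure and the
# insurer has negotiated an $800 rate for its policyholders.
for share in (0.5, 0.8, 0.9):
    print(f"insured share {share:.0%}: uninsured list price ${uninsured_price(1000, 800, share):,.0f}")
```

Under these assumptions the uninsured price climbs from $1,200 to $2,800 as the insured share of patients grows, which sketches the feedback loop in the text: the more people carry insurance, the steeper the penalty for going without it.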
The insurer, then, by cutting down on routine sunk costs via pricing agreements, is able to profit from the normal insurance “bet with destiny” paradigm. A large percentage of the price the purchaser pays for health insurance simply goes towards the discount on routine healthcare, while the rest goes towards the normal insurance paradigm of pooling money in the anticipation that only a few policyholders will incur major costs, leaving the insurer a profit. But wait, there’s more. Because pricing agreements with healthcare providers are tedious and a lot of work, insurers and providers tend to develop reciprocity agreements whereby the provider agrees not to accept other kinds of insurance, and the insurer agrees not to enter into agreements with hospitals run by other healthcare corporations. This in effect keeps insurance local in its scope, so that in a city which might have 100 hospitals and clinics, only a portion of them can accept the average policyholder’s insurance, limiting where he can obtain care. This process localizes healthcare to a very fine degree, and those who have called for lifting laws that ban interstate healthcare providers are perhaps ignorant of how little good this could bring. Even within a state, one can find that the healthcare providers and insurers on one side of the state are completely different from those on the other side. And even if one could have interstate companies, the chances of this increasing the options for a patient or decreasing costs do not seem likely.
So, the result? Treating healthcare like a widget does not seem to reduce costs. Because healthcare is so expensive, due to its virtually inelastic demand, most citizens need to purchase health insurance to afford medical care, and they often do so through employers who purchase complete packages of coverage. And because so many citizens have health insurance, healthcare providers need to accept insurance in order to actually get paid. But due to the unique nature of health insurance, which sets it apart from other forms of insurance, insurers only agree to pay for healthcare costs if they can get the provider to agree to a set of conditions, including pricing agreements and restrictions on accepting other brands of insurance. This price fixing lowers the profitability of healthcare, which not only creates the economic calculation problems of price fixing, but also forces providers to raise rates for the uninsured, thereby pushing more consumers to purchase insurance and making providers even more dependent on the restrictive agreements of insurers. Consumers are thus left with chronically high prices, which only seem to go higher, with no increase in supply, as the costs of entry and the pricing agreements prevent this market mechanism from working as neoclassical economics tells us it should. The free market and the widget laws of economics do not create better quality of care or cheaper prices, as we are told they would.
This creates a very real argument for things like single-payer insurance, or schemes seen in foreign jurisdictions whereby the public directly subsidizes providers by paying for things like medical equipment and drugs, which reduces providers’ sunk costs and allows for cheaper prices to consumers. In many of these countries, procedures can be made cheap enough that many individuals do not need to purchase insurance to avoid insolvency; major procedures and unexpected costs from accidents may amount to only a few thousand dollars, as opposed to the tens of thousands of dollars we see in America on a regular basis. Now, a smart aleck might say that these are simply taxpayer-funded schemes, and that I am just trying to sell you a free lunch. But that smart aleck must contend with the fact that, even when public spending is taken into account, the US still spends far more per capita on healthcare than countries with such schemes, and US citizens are not receiving any higher quality of care or better health out of the disparity. It stands to reason that the cause may be that simply leaving healthcare up to a free market, with its unique pressures, produces higher prices because of inelastic demand, lower doctor-to-patient ratios, and reciprocal agreements between insurers and providers.
One could also take a deeper view of all this and say that the public control of healthcare around the world could actually be a form of rationing. Rationing usually occurs when there is significant scarcity of a vital good, to the degree that simply leaving it up to supply and demand would alienate large parts of the population, which degrades social cohesion and risks the costs of disruption as things like inequity aversion bubble up. If that is the case, then clearly we are suffering a chronic underproduction of healthcare providers, an underproduction which free markets seem unable to cure due to the unique dynamics at play. But why is there chronic underproduction? Why aren’t we seeing the free market produce the number of doctors needed to bring prices back down? That may have something to do with the nature of education, another industry which widget economists seem to completely misunderstand.
Education is another area steeped in public control, and one where widget economists often claim that simple supply and demand would make things better. But once again, the widget economists are mistaken. Education has long been a non-profit industry, funded and even run by public institutions for centuries. While it was once a luxury enjoyed only by the rich, in recent decades it has become far more accessible to the public. This, however, has come with the added cost of higher tuition rates. Widget economists complain that public subsidies are to blame, and that if we simply left it up to the free market, tuition rates would drop and we would get all the benefits widget economics promises, including perhaps a greater number of medical school graduates to alleviate the doctor-to-patient ratio which plagues healthcare.
But these claims misunderstand what actually motivates educational institutions. It would be laughably false to say that institutions like Harvard, Cambridge, or Stanford are motivated purely by profitability, and one of the reasons for education’s unique traits lies in this dynamic. It is true that all educational institutions need monetary inputs which at the very least cover their outlays if they wish to survive. To do this they must be able to attract students, and of course they don’t just want students who pay their tuition; they would prefer remarkable students who will not only give money to the school in the future, but can serve as examples of outstanding alumni to attract future students. Indeed, an institution like Harvard Law School probably receives more benefit from simply being able to say that it boasts graduates like Barack Obama and Mitt Romney (the two major candidates in the last US election) than it does from those alumni who modestly practice law and quietly donate in the future.
Of course, to attract such students a school wants a good reputation, and for most schools that means having noted alumni and noted faculty. To attract noted faculty, a school like Harvard wants the best and brightest it can get: respected scholars who are superstars in their fields. But to do that it must be able to offer a good deal, because if Harvard can’t give someone like Joseph Schumpeter or Francis Crick a good deal, then Yale or the University of Pennsylvania certainly will. And so, in this competitive bidding war between educational institutions, universities try to attract highly rated faculty with good pay and benefits, along with academic freedom and a position which allows them to continue the work that made them great in the first place. And while there are no doubt economists and biologists out there every bit as capable of teaching students as people like Schumpeter and Crick, these individuals simply aren’t as famous, and by attracting famous faculty these schools pay an extra price simply because celebrity attracts higher demand among employers. This of course raises the costs of the university, which are then met either by increased tuition or by donations and grants.
Of course, raising tuition poses a problem for the college: if prices become too high, individuals of true genius who come from families of limited means may find it impossible to attend. Many individuals like Bill Clinton grew up in poverty, yet listing Bill Clinton as an alumnus, and the donations and exposure he brings, has certainly been a worthy investment for Clinton’s alma maters of Georgetown University and Yale Law School. So, in order to bring in the best and brightest, schools offer scholarships to potential students, so that truly remarkable people like Bill Clinton can afford to attend, which in the long run brings more benefits to the school. However, by giving scholarships, the schools are forced to raise tuition rates even higher, so that students from wealthier families are effectively subsidizing those who come from limited means.
And so, with the combination of obtaining highly rated faculty and bright, beneficial students, the university finds itself having to raise tuition in order to meet these effective sunk costs. And as demand rises and more people apply, the schools are faced with the ever increasing task of choosing just whom to let in; after all, an individual may come from money, but that doesn’t mean that he or she will become a truly remarkable alumnus. Powerhouse schools like Yale or Harvard are of course able to set the bar high by imposing strict GPA and SAT requirements, which thin the glut of applicants. But this leads to a trickle-down effect whereby up-and-coming institutions step in and offer deals to individuals who have good grades, yet have enough money that they would not qualify for scholarships from higher rated schools. And so, with time, the institution hopes to improve the caliber of its alumni; by having distinguished alumni it will attract distinguished students and distinguished faculty, which overall raises costs. This tends to mean that higher rated schools charge higher tuition, which makes them even more dependent on using scholarships to attract those diamonds in the rough. Lower rated schools have a smaller applicant pool and therefore lower tuition, as more applicants can afford the rates without needing a scholarship, and with their extra cash they can hope to attract highly rated faculty and provide discounts to land distinguished alumni. However, if they succeed in getting distinguished alumni and highly rated faculty, the trend is almost inevitable that they will have to charge ever increasing tuition.
Now imagine for a second that Harvard instituted a new policy whereby it would accept applicants with no regard to scholastic achievement, and instead simply left it up to supply and demand, awarding spots to those who could bid the highest. At the same time, Harvard announces that in choosing faculty it will only hire those willing to work for lower wages, and that as long as they can teach, anyone willing to accept the salary offered will get the position over a comparatively more famous, yet higher in demand, scholar. Ok, now that you are done laughing, I think we can all agree that such a policy would greatly hurt the integrity, quality and esteemed position of Harvard if it were seriously undertaken.
This shows that academia simply has non-profit, quality-based incentives built in, which make the widget model virtually inapplicable to it. And in fact, when one looks at for-profit schools in America, like the University of Phoenix, one finds that not only do students pay vastly higher tuition, they also incur greater amounts of debt and get lower paying jobs than those who graduate from traditional non-profit universities. Now some of this may be due to simple cultural bias towards traditional institutions, but then again, cultural bias does create real costs which must be considered when dealing with such things.
So because of this unique process, many universities are non-profit. And being non-profit, they can actually be inhibited from opening things like medical schools, which as mentioned earlier have enormous upfront costs that make them prohibitive for smaller colleges. This shortage of medical schools limits the number of medical school positions available in America, which not only raises costs via supply and demand, further affecting opportunity costs for potential doctors, but also limits the number of med school graduates. And this small number of med school graduates feeds the scarcity dilemma that worsens the absolute clusterfuck that is healthcare, which I described earlier.
The solution? Well for one, society has already found ways of ameliorating this, and that is to have public funding for many schools. This can come in the form of direct grants and subsidies, opening state run universities, and of course the taxation subsidy of allowing them to claim non-profit status, the mechanisms of which I described in a prior article regarding charities. Though there are individuals, such as myself, who question whether the current regime is enough. Education is greatly important in our society, and I consider it a public good to have a higher number of educated persons in a society. Furthermore, given the economic situation in the developed world, education is practically a must for most people if they want any hope of obtaining job positions which give them a livable income and opportunities for advancement. This makes education like healthcare in that demand is quite inelastic, which lowers the chances of prices ever going down if the government were to truly “get out of education”. If individuals under the current semi-subsidy situation are already willing to go into debt by as much as $300,000.00, I find it extremely hard to believe that they wouldn’t be willing to pay that amount or even more if education were left purely up to market forces with no public funding. In other words, not only would education prices not go down, more people would be denied access to education, and universities might have to sacrifice quality in order to meet the increased costs associated with the lack of subsidies. That, to me, sounds like a rather stupid idea, which is why I oppose those who wish to get the government out of education.
In my opinion, our society would probably benefit from even more public investment in education, so that we can increase access for more people. Also, when done in combination with social control measures for healthcare, funding medical schools and boosting the number of graduates can be a great way to alleviate the healthcare problem.
Of course, the widget economists, upon hearing such a proposal, will once again come out of the woodwork and tell me that in order to do these things I will need to fund them from taxation, and since there is no such thing as a free lunch I’m just stealing capital from the private sector, which hurts economic development. Well for one, I could ask: what proof do we have that private individuals are actually going to put that money to better use? Surely some may, but there are also many idiots out there who are going to waste their money on things which bring society no benefit. But that generalized criticism of the concept of opportunity cost is for another day. The best response I would give is simply that by doing things like boosting med school graduates and increasing the number of educated people, I am actually lowering costs for a variety of services, including medical care, since more people are allowed to enter the field. And at the same time, by lowering healthcare and education costs, you are really reducing sunk costs which were doing very little reinvestment in the first place, and by lowering sunk costs you are in fact freeing up capital for individuals to invest in other, more productive places. If one believes in the Laffer Curve (as most widget economists do), this would mean that we are in fact boosting wealth and growth to a degree which more than covers any increase in taxation.
I think it is still quite reasonable to suggest, from the reasons I laid out in this essay, that one can make a very logical and sound argument that the near uniformity of social control and public funding for industries like healthcare and education are not mere flukes caused by the economic ignorance of lawmakers and voters. Rather, this apparent phenomenon exists because these actions serve a functional purpose which benefits society and the markets. Markets have a tendency to promote beneficial policies which are not confined to mere market mechanics, but can spill over into political entities as well. After all, the voters who have supported these policies are the very same market participants who are driving the economy in the first place. They are the market. And unlike the pundits who sit in the Cato Institute and chastise the world for not “understanding” their economic theory, these people actually know what it is like to work in business, and they know that business is not something done on a chalkboard in a classroom; it is something done in the real world, and that means it comes with all the uniqueness, specificity, randomness and socio-cultural phenomena which can arise in the real world. Economics would be better served if it could shed its skin, loosen up and join the real world. Rather than being the stiff, rigid and overly academic field caricatured in that scene from “Back to School”, it should be one far more attuned to the real world. And that means being open ended, that is to say always willing to question ALL the assumptions embedded in its models, and refusing to close itself into unproven absolutes. It should also join the rest of the sciences and be more open to the idea that when confronted with a disparity between theory and real life practice which is outside the margin of error, our first presumption should be that the real world is right and the theory is wrong, rather than the other way around.
And at the center of all this is the need to destroy the problem of widget economics. Now don’t get me wrong, I still think widgets are an invaluable tool for teaching students, and I don’t advocate getting rid of them. However, I think that when teachers use widgets to convey concepts to students, they should include an important caveat: widgets are not real, no two industries are the same, every industry has its own unique pressures which can change the dynamics of supply and demand, and there really are no goods which are going to function exactly like a widget. Either that, or we should just do what my friend at Unlearning Economics suggests, and simply throw neoclassical economics out the window and start all over again from the basics.
One of the more important psychological concepts for those interested in public and economic policy is that of inequity aversion. Inequity aversion theorizes that human beings have a natural tendency to shy away from outcomes which they perceive as unfair or inequitable.
I find this fascinating because I believe that the root of trade and money can really be found in equity (note: by equity I mean the concept of fairness, not any of the other uses of the word). Early trade and barter arose with individuals trading one good for another (or a good for a service). Such calculations had a rule of thumb approach to them which could be said to be rooted in the individual utility driving the actor’s decisions. However, with the advent of standardized commodity units (like furs, bushels of wheat, cowry shells, etc.) a concept arose whereby people could visualize in their minds what was truly a “fair” deal in a way which went beyond a single transaction and could apply to all transactions as a gauge of real value. Cultural norms could help indicate to someone just how many furs were equal to a bushel of wheat, which added a new dimension to trade beyond simple conceptions of utility.
If one takes a pure utility argument to an extreme, then a buyer could pay $1 million for a soda simply because it increases his own utility. Of course nobody from a modern society would even consider such a deal, even if they really wanted a soda, because we know that $1 million is vastly too much to pay for such a good. The idea of someone paying $1 million for a soda is absurd because any rational person could tell you that it is an unfair deal. But beyond the fact that such a transaction simply doesn’t make economic sense, there is also a moral realm to it whereby one could say that actual inequity would be occurring. If an individual were to convince someone unfamiliar with the cultural and market norms of what a soda is fairly worth to pay $1 million for one, most people would say that such an action constitutes fraud, and that the act is not just bad business but bad behavior which amounts to an injustice that should be corrected.
This moral sense of unfairness coincides with pure economic calculations of what a good deal is, and in many ways serves as an extra factor which helps guide economic actors to coordinate goods in an efficient way. And this moral factor is driven not merely by being able to sense when an unfair or fair deal is occurring, but also by our inner desire to prevent an unfair deal from occurring in the first place. It is through this aversion to inequity that we develop standards of what is equitable and what isn’t, and these standards help guide market actors in their own economic decisions in a way that goes beyond mere utility. The buyer is not simply trying to increase her utility or meet her needs when buying a good; she is also trying to do it in a way which she believes is equitable for her. We can see this at work in real life, as in most cultures the greedy businessman who overcharges for goods and underpays workers is seen not just as being bad at business, but as a bad person who is a moral villain. This adds a layer of complication to the argument that actors in a self-regulating free market would simply refuse to do business with unscrupulous individuals, because in addition to refusing to do business with them, people also have a desire to actually enact retribution on them. They do not simply want to harm them in the future by denying them business; they want to make them pay for past crimes as well, in which case the simple market action of finding a different person to transact with may not be enough for people to feel that justice has occurred.
Some of the best evidence for inequity aversion and how it works can be found in experiments from game theory. The dictator game, for example, allows one individual to split a pool of units (often a cake or a pile of money in an amount which is easy to divide), giving a certain percentage to another person and keeping a certain percentage for himself. In the dictator game, the “dictator” has the freedom to choose whatever ratio he wants, so he could give himself the larger portion, give the other person the larger portion, or split it 50/50. Studies of this game have found that the most common choice the dictator makes is to keep everything for himself, but the second most common result is to split it 50/50.
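The setup just described can be sketched as a toy simulation. Everything here is hypothetical: the function name and the distribution of choices are my own inventions, loosely echoing the pattern that keeping everything is the modal choice and a 50/50 split comes second.

```python
import random

def dictator_game(pie=100):
    """One round of the dictator game: the dictator unilaterally picks
    how much of the pie to keep; the recipient has no say at all."""
    # Invented distribution: keeping it all is the modal choice,
    # a 50/50 split is the runner-up, lopsided splits fill the rest.
    keep_percent = random.choices([100, 50, 80, 70], weights=[40, 25, 20, 15])[0]
    dictator_share = pie * keep_percent // 100
    return dictator_share, pie - dictator_share

dictator, recipient = dictator_game()
print(f"dictator keeps {dictator}, recipient gets {recipient}")
```

The key structural point the sketch captures is that nothing in the game constrains the dictator's choice; any fairness that shows up comes from the chooser, not the rules.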
This becomes even more interesting in a variation of the dictator game called the ultimatum game, where the second individual can choose whether to reject the offer. While the dictator still has the sole power to split the units, the recipient now has the power to veto the deal, so that both he and the dictator end up with nothing. Unsurprisingly, in the ultimatum game the dictator, knowing this, is more likely to propose fair deals from the start. Even more interesting, these studies have shown that recipients regularly reject unfair deals, sacrificing their own share of the money in order to prevent the dictator from receiving the larger share. Offers of 70/30 and under tend to be regularly rejected.
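The veto mechanic can be captured by extending the same kind of sketch. Again this is a toy model: the function name is mine, and the 30% rejection threshold simply echoes the regularity described above rather than any precise experimental constant.

```python
def ultimatum_game(offer_to_recipient, pie=100, rejection_threshold=0.3):
    """One round of the ultimatum game. The proposer offers the
    recipient a share of the pie; offers at or below the fairness
    threshold (30% here, echoing the regularity noted above) are
    vetoed, leaving both players with nothing."""
    if offer_to_recipient / pie <= rejection_threshold:
        return 0, 0  # inequity aversion: the recipient burns the pie
    return pie - offer_to_recipient, offer_to_recipient

# A lopsided 90/10 split gets vetoed; a 60/40 split goes through.
print(ultimatum_game(10))  # -> (0, 0)
print(ultimatum_game(40))  # -> (60, 40)
```

Note that a purely self-interested recipient would accept any positive offer, so the threshold itself is where the inequity aversion lives in this sketch.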
These studies tell us a few interesting things about human behavior. First, individuals in the dictator position do deal out unequal splits, sometimes dramatically so (100/0), when they feel they are in a position to get away with it. And while the majority of individuals may give an unfair deal, there is a sizeable minority who strive to give out fairer deals. Second, these games show that recipients will gladly forgo the benefit of the deal if it means canceling out what they perceive as an inequity. Now there have been criticisms from some that absolute value would alter the results, i.e., if you could actually receive $30 in real US dollars rather than $30 in Monopoly money, the recipient would be more likely to take the money and run. However, amazingly enough, experiments designed to test this have shown that the results are not greatly altered; the absolute value of what is being divided has not been shown to correlate with any decrease in the rate of rejection by the recipient, which indicates just how powerful this moral sense of inequity really is in determining human behavior.
And of course these findings have very real relevance for modern economic arrangements. For example, we can see elements of this behavior in the labor movements of the late 19th and early 20th century. Workers addressing what they saw as inequity in economic arrangements would resort to strikes and demonstrations which not only halted production, but also often cost them their very jobs. These actions have puzzled many, as the workers seem to be shooting themselves in the foot; after all, they probably would have gained more income if they had simply taken the deal offered by their employers instead of protesting against it. Those who demonize labor unions use these examples as evidence that labor unions are irrational and harmful to the economy. However, when the concept of inequity aversion is applied to these events, they make more sense, as they could simply be manifestations of the same behaviors observed in game theory, which show just how deeply ingrained morality and a sense of justice are in our economic thought processes.
Perhaps an even more interesting example is communism. Outraged by what they saw as the unfairness of 19th century predatory capitalism, socialists and communists called for a total downfall of the system and created numerous disruptions for society. The revolutions which popped up across Europe inarguably created unrest and probably hurt the poor and middle class just as much as they did the rich. The inefficiencies of socialism in those countries where communism did take hold probably prevented a growth of wealth, to the degree that one could argue that economically the proletariat really saw a zero sum gain; they probably could have made just as much if not more had they simply carried on with capitalism like Western Europe did (and indeed the wealth of Western European nations attests to this). Of course, when inequity aversion is taken into consideration, one could postulate that these actions may have had more to do with principle than with any attempt at replacing capitalism with a more efficient system. In installing a communist regime they ensured that the privilege and status of wealth were destroyed; the proletariat in many ways impeded their own economic future so that they could ensure that the rich would not get away with this perceived injustice.
And this should bring to mind some aspects of the Nash equilibrium in the ultimatum game as compared to the dictator game, whereby the proposers (dictators) in the ultimatum game are more likely to give a fair proposal than in the dictator game, as they understand that if they give an offer which is too one sided they risk losing everything should the receiver veto. In terms of labor history in the West, the fear of losing it all prompted capitalist countries to develop policies such as workplace regulations and welfare states to help calm the proletariat and put them on a stronger footing. And while some may argue that this had the effect of reducing the marginal utility of capital in favor of labor, it helped to reduce the social strife that characterized that age. And if one considers the losses which the capitalists would endure from large strikes and threats of revolution, one could argue that the economy actually achieved more wealth and growth after abandoning laissez faire for more social democratic policies, as doing so reduced the incentives of laborers to sabotage the entire economic arrangement. By realizing what was at stake due to the news coming from places like Russia, capitalist countries realized that perceptions of inequity could cause dramatic problems if they were not calmed.
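The equilibrium logic at the start of this paragraph, that proposers shade toward fairness once a veto is possible, can be illustrated with a toy expected-value calculation. Note that the rejection-probability curve below is invented purely for illustration and is not drawn from any actual study; it just encodes the idea that more lopsided splits get vetoed more often.

```python
def rejection_probability(recipient_share):
    """Hypothetical chance the recipient vetoes, rising as the split
    gets more lopsided (a purely self-interested recipient would
    never veto, making this 0 everywhere)."""
    return max(0.0, 0.9 - 2 * recipient_share)  # share as a fraction of the pie

def expected_proposer_payoff(recipient_share, pie=100):
    """The proposer's expected take once veto risk is priced in."""
    accept = 1 - rejection_probability(recipient_share)
    return accept * (pie - recipient_share * pie)

# With a veto on the table, "keep everything" is no longer optimal:
best = max(range(0, 51), key=lambda pct: expected_proposer_payoff(pct / 100))
print(f"expected-payoff-maximizing offer: {best}% to the recipient")
```

Under this made-up curve the proposer's best offer lands well above zero, mirroring the experimental finding that proposers in the ultimatum game are markedly fairer than dictators.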
Of course, while there is ample evidence for inequity aversion in people, we still find that the dictator is capable of cheating the other person if he can, a true free rider if you will. But the less confident he is that he will get away with it, the less likely he is to attempt such a feat. Furthermore, variations of these games have found that the options available to the players can alter the level of altruistic behavior seen, with proposers who are overly generous under one variation of the game becoming shockingly selfish in another. This suggests that people’s levels of altruism are directly related to their environment, and that the options presented to them can alter what they perceive to be expected of them. And what they perceive to be expected of them can cause seemingly good people to become much more abusive and unfair, which shouldn’t be surprising to those of us familiar with experiments like the Stanford prison study.
Overall the findings from these games suggest a picture of humanity which is far more complicated than the one put out by people like Milton Friedman. Humans are a species not only capable of acting in unfair and inequitable ways if they feel they can get away with it or that it is expected of them, but also capable of resorting to extreme, self-defeating measures if they feel that an injustice has been committed against them. These contradictions should be no surprise to those who are familiar with human psychology and the complicated behaviors of our species. However, by many these contradictions are simply overlooked. Clearly the actions of individuals in these games and in real life indicate that there are elements of utility function which remain tantalizingly intangible, which should call into question any models of economic behavior that attempt to present utility function in quantifiable terms. By all measurements, there really is no evidence that the vetoing by the receiver actually increases his utility at all; instead it indicates that an economic benefit can be costlier to an individual if it offends his inequity aversion. Things like pride or the desire to punish bad behavior are actually weighing in on utility function in a way which can be hard to predict and quantify, yet they have obviously shown themselves to be a very important force in allowing an actor to come to an economic decision.
It must also be said that the results from the dictator and ultimatum games can be remarkably mixed; the only real constants we see are that proposers in the dictator game are more likely to give themselves a greater share, and that receivers in the ultimatum game routinely reject divisions of less than 30%. However, there is still a great deal of individual specificity in just to what degree this occurs in a game, and as stated earlier, slight alterations in the rules and options available to the players can yield different rates of unfair proposals and rejections. We see that human behavior is not a monolith, and while there are trends, it remains incredibly difficult to create a model of a uniform actor in regards to these concepts.
In terms of social policy, we should be keenly aware of the existence of inequity aversion and seek to avoid those instances where inequity exists. And given the evidence that people can be willing to shoot themselves in the foot if it means exacting justice, it is something to be feared. People such as Milton Friedman seem to believe that free market capitalism has all the tools to deal with the complexities of human behavior, including inequity aversion. However, the historical and real life data suggest otherwise. The property and contract rights in free market capitalism can put people in a position very similar to the dictator in these games, and as in the dictator game, some of these individuals choose to act in a way which gives them a bigger piece of the pie.* Workers, perceiving an unfair situation, can resort to extreme measures to correct this injustice, in a way which goes beyond any explanation of simple economic utility and reflects a level of seemingly irrational and emotive behavior within both the capitalist, who knows his deal may anger workers, and the workers, who hurt their own economic utility in an attempt to exact justice.
And of course this is exactly the problem which was occurring in the late 19th and early 20th century. Thinkers like Friedman seem to suggest that courts and contract law could correct this, but this too is incorrect. The property and contract laws of common law countries evolved from medieval and proto-capitalist economic situations, and were more focused on buyer-seller relations and distribution. While they worked well for the markets of the 18th and early 19th century, they really were not equipped to deal with the complexities of labor relations which emerged in the second industrial revolution. Many workers were employed under at-will employment contracts, which pretty much meant that employers could do whatever they wanted. And the reactionary courts of the Lochner era were certainly not sympathetic to labor claims, seeing the capitalist as well within his rights to enjoy unequal bargaining power. The courts of the late 19th and early 20th century were not friendly to labor, and indeed to this day contract and property law is more oriented towards distribution than towards worker rights. There was also the culture of the day, which held the capitalist in high regard. Laissez faire economics puts the entrepreneur and the capitalist on a pedestal and allows them much freedom in their decisions, as it is said that from their freedom to make decisions, economic coordination can occur which benefits all of society. This notion is still very much present among laissez faire advocates today, who celebrate the capitalist and believe that his discretion should be respected at all times, as the decisions he makes fit within a bigger picture which in its aggregate makes life better for all of us.
This should make us all think of what I discussed earlier about how the options presented to the dictator can alter his behavior: if the capitalist is taught to believe that it is crucial that he pursue profits and follow his self-enrichment to the fullest extent, he may not only feel entitled to give himself a greater share of the income from production, but may feel that it is actually expected of him to do so.
And it would be hard to argue that these attitudes and behaviors were not the norm in the late 19th and early 20th century; indeed many capitalists did deal out very unfair terms. This of course created much strife. With an economic model and business culture conducive to unequal distribution of the profits from production, and a legal system seemingly unable or unwilling to address these concerns, this was a breeding ground for inequity aversion to boil up and create unrest. So how did society get out of this dilemma?
The answer of course lies with the new form of law which emerged in that period: the regulation. Unlike equity and contract law, which simply seek to correct a past harm, or penal codes, which outright forbid certain behavior, the regulation deals with behavior which is legal, and merely sets in place guidelines as to how it should be conducted. By putting in place regulations for the workplace, the capitalist was restricted in his options, making him less likely, and less able, to deal out unfair distributions. And with laws in place that guaranteed certain wages and benefits to workers, these unfair results were simply avoided in the first place. The evolution and emergence of regulation as a form of law was meant to directly address the flaws in laissez faire capitalism, one of which was the social strife caused by inequity aversion. The fundamental difference in outlook between regulations and the early common law is that instead of waiting to correct a bad event, the regulation seeks to prevent it from occurring in the first place. This notion is much more in line with the realities of human nature; rather than simply “letting things play out” as laissez faire advocates desire, regulations are made with the understanding that sometimes underlying irrationalities in human behavior can create outcomes which, rather than correct injustice, only serve to make things worse. Instead of falling for the Lockean fallacy that individuals will always try to be pro-social and avoid bad behaviors in a free market, regulations are made with the understanding that humanity is a complicated species which is very much capable of producing undesirable results if environmental conditions permit. Many of the strikes and riots which occurred in that era did not correct any bad behaviors, nor did they yield greater efficiency; they simply represented sunk costs that only hurt society and markets, sunk costs which could have been avoided.
Those who complain about labor regulations and welfare states may not be fully cognizant of the alternative scenarios which could arise if a laissez faire scheme were reinstated. The inconveniences of regulation and taxation (which as of yet have failed to destroy capitalist incentives and growth) may not be mere wasteful spending, but rather important investments which fuel growth by allowing for greater social cohesion and lower incidences of strife. Normal market mechanisms and cooperative exchange are not enough; there is an element of strong-arming in both labor and capital which market mechanisms and common law alone are unable to resolve in a cohesive way. The attempt to create policies which reduce labor tension and create avenues for equity and the reduction of social unrest is one of the main focuses of the highly successful social markets undertaken in the Nordic Model and Rhineland Capitalism. This also adds an objective element to arguments for helping the poor and protecting labor. Some have mistaken earlier arguments I made regarding the welfare state as simply being moralistic ones, on the grounds that we should avoid such harms simply because it is wrong to be “mean” to the poor. But when concepts such as inequity aversion are taken into account, those who cannot be swayed by moral arguments alone should be swayed by purely utilitarian and materialistic ones: ignoring the concept of equity can create social harms which bring about real costs to society that can be quite devastating.
Finally, as one last point, it is important to note that some studies of the ultimatum and dictator games have found that individuals from industrialized countries are more likely to deal out 50/50 splits than those who are not. This ties in to the argument I made at the very beginning of this article about the role that perceptions of equity play in people’s ability to make economic calculations. By having an industrialized economy with regulations, societies are also helping to create moral signaling which guides people as to what an actually equitable deal is. This not only makes us more proficient economic actors, but increases instances of organic altruistic behavior. Laws regulating behavior have an extra aspect to them: while they are unable to govern morality, they can signal to individuals just which sorts of behaviors are acceptable and which aren’t. Just as the capitalist was taught and expected to be inequitable by the culture of laissez faire, the culture of social democracies with regulatory frameworks and welfare states teaches individuals to act in a way which does not give rise to inequity aversion. This fits in with the social organism hypothesis I have outlined before, whereby successful societies through natural selection tend to choose actions which ensure their survival and wellbeing. And if we have a system whereby equitable behavior arises more organically and spontaneously, we will have a better functioning economy and a more cohesive society. And that is something which is better for all of us.
*At first glance this appears similar to Marx’s model of the capitalist firm, whereby the capitalist rips off the worker by stealing his labor value. However, Marx seemed to see this as a structural trait inherent to capitalism that would inevitably occur. In reality the results are a little more complex: some capitalists, like Henry Ford or Costco CEO Craig Jelinek, go out of their way to give their workers a fair deal, while others seem to go out of their way to take as much from their workers as they possibly can. Capital is not a monolith, and this fits with the findings of game theory, which indicate that inequitable deals have an element of individuality to them that varies from person to person.
This article is the first in a series of articles I will publish over time in which I try to point out the inconsistencies and intellectual dishonesty found within laissez faire philosophies. Since much of the anti-government rhetoric we hear today is phrased in terms of economic science, it is important to scrutinize the supposed scientific facts on which it is based. Today I will be focusing on the topic of negative externalities.
Externalities create a special problem for laissez faire economics: if voluntary economic activity between two parties adversely affects an unrelated third party, it constitutes an encroachment or trespass on that third party. And since encroachment on the person is a cardinal sin in libertarian ethics, the existence of such harms would justify some sort of action to correct them.
Free market economists often try to minimize the importance of externalities or create the impression that most externalities are caused by the state. Friedman and Hayek went so far as to label them “neighborhood effects,” as if to imply that they really aren’t a big deal. Dismissing externalities as neighborhood effects is convenient for Hayek and Friedman because it implies that they are: 1) local in scope and confined to a small area; 2) limited to a single activity or transaction at a time; and 3) capable of being dealt with by individual action. For example, the residents of a street can bring an action in equity against a local business whose loud noises in the middle of the night prevent them from sleeping. Thus, state action is not needed because citizens can take care of it themselves.
This conception of externalities is, however, horribly shortsighted. Certainly a nightclub or paint factory creates conditions which local citizens can address on their own through the courts. But what about externalities which affect more than just a few people? What about externalities which affect entire populations, or society in general?
Perhaps the greatest examples of such externalities are pollution and ecological problems. There is no doubt that human activity contributes to these, and that things like smog or water pollution impose adverse effects on others with real economic costs. For example, exhaust from cars causes much of the pollution in cities, and smog can damage people’s respiratory health. Can an individual deal with these externalities in the ways that free market economists expect (i.e. through courts of equity and individual action)?
This would be a difficult thing to do. For one, while no one would argue that car exhaust in cities imposes no real costs on individuals, chances are that on an individual basis the costs are not large. Living in a city, I may only lose around $70 a year to car exhaust, in the form of allergy medicine and a few opportunity costs incurred in dealing with respiratory irritation. That is hardly enough to justify bringing an action in court, where attorney’s fees, court costs and the expense of expert evidence and data (to prove my losses were caused by car exhaust) would greatly outweigh the cost of simply buying some Claritin and dealing with it. This seems to negate the argument by thinkers like Mises and Coase that externalities can be corrected simply by having better defined property and individual rights. Even if that were the case, many externalities are so minuscule at the individual level that the opportunity cost of bringing a court action simply isn’t worth it. In fact, many individuals may not even be aware of what is causing their problem, or know whom to sue to get relief. And when the class of “victims” of such externalities is so large as to encompass an entire population, there is really no reasonable way for the issue to be dealt with by individual actors alone. Yet this is no small matter, for even if my costs are only $70 a year, in a city of millions of people that amounts to hundreds of millions of dollars imposed on the city in the aggregate. Hundreds of millions of dollars of costs which cannot be dealt with in the individual, equity-like way that free market advocates hope for.
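The asymmetry described above can be made concrete with a minimal sketch. Only the $70-per-resident figure comes from the text; the population and litigation-cost numbers are my own assumptions for illustration:

```python
# Hypothetical numbers illustrating why diffuse externalities go unlitigated.
# Only the $70/year harm figure comes from the article; the rest are assumptions.

per_person_cost = 70       # annual harm per resident from car exhaust ($)
population = 1_000_000     # assumed city population
litigation_cost = 25_000   # assumed attorney fees, experts, court costs ($)

# Individually, suing is irrational: the remedy costs far more than the harm.
individual_net = per_person_cost - litigation_cost
print(f"Net benefit of suing alone: ${individual_net:,}")   # $-24,930

# Yet the aggregate harm across the whole city is enormous.
aggregate_cost = per_person_cost * population
print(f"Aggregate annual cost to the city: ${aggregate_cost:,}")   # $70,000,000
```

No individual resident will ever rationally bring the suit, yet the city as a whole bears tens of millions of dollars in costs, which is precisely the gap the article argues individual actions in equity cannot close.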
Now, to get around this problem I could perhaps try to bring a class action suit representing all the citizens of the city, but then again, whom would I sue? Should I sue every car owner? That would include the people I am representing, even myself; would we be suing ourselves? That enters the realm of legal impossibility. Obviously, if an externality can be traced to one source you know who the defendant will be, but when the externality is created by multiple players (such as a polluted river with dozens of factories on its shore) it becomes much trickier. After all, if a river is polluted it doesn’t mean that all the factories are guilty; some may actually be extra attentive and go out of their way not to pollute, while others don’t care. Would it be fair to sue those factories which were not actually causing the harm? Many drivers, too, may go out of their way to drive vehicles which don’t produce harmful emissions. Should I sue only those who don’t have clean cars? It would be nearly impossible to identify and locate all those people, and the notion of enforcing an action in equity against an entire city by court decree alone is rather insane; you would merely be replacing government regulatory activity with a court, but in a much more contrived and dangerous manner.
Perhaps I should sue the city council itself for not having adequate emissions standards. Indeed, in most common law countries this would be the most reasonable course of action. But of course, by doing that I am assuming that the government entity I am suing would or should have the power to create and enforce regulations in the first place, and this is something free market enthusiasts don’t want. If the government were set up so that it had no power to make such regulations, I could not sue it. And given the practical impossibility of suing all drivers, I may have no one to sue at all.
Of course, if I am creative I might sue the car manufacturers themselves for not making cleaner vehicles. This could perhaps work, but it would not give me any immediate relief. I could perhaps get all new models made cleaner, but that doesn’t get the dirty cars off the road anytime soon; the cars which are causing my respiratory distress are privately owned and beyond the control of car manufacturers. Furthermore, the problem of car exhaust may only be an issue in cities themselves; by attacking the auto industry as a whole I could be imposing unneeded costs on people in rural areas where car exhaust is not a major problem.
With all these options exhausted (pun intended), I would most likely be left to my fate, stuck putting up with the encroachment on my person that is car exhaust. And free market advocates would probably agree that I should just do nothing; after all, it’s only costing me $70 a year. But if I live in a city of a million people, the total cost of the externality is actually $70 million, in which case we are talking about some pretty significant costs. Environmental externalities are some of the most insidious ones for this reason: their effects on any individual may be small, but as a whole they can be enormous, and given their nature, they are nearly impossible to correct by traditional legal means. Climate change is perhaps the mother of all externalities, slowly building up emissions which could cause catastrophe in the future.
Even $70 million may not seem like a whole lot to many advanced cities, but these kinds of costs from externalities exist all over the place, and can cause death by a thousand paper cuts if they grow too weighty. Indeed, one of the major causes of the business cycle, in my opinion, is that in a monetized economy many people have the impression that all costs and benefits can be expressed in purely monetary terms; however, at the peripheries we find many costs which are not reflected in monetary terms. And although they are not being figured into monetary calculations, those costs are still present, and in the right set of circumstances their weight can be enough to come crashing in and bring the market’s monetary value back down to its real value.
In dealing with these diffuse, peripheral externalities like environmental pollution, state action is the most logical choice. By enforcing regulations which force actors to conform their behavior, we are actually sending a signal which reflects the real costs and value of the activity. It is simple and easy for a city council to pass a law requiring emissions standards in vehicles, and in fact humanity has set up government for purposes such as these. Government regulation is merely a natural process used to identify and correct costs, like those from externalities, which are not being expressed in monetary calculations. By creating a regulation which imposes a duty on a private actor, the government is actually monetizing those costs so that the real costs are now reflected in markets. And the more our monetized economy reflects reality, the better it will function.
The concept I am conveying here is essentially that of external costs (which one can read about in more depth via the Wikipedia link I provided at the top of the page). As the argument goes, a free market is actually inefficient because it allows business entities to externalize the costs of certain economic activities onto the public; these activities impose costs on society which are not reflected in prices, and thus do not create benefits to the same extent that they create harm. So while free market advocates claim that their system adequately balances costs and benefits, it in fact does the opposite, creating extra, unnecessary costs which hurt not only individuals but markets as well.
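The standard external-cost logic can be sketched with a toy model. All the numbers here are made up for illustration (a linear demand curve and constant per-unit costs; none of them come from the article), but they show the mechanism: when a per-unit cost is dumped on the public instead of being priced in, the market produces units that cost society more than they are worth to the buyer:

```python
# Toy external-cost model with assumed linear numbers (not from the source).
# Demand: a buyer's willingness to pay for unit q is 100 - q.
# Private marginal cost: 20 per unit; unpriced external cost: 30 per unit.

def demand_price(q):
    return 100 - q

private_mc = 20
external_mc = 30
social_mc = private_mc + external_mc   # full cost to society per unit

# Market output: production continues while willingness to pay covers
# the PRIVATE cost only.
q_market = 100 - private_mc        # 80 units

# Efficient output: production should stop once willingness to pay no
# longer covers the FULL (social) cost.
q_efficient = 100 - social_mc      # 50 units

# Every unit between 50 and 80 destroys value: it costs society more
# than the buyer values it. Summing that shortfall gives the deadweight loss.
deadweight = sum(social_mc - demand_price(q)
                 for q in range(q_efficient + 1, q_market + 1))

print(q_market, q_efficient, deadweight)   # 80 50 465
```

The point of the sketch is the gap between `q_market` and `q_efficient`: the externalized $30 per unit is exactly the cost the article says a good regulation would force back into prices.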
Pundits complain that regulations create costs for businesses, but that is exactly what they are supposed to do! A good regulation takes these real costs and makes sure they are expressed in monetary terms and taken seriously by firms; this reduces externalities and actually improves the sovereignty of the individual, who no longer has externalities imposed on him. Libertarians should be embracing regulation if they truly care about the individual. Markets benefit from regulation, because externalities ultimately impose costs on markets; governments with regulatory powers and markets go hand in hand and always have. The laissez faire characterization of collective action as emotion-stirred mobocracy is pure nonsense; collective action exists for a real reason, serving a vital role in guiding markets and reflecting true costs. The libertarian characterization of government action often found in the Austrian School or Public Choice Theory is nothing more than a conspiracy theory designed to cast government in a negative light, rather than an objective view of collective action and its role in human society.
Now, it is true that some regulations are done in a poor fashion, and some may not be needed at all. But then again, many business actions are done in poor fashion and are not needed. Humanity is an imperfect species, and individuals, firms and governments are imperfect actors. As long as corrective measures are in place (for businesses, the market can decide if an idea is bad and deserves to fail; for governments, democracy, public opinion and successive administrations can correct bad policies), in the long run society should be fine, and the existence of regulatory institutions like governments is a good thing.
Thus externalities are downplayed and minimized by free market advocates for a reason: a thorough evaluation of them reveals a fundamental flaw in free market logic which actually creates a case for government action. However, since laissez faire economists have already made up their minds from the start that government is bad (as opposed to exercising true scholarly open-mindedness), they try to find a way to conveniently brush over this phenomenon and minimize it to the greatest extent possible. But any logical and comprehensive review of externalities indicates that these are real costs with real effects: they can hurt not only the individual but society as a whole, and can have negative effects on markets. The most rational way of dealing with many externalities is not to leave them to courts and individual actions in equity, but to have a governing authority with the power to regulate and address these costs. And by addressing these costs, government sends signals and induces behavior which makes these real costs actually reflected in the monetized market, which makes markets healthier and more reflective of reality, a win-win for collectivist and capitalist alike. The attempts at downplaying this phenomenon are little more than intellectual dishonesty on the part of laissez faire advocates, who are more interested in enforcing a social ideology than in creating a prosperous and stable economic and social atmosphere.
Libertarian philosopher Roderick Long had a recent blog post entitled Against Maslow in which he attacked Maslow’s hierarchy of needs. He started off by saying, “To say that food and safety are more basic needs than reason and morality is essentially to say: ‘I am untrustworthy and will stab you in the back when the chips are down.’” He then went on to quote Aristotle, Seneca and Cicero to the effect that it is not natural for man to profit from his neighbor’s loss, because then he would feel bad about himself.
Now, I can understand not taking Maslow’s hierarchy of needs literally; obviously there are individual differences, and one can certainly take issue with the order in which he ranks the needs. But I still believe that the overall message of Maslow’s hierarchy rings true: individual moral and altruistic behavior can be amplified or repressed depending on whether a person’s basic needs (food, shelter, friendship, etc.) are being met. If you deprive people of some of their basic needs, we should expect to see more anti-social behavior; if you make sure individuals are provided with those means, we should expect to see more pro-social behavior.
However, Long seems to think this isn’t the case: that morals and reason come first and foremost, that a starving man would not rob his neighbor to fill his belly but would rather starve to death, because stealing would be immoral and the pain of guilt would be worse than death. That view is absurd and utterly ignorant of just about everything we know from the behavioral sciences, history and our own personal life experiences.
In battle, individuals acting in self-preservation are capable of great cruelties; in situations of starvation like the siege of Leningrad, people regularly stole, hoarded and even killed others to cannibalize them. In prisons, individuals display distinctly more anti-social behavior than on the outside (and it’s not just because they are anti-social to begin with; even “normal” people can engage in it in such environments). The famous Stanford prison experiment took student test subjects and placed them in a prison environment, some as guards and some as prisoners. The study had to be cut short because within a few short days the prisoners rioted and the guards became abusive, and keep in mind these were NORMAL people from the outside put into this experiment.
There are countless other studies which overwhelmingly indicate that human altruism and morality is profoundly influenced by the environment which we find ourselves in. Starvation and desperation can bring out the worst in people. Deprivation and depravation can cause people to wildly violate previously held moral beliefs. Human morality is directly tied to our survival situation.
Roderick Long doesn’t look like he’s ever experienced true starvation (though he must be hungry a lot), and I doubt he’s ever experienced true desperation. Thankfully I, like most people in the developed world, have never had to experience it either. However, unlike Roderick Long, I am educated in the social sciences and am very familiar with what we know about human behavior: a person’s conception of what is morally acceptable is directly tied to their current situation, to which needs are being met and which ones aren’t. What Roderick Long is saying, however, is that all people are inherently moral no matter what, and that if they do bad things they are simply bad people, but most people are good. This is especially telling given that he advocates policies which would, in all likelihood, place certain individuals in desperate situations.

He completely ignores the social and behavioral aspects of these things, and how anti-social behavior in poor communities actually increases the cost of living for those individuals. Instead, Long, like many libertarians, is quick to dismiss the impoverished as morally defective, using higher rates of dysfunctional behavior as evidence. In reality it is the other way around: being subject to deprivation raises the rates of anti-social and dysfunctional behavior, creating a feedback loop that can be nearly impossible to escape. It is important that people understand the social sciences, because they reveal the truths of human behavior to us. Charlatans like Roderick Long are either completely ignorant of the social sciences or in such complete denial of them that it amounts to delusional thinking. The scary part is that these people put themselves forward as legitimate academics whose policy recommendations are based on science. Nothing could be further from the truth.
Libertarians can get very sensitive about accusations that they don’t care about the poor and downtrodden. Many of them go out of their way to say that even though they oppose the welfare state, they don’t think poor people should starve or suffer undue hardship, and that charity and the forces of the free market will be able to make up for the decrease in welfare payments. But is that true? Would getting rid of the welfare state and having a “free” market actually help the poor? Or would it only make things worse for them? Let’s analyze.
What does the Welfare state do?
The welfare state provides payments and benefits to qualified individuals. Welfare states are found in countries all over the world. The most common forms of welfare are: healthcare payments and subsidies, pensions for retirees, workers’ compensation, food stamps and subsistence payments to lower-income households.
Libertarians argue that these programs are not needed, that the individuals receiving benefits would be able to obtain sustenance on their own in a free market, and that in fact the welfare state makes them dependent upon the state. But is that really true?
The average recipients of the so-called “entitlements” are elderly and disabled individuals who are unable to be gainfully employed due to infirmity and old age. Medicare and Social Security, by far the two largest federal benefits programs, cater entirely to people who are either over age 65 or have met the designation of disabled. These individuals are physically unable to work, and even in a “super free” market their physical limitations would prevent them from obtaining income through employment. The bulk of the welfare state is in fact geared towards such individuals, and the increase in welfare spending which we commonly see on charts and graphs put forth by the right wing correlates more with the overall rise in the median age of the western world than with an expansion of government into our lives. The notion of the lazy poor “welfare queen” is largely a myth; the average recipient of public assistance is elderly or disabled.
The proportion of welfare state spending is correlated with median age. It appears that with many people growing older, and the financial crisis wrecking pensions and 401(k)s, we have many elderly and disabled individuals who simply do not have the capacity to independently provide for their means. Even within state-level welfare programs, the majority of recipients are elderly and disabled individuals who are unable to obtain employment in the private sector due to physical impossibility.
In other words, the bulk of welfare state spending is not in fact “discretionary” gift-giving done for fun; these are sunk costs reflecting conditions in the real economy. If the welfare state vanished overnight, these individuals would still need food, shelter and medical care. The old adage “there’s no such thing as a free lunch” still holds true. Saying that cutting welfare spending erases those costs is a fiscal illusion; the real costs of having an aging society and a large number of people who cannot physically provide for themselves will still be present. They will simply be shifted elsewhere.
Can Charity and the Free Market really make up for it?
“Wait a second,” says the libertarian, “even if the welfare state is mostly covering sunk costs, shifting those costs to the private sector would be a good thing, because charity and the free market are better equipped to provide for people.” This is the dominant rhetorical strategy you hear about what could replace the welfare state. They claim that the power of “voluntary” behavior, via capitalist markets and charity, would come up with better, more creative and more efficient ways of dealing with these sunk costs. This argument can be incredibly seductive. But is it just a bunch of bullshit?
For the vast majority of the recipients themselves, obtaining wage labor to cover the costs of living is simply not an option, because they are physically unable to engage in productive labor. The idea of a regulation-free economy curing this is preposterous; their physical condition keeps them from engaging in such activities. Thus they will always be dependent on others to obtain the means of survival.
One way the non-working poor might obtain the means of subsistence in a non-welfare-state economy is charity. The laissez faire advocates say that charities are more efficient, in that individual “social entrepreneurs” can better calculate and plan a competent strategy for helping the poor. Because charities tend to be issue-oriented (addressing singular problems like cancer research or running a soup kitchen), they can better target specific problems and eliminate them in a way which top-heavy government cannot. Thus, through the same “voluntary” framework as the capitalist profit sector, charity allows for more innovation and creativity in addressing these needs, according to the laissez faire advocate. The libertarian also celebrates charity because it is “voluntary” while taxation is not; the moral purpose behind it is therefore more “legitimate” in their eyes.
This could not be further from the truth. Empirical measurements of the efficiency of charities are not encouraging. The levels of fraud and waste in charities are often higher than those seen in government finances. While some charities are more efficient than others, the notion that they are somehow more efficient than government is a myth; many charities have a dollar inefficiency rate as high as 95%. This is even more damning when you take into account that many charities falsify or misrepresent their data. Unlike public expenditures, which are subject to review by the population via things like the Freedom of Information Act, charities are privately run and have the ability to distort their figures and hide them from objective evaluation. The studies which have looked at charity as a whole paint a bleak picture of rampant inefficiency, fraud and abuse. Sending your dollars to charity is certainly no more efficient than having them work through the welfare state, and in fact the data suggests that charity is far less efficient than public welfare.
Of course, this measure only captures dollar effectiveness: how much of the money you give is used for the designated purpose of that charity. And the designated purposes of many, if not most, charities have nothing to do with alleviating the sunk costs which the welfare state addresses. Many charities, like the Make-A-Wish Foundation, not only post dismal efficiency numbers; their actual purpose (giving children with cancer “one last wish”) has nothing to do with helping the poor. The same goes for charities which fund cancer research, clean up the environment or help homeless pets. Even those charities which do seem to target the poor (like soup kitchens for the homeless or vaccines for impoverished African villagers) are not addressing the whole problem: the non-working poor suffer from a lack of consistent income, and giving them soup or medical care does not fix that. Now, one could argue that this is simply because the government is crowding out those charities which would provide income to the poor, but this too is an unsupported claim. Before the welfare state existed there were no charities which could provide fixed payments to the poor and elderly as dependably as something like Social Security does, and indeed if such charities had existed we would never have needed to create a welfare state in the first place! The focus of charities before the welfare state (and in countries which have much weaker welfare states) is no more oriented toward the poor than in countries which have strong welfare states. There is no indication that the welfare state is crowding out charitable giving, and in all likelihood the welfare state is performing a task which charity is unable to perform. Thus the welfare state is not crowding out charity, but complementing it.
Even if one were to ignore the significant statistical evidence suggesting that charity is less efficient than the welfare state, there is still great doubt that charities could replace the welfare state in the libertarian laissez faire economy, because they would lose their taxation subsidy. Truth be told, most of the private “voluntary” charities are quite heavily subsidized, sometimes through direct contributions from the state, but even more so through favorable tax policies. Charities, as non-profits, are not taxable in the United States. This stands in contrast to for-profit entities and individuals, who do have to pay taxes. In such an environment, the entities deemed un-taxable are in effect being subsidized by the state via the supply side. Say we have five people. I could give one of them $10 and thereby subsidize him; alternatively, I could take $10 from each of the other four and take nothing from him. In either case, my actions make that person $10 richer than he would have been had I not singled him out for special treatment. Tax exemptions, when applied in a disproportionate way such as this, do in fact act as subsidies whose real effects differ little from direct subsidies. There is also the even bigger factor that charitable contributions by individuals and businesses count as tax deductions. That is to say, if you give a portion of your income to a charity, you can deduct it from your taxable income. This is a powerful tool that incentivizes charitable giving, and many people take advantage of it to lower their overall tax bill, perhaps even bumping themselves into a more favorable tax bracket. Thus, when charitable deductions are taken into account, charities are even more heavily subsidized by government action than the tax exemption alone would suggest. Return to my example of the five people: say I make a rule that the other four will be taxed only $5, instead of $10, if each gives $3 to the tax-exempt man. This lowers their overall tax burden and lets them keep more money. The tax-exempt man now gains $10 from the taxation subsidy plus $12 from the contributions of the others, so my rules alone have enriched him by $22, while each of the four pays out only $8 in total instead of $10. Charities are extraordinarily subsidized by our income tax rules.
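The five-person example can be worked through explicitly. All the dollar figures here are the toy numbers from the text itself ($10 base tax, $5 reduced tax, $3 donations):

```python
# Toy model of tax exemption plus deductible giving, using the text's numbers.
# Five people: one is tax exempt, the other four are taxpayers.

base_tax = 10      # what each of the 4 taxpayers would normally owe ($)
reduced_tax = 5    # their tax if they donate ($)
donation = 3       # donation each taxpayer makes to the exempt man ($)

# Effect of the exemption alone: the exempt man keeps the $10 others must pay.
exemption_subsidy = base_tax                   # $10

# Effect of the deduction rule: four donors each give $3.
donations_received = 4 * donation              # $12

total_gain = exemption_subsidy + donations_received
print(f"Exempt man's gain: ${total_gain}")     # $22

# Each donor pays $5 tax + $3 donation = $8 instead of $10, so the rule
# costs each of them only $2 while steering $3 apiece to the exempt man.
donor_outlay = reduced_tax + donation          # $8
print(f"Each donor's total outlay: ${donor_outlay}")
```

The striking feature is the leverage: each taxpayer is $2 better off than before, yet the exempt man ends up $22 ahead, which is the sense in which the tax rules themselves do the subsidizing.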
However, libertarians actually oppose income taxes, and most of them would like to see them disappear, replaced by sales or consumption taxes. This would destroy most of the government subsidy to charity. If sales taxes replaced income taxes, charities would pay tax on the commodities they purchase to achieve their purposes; rather than being subsidized through tax exemptions, they would now be losing money by being taxed like everyone else. Furthermore, with the abolition of income taxation, the incentive of the charitable tax deduction is destroyed, so they lose even more money. Absent some kind of corrective system allowing government to reimburse charities for their expenses via a rebate of sorts, the charity is at a substantial disadvantage, and even if a reimbursement measure existed, the charity would still be at a substantial loss because charitable contributions would be lower. Thus, under the libertarian tax system, charities would have a substantial deficiency of capital inflows and in all likelihood could not operate on the same broad scale they do under our current tax system. This, combined with the inefficiency, the poor targeting of the poor, and the fraud which naturally exists in charities, makes for a substantial loss of aid to the poor. The idea that charity could replace the welfare state, or even begin to approach its effectiveness, in a libertarian economy is thoroughly preposterous. However, these facts won’t stop the libertarian from arguing his next point, which is to play up the ability of the free market to help the poor.
The free market
The libertarian will say next that charity would not be needed as much in the laissez faire economy, because deregulation will increase the productivity and opportunity which allow poor families to earn more income. The poor disabled elderly person will be able to live off the money her family members give her, and since the free market increases their job opportunities, the families of these people will be able to support them and they will be just fine.
This argument, like the charity one, is bullshit. First off, what if a disabled elderly person has no children? Are they to starve and be ignored? Are they to beg assistance from strangers? Are they to look for charity? Well, we already know the answer to the charity question, because chances are their sources of charity have dried up due to an unfavorable tax code. No, in all likelihood the childless elderly would have no one to look after them, and they would be forced into utter poverty and humiliation, begging for pennies on a street corner. There is no doubt that the quality of their lives would be substantially reduced in the libertarian economy.
However, those who do have children create an even more insidious problem. By having to support their parents, the children are forced to begin work at a young age. The fixed costs of living (shelter, food, medical care) are not going to go away, and they require immediate attention. This means that instead of going to school and investing in themselves, the children of poor families must begin work at an early age, oftentimes in unskilled labor, as those are the only jobs for which untrained, unprofessional young people are qualified. This traps them in diminished opportunities, because rather than making long-term investments in themselves, they are forced to cover the costs of their elderly or disabled family members. Education requires large investments which do not yield a return until well into the future, but grandma’s medical bills can’t wait that long; they demand significant payments in the present. Thus, going to college and then medical school to become a successful doctor is simply not an option for many of these children. Even if they are smart and capable of doing so, the time it takes to see a return from these self-investments is simply too great, and the immediate expenses of paying for their disabled family members take precedence over any goals and dreams they have. Instead of going to college, they trim bushes or flip burgers to pay for the here-and-now costs of their disabled family members. Furthermore, with all of their income going to the fixed costs of supporting a disabled family member, the children of poor families are prevented from building savings of their own. Even after their disabled or elderly family members pass away, the child finds himself in the middle of life with an inadequate nest egg. And because he has no opportunities to climb the ladder to higher income (having never had the chance to get an education), the child is himself at risk.
Because he has saved less money, he must delay retirement as late as possible. This creates downward pressure on the job market, increasing unemployment among the youth and forcing them to bargain down their hours, thus depressing wages and overall income for these families. After a long, hard life of working for minuscule wages, the child retires with very little savings to show for it. This lack of investment and savings means that when the child reaches old age or becomes disabled, he too must rely on the assistance of his children. Thus a vicious cycle emerges which keeps families in poverty for generations. The laissez faire economy creates a permanent underclass, and indeed observations of the real world confirm that laissez faire free market policies create this.
It is a bitter irony that vulgar libertarians like Hans-Hermann Hoppe claim that the poor are simply poor because they have “high time preference,” as if to imply some kind of moral defect, because the conditions in which the poor are placed create this high time preference in the first place. This cycle creates a Charles Dickens society, and indeed the world of Oliver Twist had the very economic policies which libertarians seek to enact. However, this is also a substantial loss for all of humanity, as human beings are being under-utilized. It is a profound waste of human capital, one which causes a potential Einstein or Marie Curie to waste away in a factory or fast food joint. In such a situation, not only is the intelligent poor person losing out, but society is losing out as well by not having her human capital put to its most efficient use. Such an economic system is not only a crime against the poor, but a crime against all of humanity.
The libertarian may still try to argue that the lack of regulations gives the poor more opportunity to be creative; someone like Lil Wayne lifted his family from poverty through a capitalist system which allowed for free enterprise. This argument falls woefully on its face, however. The increase in opportunity from deregulation would be slim; US markets are already quite liberal, and there are very few productive opportunities which would be opened up if they changed. Furthermore, most start-ups require capital investment to get going, and capital for the poor is scarce in this environment; most of it is held by the rich and only used to invest in activities which enrich the financiers. So someone like Lil Wayne may be able to make money by performing some trick if it helps out a rich record label owner, but the overall availability of capital to the poor is quite scarce. And when coupled with the vicious, time-preference-increasing cycle described earlier, there is no doubt that laissez faire actually decreases opportunities substantially for the poor.
Furthermore, the deregulation includes the curtailing of civil rights legislation which allows people to work free of discrimination. The point of laws like the Civil Rights Act and the Americans with Disabilities Act is to make sure that the most qualified person gets the job; employers cannot refuse to hire solely on the basis of things like race, age, or disability. If these laws were repealed, there is no doubt that there would be an increase in discriminatory behavior, making job opportunities even more scarce for these people. A poor African American family with a disabled elderly family member to support would be at an extreme disadvantage. Thus, the idea that deregulation and charity can provide better than the welfare state is absurd. There is a reason why populations all over the world have chosen to have welfare states: welfare states simply work better than the alternatives.
In conclusion, we can see that the welfare state is not merely a set of payments made on a whim out of emotive and naïve concerns. The welfare state is mostly paying for unavoidable costs which would exist regardless; to say that cutting these programs saves money is but a fiscal illusion, as the costs would merely be shifted onto the private sector. And once shifted onto the private sector, they disproportionately hurt the poor, the disabled, and minorities in a way that is thoroughly unconscionable. There is a reason why rates of physical and mental illness are higher among the poor, and it is not because they are somehow morally defective, but because our economic system is morally defective. Although our system could be doing even more, the existence of a welfare state has greatly mitigated these effects and improved the lives of millions of poor people, giving them both relief and the opportunity to make investments in themselves to lift themselves out of poverty. It is the backbone which keeps the middle class alive, but unfortunately it is under attack.
There is one demographic which does shoulder a heavier share of the burden of the welfare state, and that is the upper and upper-middle class: affluent, mostly white members of society. There is no doubt that if the laissez faire economy were put in place, these people would see more opportunity. Because the potential Einsteins and Marie Curies of the lower class are wasting away their lives in menial labor to support their family members, a surplus of opportunity is created for the affluent and privileged. Simply put, because there are fewer poor people taking jobs in the higher-paying careers which require substantial self-investment, there are more of these jobs available to the wealthy who can afford to make those investments. Also, the lower tax burden on the affluent means they get to keep more of their money, which means they get to control more of the capital in society. As the laissez faire economy is entirely private (meaning that all capital must come from private individuals), this demographic would gain even more social control by controlling most of the capital and all production. Thus, if the poor wish to advance themselves, it can only be done through investment of capital coming from this class of wealthy white men, and they are only going to make those investments which benefit them. All production is done to enrich this class of people; everyone works for them. Not only do affluent white males gain more economic strength in this model, they also gain more social power by being the sole source of capital for all production. This becomes even more curious when you consider that the majority of the advocates of laissez faire, and the intellectual thinkers behind it, are in fact affluent white males, the very demographic which would benefit the most from the system they advocate. Perhaps they are simply well-intentioned but misguided thinkers who legitimately believe that their system helps the poor.
But a more likely explanation is that they simply have lopsided priorities: they value the benefits this system gives them more than they weigh the burdens it saddles on the poor. They know that there are substantial holes in their argument that laissez faire helps the poor, and they simply choose to ignore them.
Just as the advocates of slavery and segregation claimed that their systems helped blacks by upholding “the natural order of things,” the advocates of laissez faire claim that their policies will help the poor by doing the same. However, in reality, just like slavery and segregation, laissez faire is nothing more than a system which benefits affluent white males at the expense of everyone else. These individuals seek to reverse the great progress we have made; they seek to misinform the populace and make false claims. We must arm ourselves with the truth, for if we lose these laws and programs, it may be years before we ever get the chance to regain our progress. America, it seems, cannot afford to repeal the welfare state.