A Return to Gainesville

Returning to the issues of today has been overwhelming.  Last year, I suffered a health crisis that nearly took my life.  I’ve since returned to a better place, though I’m not yet at full strength.  So many issues are current and pressing that it’s difficult to know where to begin.  The separation of children from their parents by Trump’s border gestapo seems in need of triage, though Trump seems to have understood that harming children isn’t a reasonable means of coercing cooperation from Democrats on wall funding.  We could examine a myriad of issues, including North Korea, DuPont’s coverup of the dangers of Teflon, Scott Pruitt’s $43K phone booth, the ongoing Mueller investigation and Trump’s repeated witness tampering, and so on.  But instead, I’d like to talk briefly about a journey I made recently.

Home Again

To support my best friend through a difficult loss, I returned to my hometown of Gainesville this past month.  Cathartic and lengthy, the visit gave me time to take a good look at how the city of my youth has changed in the eighteen years since I lived there, along with reunions with my college history professor, Pat Ledbetter, a faculty member at North Central Texas College (NCTC), and my high school calculus instructor, E. Clyde Yeatts.  It so happens that my twenty-year class reunion took place while I was there, as did a town hall by Beto O’Rourke, the Democratic representative from El Paso and now the candidate facing Ted Cruz in the upcoming U.S. Senate election.  I attended the latter, eschewing the former.  The town hall was lively and energized, though a fair amount of shallow, rally-around-the-flag banter and gladiator hero worship persisted.  I did manage to query Beto on economic issues during the question-and-answer session, available around the 50:00 mark in the recorded version.  The issues raised there, along with a drawn, sober look at my city of origin, are the topics of this post.


Beto, Piketty, and Income Inequality

My question specifically asked what approach one might take in addressing income inequality, something we all understand, at least to first order.  I referenced Thomas Piketty, the eminent French economist with rather dire predictions for industrialized nations with respect to the current balance of rents and labor.  In Capital in the Twenty-First Century, he describes an economic dystopia in which the super-wealthy need not invest in labor, as the return on investment (ROI) for capital alone exceeds that of productive investment.  Put more simply, money by itself makes more money than any kind of investment benefiting the proverbial middle class, so investing in manufacturing, science, health care, and so on simply isn’t as lucrative as investing in real estate, credit cards, and the like.  Piketty describes this as a grim portent of political instability, as a larger and larger share of household wealth becomes inherited.  Once all capital is owned and controlled by heirs, the extreme poverty imposed on working-class and indigent people reaches a breaking point.  It’s worth considering that fact for a moment.
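
Piketty’s shorthand for this dynamic, paraphrasing the book rather than quoting it, is the inequality

r > g

where r is the average annual rate of return on capital (rents, interest, dividends, profits) and g is the annual growth rate of output and incomes.  When r > g persists and the returns are largely reinvested, wealth compounds faster than the economy grows, and inherited fortunes outpace anything that can be earned through labor.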

The Role of Public Relations

Since the dawn of the public relations industry, sociologist Anne M. Cronin suggests, we’ve been told rather feverishly that occult forces, be they God, the market, patriotism, and so on, require that we accept the station to which a would-be wise corporate and political elite may assign us; it’s a kind of brainwashing, guaranteeing society-wide compliance with historically entrenched tyrannies.  Walter Lippmann said it best:

[t]he public must be put in its place…so that each of us may live free of the trampling and the roar of a bewildered herd.

The framers believed this problem compelled the formation of a strong government:

[t]he primary function of government is to protect the minority of the opulent from the majority of the poor.

Edward Bernays, nephew of Sigmund Freud, spearheaded wartime propaganda from within Woodrow Wilson’s administration, exploiting his uncle’s pioneering work in psychoanalysis to shape public attitudes.  As the Great War ended, he recognized that these same mechanisms ought to apply to peacetime attitudes as well; he successfully increased revenue for the American Tobacco Company after originating the “torches of freedom” campaign, a means of convincing women to smoke by equating the cigarette with a penis.  Subsequent to this and similar campaigns, Bernays emerged a superstar and corporate darling, fathering public relations, a watershed science capable of convincing very large numbers of people to purchase the unnecessary and ignore the uncomfortable.  These facts are public record, and they’re invaluable in partially contextualizing how my hometown has deteriorated since the post-war boom.

Gainesville, Boom and Bust

Gainesville prospered significantly following the second world war, benefiting principally from railways constructed in the nineteenth century and the intersection of two significant highways, one being I-35, a federal interstate running from Mexico to Canada, the other known locally as state highway 82.  Armco Steel and National Supply, steel and oil pump manufacturing companies, co-owned a plant where my grandfather and many other Gainesvilleans found employment.  The population dipped with the closing of the plant in the 1980s, and big retail became bigger retail, closing hosts of smaller stores.  Thus, the Gainesville of my childhood was economically stagnant, and downtown rapidly became a series of vacant buildings and warehouses.  My grandmother recollected to me that the town’s municipal leadership deliberately limited growth, though I’ve not been able to corroborate that.  Principal employers in the city were the local college, the independent school district, Weber Aircraft, and retail, grocery stores, and restaurants.  My mother worked for Weber for a couple of years, but the lack of jobs led my family out of town during the 1990s, though we returned for my high school years.

Gainesville Gambles, Floods, and Stagnates, Regressing Toward Trump?

In the years since I left town, the Winstar Casino opened across the Red River in rural Oklahoma.  Despite the pervasive religious overtones of Cooke County, its primary source of employment is now said casino.  Crime rates spiked after my exit, though they settled downward in the decade that followed; a devastating flood of Biblical proportions struck the town in 2007, killing some residents and destroying considerable property.

Is it possible that many of the run-down buildings I saw on my recent visit were condemned after the flood?  I’m uncertain, but the demographics have changed, the poverty seems deeper, and Gainesville seems more dangerous to me.  On the other hand, some downtown stores have appeared, and there are places in town reminiscent of a more economically rich history.  In any case, should Gainesvilleans accept their station on the strength of the word of Trump or some other demagogue?  Considering that capital is more valuable than labor, the future is grim for a city with a quarter of its residents living below the poverty line, per City Data; a large number of toddlers and slightly older children fall into this category.  Eight in ten Gainesville voters selected Trump in the 2016 election, a figure that must include a large fraction of that impoverished quarter.  With such striking numbers, I may very well know personally every person in Cooke County who didn’t vote for Trump…

Where Next?

To escape poverty and the lack of jobs, many of us expatriated elsewhere; I also find the sharp support of far-right politics repellent.  To that end, I found employment in the Mid-Cities, then Atlanta, and finally Seattle.  My good friend Pat Ledbetter, whom I’ll interview for this blog in the days ahead, mentioned to me that Texas needs “thinking people” now more than ever, and considerably more so than does Seattle.  Though a permanent return to Texas isn’t on the radar, I do think it’s time to offer aid to my city of origin.  Beto’s campaign seems a good starting place.  He, like Bernie Sanders before him, refuses funding from PACs, relying instead on local and individual donors.  Perhaps there’s more to be done.  Pat, my former teacher Clyde Yeatts, and I will have to cogitate…

 

War No More : A Book Review

The Chalice of War

Recent events with respect to our so-called enemies abroad, including Donald Trump’s

  • fruitless, impeachable knee-jerk bombing of Syria earlier this year, an act whose legal justifications rival its effectiveness and stated objectives in vacuousness,
  • inflammatory posturing toward Iran in an incredibly dangerous perpetuation of Washington’s Iranian foreign policy over the past thirty years, and
  • saber-rattling against North Korea as tensions escalate, virtually ignoring the long-proposed nuclear freeze articulated by Noam Chomsky, a proposal requiring the seemingly impossible act of American military retreat from that part of the world,

underscore the precarious position in which we find ourselves in our 200,000-year run on this planet.  In the midst of these tumultuous times, a specter looms over virtually all mainstream discussion, so far out of mind as to recall moronic climate change denialism, differing in that most Americans, whether convinced by the overwhelming scientific evidence or not, are at least aware of that debate.  The bias should be clear from Trump’s illegal attack on Syria : articulate opinion fell virtually into lockstep admiration of Trump; the New York Times, for example, remarked,

in launching a military strike just 77 days into his administration, President Trump has the opportunity, but hardly a guarantee, to change the perception of disarray in his administration.

Glenn Greenwald of The Intercept pointed out this and many other instances of elite media reversal on Trump the instant the bombs began falling.  There exists a chalice of war, and Americans have been drinking deeply from it since the second world war; the mindset is pervasive, infiltrating our holidays, movies, video games, and most state-sanctioned celebrations of patriotism, whatever that actually happens to be.  Believe it or not, it hasn’t always been this way.  And there are a few voices rising above the rest to remind us.

David Swanson : Today’s Eugene Debs

I first encountered David Swanson’s works in the early days of George W. Bush’s warring administration. I had learned in college about the myriad military misadventures of American presidents, including

  • Harry S Truman’s illegal war of aggression in Korea, events out of which the brutal North Korean regime emerged,
  • Dwight D. Eisenhower’s acts of aggression in Guatemala to combat nationalism,
  • John F. Kennedy’s raving mad stance toward Cuba (to be discussed in an upcoming article in The Spanish Pearl series), and aggressive war against South Vietnam,
  • Lyndon Johnson’s lying about the Gulf of Tonkin incident to promote war in Vietnam and support of Israel’s illegal invasion of Lebanon,
  • Richard M. Nixon’s aggressive wars in Cambodia, Laos, and Vietnam, as well as the overthrow of Salvador Allende in Chile on September 11, 1973, the first so-called “9/11”,
  • James E. Carter’s support of Indonesian dictator Suharto in committing genocide against the East Timorese,
  • Ronald W. Reagan’s invasion of Grenada (a tiny, defenseless island nation), bombing of Libya, drug runs in Colombia, war-making in El Salvador and Nicaragua, and propping up of Iraqi tyrant Saddam Hussein as a shield against Soviet influence in Iran,
  • George H.W. Bush’s invasion of Panama and escalation of the Gulf War,
  • William J. Clinton’s bombing of Serbia in 1999 despite warnings of heavy casualties among fleeing refugees,
  • George W. Bush’s invasion of Afghanistan and Iraq, the latter of which Chomsky labels the supreme crime of the 21st century, and
  • Barack Obama’s international drone assassination campaign, killing perhaps thousands of civilians in Yemen, Somalia, Pakistan, and Libya,

and the list could include crimes committed before 1945, though we’d require another article.  Suffice it to say that George Washington’s extermination of the Iroquois, Andrew Jackson’s mass murder of natives, the destruction of native food sources under Ulysses S. Grant, and the invasion and occupation of the northern half of Mexico under James K. Polk are but a few instances in the legacy of bloodlust the Europeans bore, and continue to bear, in conquering the western hemisphere.  We more recently examined the Spanish-American War as a light case study, and with this large body of historical evidence, it seems pretty clear another approach is warranted, especially when considered alongside the forecast of virtually every credible intelligence agency in the world : violence generates rather than diminishes the threat of what we like to think of as terrorism.

David Swanson has long argued that not only is there an alternative to war, there is no alternative to peace.  A modern-day Eugene Debs, this philosopher and activist has traveled the nation and the world to promote an ideology and a dialog badly lacking in elite support.  Of interest in this article is his 2013 book War No More : The Case for Abolition.  In it, Swanson adeptly confronts many of the persistent myths, including the inevitability of perpetual war, the humanitarian war, the defensive war, the stabilizing war, and the like.  He also explains, quite effectively, the post-war shift of American culture in his earlier work War Is A Lie.

A Culture Drunk on War

Long before the Japanese bombing of Pearl Harbor in December 1941, Americans were often conscripted only very reluctantly into battles fought for elite interests, as discussed earlier in the case of the Spanish-American War.  We know now that desertion and reluctance to fire weapons at other human beings resulted in colossal ammunition waste in most of the wars through the twentieth century.  The psychology is simple, Swanson explains :

[m]ost human or primate or mammalian conflicts within a species involve threats and bluffs and restraint.

War is unnatural, he argues, citing further evidence that the grooves left in early human skeletal remains are bite marks from the large land-dwelling predators we’ve since driven to extinction rather than battle scars from tribal skirmishes.  This echoes earlier commentary on the worst of the native violence Columbus encountered on his expedition : light sparring with sticks and the like, only very rarely resulting in serious injury.  The violence the conquistadors wrought upon the natives was something else entirely.

In any case, Swanson remarks that since the second world war, the military has become increasingly efficient at indoctrinating soldiers to kill.  A parallel public relations program has glorified war in film, print, and now video games, often with heavy consultation from weapons manufacturers and military personnel.  One need only look at the preponderance of blockbuster films these days to experience the influence.  Further still, military recruiters routinely lie about and glorify the military way of life, enticing the poor with a phony carrot rather than the stick of the draft used in earlier wars.  As before, the poor fight and die while elites shield themselves from the fighting.

None of this should come as a surprise, as only a small percentage of human beings can truly stomach killing others.  That percentage is large enough that we routinely hear of such violence in our population, but, as Swanson often suggests with surgical rhetorical precision, imagine if the news stations spent as much time on nonviolence as they do on violence.

Swanson helped me begin to identify the tremendous propaganda toward state violence after I read his comprehensive 2010 book War Is A Lie; I had noticed in recent years, something he systematically demonstrates in his works, that a large fraction of cinema previews included a vast array of military tools, soldiers, and their deployment to the “battlefield,” a term Swanson very cleverly exposes as an archaism.  He points out that virtually every popular video game on the market features extreme amounts of gun violence and murder; though I am indeed a great fan of the game Skyrim, virtually anyone paying attention to the gameplay mechanics should notice that both men and mer would face imminent extinction under the pervasive, unremitting violence everywhere.  Skyrim isn’t alone, as the most popular video games these days exalt wholesale violence, enabling a broad range of sociopathic choices.  If a player kills virtually all citizens of the realm, who would grow the food, tend the livestock, write the books, and so on?  More broadly, one can note that almost all the holidays we observe in America are tied to violent acts, including, ironically, Easter, Thanksgiving, and the whole of Armistice Day, Memorial Day, Independence Day, and the like.  Our national anthem celebrates wartime violence as a boon for freedom, despite the fact that life for 95% of colonists and virtually all natives, slaves, and women changed little or worsened under the new management.

In any case, Swanson points out that dissidents are labeled derisively “anti-American” unless they blindly support ongoing wars under the mantra “support the troops,” even after elite sectors themselves disavow wars as unwinnable, strategic blunders.  Chomsky correctly points out that America is the only non-totalitarian state where such a notion of “anti-state” exists; Germans opposing Angela Merkel would never be described idiotically as “anti-German”.

Moreover, Swanson strongly argues the malignant effects of war on troops, rendering the catechismic “support our troops” phrase all the more ridiculous : we must continue the killing to honor the dead, lest we savage their memory.  I’ve witnessed dear friends and family thank troops publicly for their service, despite our military being an engine of human sacrifice : eighteen-year-old boys must go die in some foreign land so we can ward off the undefinable, largely imaginary evil forces of tyranny, much as ancient cultures sacrificed humans to appease the gods of the harvest.

I’m familiar with many mental health professionals who can confirm the extremely harmful effects of war service on human beings; post-traumatic stress disorder, coupled with the loss of limbs, eyes, and hearing, mars not just our own soldiers, the only people elite sectors depict as “people”; our wars wreck nation after nation, killing millions and driving millions more into exile, prostitution, and violence.

The drone strikes themselves have raised a new generation of terrorists; a case in point is Farea al-Muslimi, a young Yemeni student who spread good tidings about America back to his village until it was attacked by drones targeting an unarmed man accused of terrorism.  Instantly, a village hates the United States, despite the ease with which the suspect could have been placed in custody rather than parts of the village destroyed and civilians killed.  This story isn’t unique, and it takes real effort not to recognize how these policies further imperil both innocents and ourselves.

Even the non-partisan Brookings Institution recently warned that Trump may have the means, militarily or otherwise (but not necessarily the mind), to finally

think seriously about ending North Korea’s nuclear ambitions by creating a new order in Northeast Asia.

Consider this in light of Chomsky’s aforementioned comments from a Democracy Now interview in April :

no matter what attack it is, even a nuclear attack, would unleash massive artillery bombardment of Seoul, which is the biggest city in South Korea, right near the border, which would wipe it out, including plenty of American troops.  That doesn’t—I mean, I’m no technical expert, but as far as I can—as I read and can see, there’s no defense against that.

In other words, stray too far into that dark place in which Kim Jong-un feels no escape, and the human cost could be tremendous.  Is there an alternative?  One need only read history, a sample of which I’ve written here, to know that America typically preaches peace and diplomacy, yet we maintain self-proclaimed nuclear first-strike power, occupy over 800 military bases in 80 foreign countries as reported by The Nation in 2015, and have committed the supreme crime of aggressive war innumerable times just since the second world war, generally arguing publicly a desire to sue for peace or to aid the needy in humanitarian crises, or, earlier on, simply saying nothing.

Freedom isn’t Free, But War Won’t Buy It

It turns out that war fails to improve our freedom, as we’ve argued repeatedly here, echoing the writings of Howard Zinn, Noam Chomsky, Glenn Greenwald, and Amy Goodman : dedicated resistance and a cohesive, powerful labor movement have so far proved essential to the civility and freedom we enjoy in the modern era.  Swanson argues, alongside them, that war historically has the opposite effect, reducing freedom while fomenting unrest and division.  One need only look at the various wars to discover that many dissenters have gone to jail, including Swanson’s historical doppelganger Eugene Debs, imprisoned for antiwar speech during World War I.  War resisters during the Revolutionary War faced violence, confiscation of property, murder, and expulsion to Canada.

During World War II, the government imprisoned Japanese and German Americans.  My grandparents worked at Camp Howze, a POW camp near my hometown of Gainesville, Texas.  Woodrow Wilson argued during World War I that “disloyal” dissidents

had sacrificed their right to civil liberties.

We can certainly recall the suppression of resistance to Vietnam, and the immediate passage of the fascist PATRIOT Act upon the second 9/11.  The point is, not only does freedom fail to flourish under war; Swanson argues that it cannot flourish.  Recognizing the former must precede accepting the latter, and Swanson articulates a very strong argument for both.  So what of the good wars?

Apologists for War

Most rational Americans have come to believe that war is primarily a tool for control.  During the Vietnam and Korean Wars, Americans were conscripted to fight for what the Pentagon Papers revealed to be control of the “tin, oil, and rubber”, among other economic objectives.  The Project for a New American Century (PNAC), mentioned in earlier posts, was a late-twentieth-century neoconservative think tank whose manifesto stumps for conquering Iraq, Afghanistan, Libya, Syria, and Iran in order to secure American interests in the region.  Swanson raises the intriguing coincidence that both Iraqi and Libyan leadership elected to deny the dollar preeminence in oil purchases, Hussein opting for the euro and Gaddafi the gold dinar, each move immediately preceding our violent intervention; certainly intelligence agencies in America and elsewhere knew very well that Saddam had no weapons of mass destruction remaining.  Lost in this is that Saddam offered to exile himself, handing Iraq over to NATO, provided he could abscond with one billion dollars; considering the trillion dollars the war has cost, wouldn’t that have made more sense?

Swanson reminds us of Eisenhower’s warning about the rise of the military-industrial complex, a largely unaccountable cadre of business and military interests hell-bent on self-perpetuation in the face of an increasingly peaceful world.  Ironically, as Swanson points out, war doesn’t even make market sense, as it would be more efficient to spend the money on renewable energy, infrastructure, education, health, and the like, even aside from the pesky problem of human life.

In any case, PNAC’s manifesto insists that we must

fight and decisively win multiple, simultaneous major-theater wars

to preserve the so-called “Pax Americana”, while conceding that the American public will no longer tolerate protracted wars.  Despite years of carefully composed propaganda and rhetoric, the political elites have yet to convince the public that war with Iran is necessary.  Trump’s wild approach may prove fatal in this instance, as he, like the power-mad elites preceding him, fumes when “enemy” nations comply with sanctions.  Nonetheless, this reluctance speaks to the increased civility of society.

On the other hand, Americans continue to support war mythology with the firm belief that at least in the Revolutionary War, the Civil War, and World War II, we defeated tyranny, slavery, and fascism, respectively.  We’ve already addressed the farce that is the first of the three.  The Civil War was easily preventable through diplomatic means, though the times were different.  Rather obviously, however, the Union states simply could have attempted to purchase the slaves, perhaps to the tune of one billion dollars, as opposed to spending three billion to destroy countless cities and leave a bitter cultural resentment still harming us today (in an upcoming article, I’ll try to address the notion of white privilege and the legacy of slavery).  If the North had really wanted a peaceful settlement, it could have permitted secession and encouraged slaves to flee into the free states.  The dirty secret is that the North no more wanted freed slaves than did the South.  In any case, Swanson debunks these wars with ease, leaving us with the last ace of the warmonger : the second world war.

Swanson Takes Down the “Good War”

For brevity, I’ll leave most of Swanson’s arguments about the so-called “good war” to the reader.  But suffice it to say that America was already in the war long before the Japanese bombed Pearl Harbor, actively cutting off supply lines and providing weapons and equipment to the European allies.  Truman famously quipped on the Senate floor that we should

help the Russians when the Germans are winning and the Germans when the Russians are winning[... s]o each may kill off as many as possible of the other.

Are these the words of a man pursuing peace and freedom?

Swanson further argues that Hitler’s rise could have been prevented : through a less ridiculous settlement than the Treaty of Versailles at the conclusion of the first world war, through deescalation of his militarism via discussion and diplomacy, and through rescue of the Jewish refugees initially expelled from Hitler’s caustic, totalitarian empire.  Instead, we, along with Britain and France, isolated Germany, refused to aid the refugees, and in our case sold weapons to Britain and France while strengthening the Pacific navy, cutting off Japanese supplies in Manchuria, and conducting military exercises off the coast of Japan.  Americans actually held a rather favorable view of Hitler, as anti-Semitism was rampant among elite sectors here; both Joseph Kennedy and Prescott Bush, fathers of presidents-to-be, either held business dealings with or openly supported the Nazis even after America officially entered the war.  Fanta became Coca-Cola’s means of remaining in Germany, and Henry Ford placed a portrait of Hitler on his desk.  In fact, when Confessions of a Nazi Spy, a thriller starring Edward G. Robinson, premiered in Milwaukee, pro-Nazis burned the theater to the ground; even the far-right Senators of the day wished to investigate Robinson and the film as Jewish propaganda angling for American entry into the war.

Enshrining the Holocaust only became important to the American political class after Israel’s 1967 war, an unsolicited but helpful gesture in advancing American hegemony.  Though there’s much to add, suffice it to say the one good war killed over seventy million people, or equivalently twenty percent of our current population.  Was that really necessary?  We touched on the atomic bombs dropped in 1945 at the conclusion of the war.  Are we better off for creating them?

A Great Read

Like all of David Swanson’s books and articles I’ve read, this one powerfully confronts the folly of pro-war propaganda and the arguments, lofty or low-brow, for the perpetuation of war.  He eloquently rearranges the pieces of the puzzle to expose the idiocy of the arguments advanced by the state in support of violence, such as this gem with respect to our government offering protection to people facing chemical warfare :

[k]illing people to prevent their being killed with the wrong kind of weapons is a policy that must come out of some sickness [... c]all it Pre-Traumatic Stress Disorder.

I highly recommend this and his other works, as he, like the great activists before him, tells the truth.  His words are more prescient than ever before as we confront the problems of the twenty-first century.

 

The Conservative Nanny State : A Book Review Part Six : Small Business, Taxation, Public versus Private, and Roots of Mythology

In our concluding article analyzing Dean Baker’s The Conservative Nanny State, we touch on the role of the archetypal small business, taxation, and the persistent, seemingly immortal debate on private versus public infrastructure, all with respect to the pantheon of the mythology.

Small Business Blight

Baker argues that the small business occupies a unique, critical niche within the mythology : nanny state purveyors often sell policy decisions on the basis of how said policies affect small businesses in aggregate, relying on the pervasive perspective that small businesses are a highly desirable feature of the economy.  Analysts across the political spectrum laud small business in editorial after editorial, from the left-leaning Huffington Post to the right-leaning Forbes.  The notion is so deeply embedded in our framework that even to ask whether small businesses are, in fact, better for the economy remains anathema.  Arguments range from job creation to financial independence to patent creation relative to big businesses, and the like.  Of course, we previously discussed whether patents really do represent innovation, to say nothing of encouraging it.

Before answering either way, Baker lists cases in which nanny state enthusiasts leverage the widely accepted propaganda to argue policy.  For example, Congress very nearly repealed the estate tax in the early years of George W. Bush’s administration, offering up the hapless small farmer as a would-be victim of the vicious “death tax.”  Baker argues reasonably well that the example is mostly nonexistent, owing largely to the zero bracket and the fact that most of the so-called small businesses affected are not genuine small businesses, but rather partnerships designed to be tax shelters, as defined by the Congressional Joint Committee on Taxation.  The New York Times remarked in 2001 that the American Farm Bureau was unable to locate any families who lost their farms due to the estate tax.  The Center on Budget and Policy Priorities suggested recently, amid Donald Trump’s mad push to dissolve the tax, that the true effect of repeal is to shield most inherited wealth from any taxation, as much of the accumulated wealth among those touched by the tax is untaxed income.  A similar argument applies to Trump’s insistence that most taxes, incidentally perhaps the only taxes he’s paid recently, should be lowered to encourage economic growth.  Another interesting discussion on this topic is Nicholas Johnson’s article analyzing the actual effects of a 2012 tax break in North Carolina, promoted, of course, disingenuously as small business support.

In any case, Baker moves on to argue that the job creation precept of small businesses is actually misdirection, echoed later by The Fiscal Times : small businesses destroy perhaps as many jobs as they create, promoting uncertainty and churn in employment.  Further, he observes that tenure at larger firms is longer, benefits are better, and stability is greater.  More recently, a refrain from critics of the Affordable Care Act is the would-be damage to small businesses.  And yet the mandated requirements actually nudge employment quality in small businesses closer to that of larger firms.

The ACA debate hints at a larger argument that regulation inherently hurts businesses, reliably trumpeted by the conservative Heritage Foundation.  Of course, their arguments, promoted by debunked supply-siders, mandate we accept that a job is universally good, irrespective of the quality or pay.  Their predictable argument is that regulations

may be treated as "unnecessary" if (1) the costs they impose exceed the benefits they produce, or (2) even though they produce benefits that may exceed costs, they do so in an unnecessarily costly manner because of an inefficient method or approach[.]

Their optimization strategy places money first, captured nicely by the following : if I successfully lobby the government to revoke that pesky “regulation” preventing me from lawfully confiscating my neighbor’s cache of groceries, I’ll save money.  Further, it is indeed inefficient for me to simply not have access to my neighbor’s food, as I have to obtain my own food otherwise.  How does this differ?  Multiply this argument into “externalities” such as dumping lead and other toxins into the water supply and relaxing safety regulations in manufacturing, and one begins to appreciate that more than the job is at stake.

Baker argues further that small businesses receive powerful nanny state protections, such as an adjusted tax framework, reduced-interest loans, lax safety protocols, minimum wage exemptions, and laughably ineffective self-disclosure regulation of environmental violations.  It turns out that the tax framework permits small business owners to deduct all manner of goods and services they might have needed regardless of whether they owned a business (such as an automobile or a computer), at the taxpayers’ expense.  Further, government subsidies for loans to failed small businesses can be staggering, as described in Forbes and a few of the more hysterical right-wing libertarian blogs.  That is, we the taxpayers foot the bill for unstable, mostly failed businesses that enjoy nanny state protections against labor, wage, and environmental regulation, along with means of pocketing tax breaks.  He correctly observes that citizens requiring TANF benefits to feed their children receive near-universal excoriation, while failed businesses and illegal deductions rarely enter the discussion, let alone suffer bad press.

Admittedly, small businesses contribute some desirable dynamism to the economy, but the real question is whether they are an optimal instrument within a free market or social experiment framework; if they were, they wouldn’t require such strong protections to succeed.

Taxes, Taxes…

Baker argues rather holistically against the ignorant perspective of nanny state promoters that taxes are, in effect, a voluntary donation.  I’ve listened for decades to family and friends bemoaning the prospect of a single red cent of their hard-earned money finding its way to welfare recipients.  I remarked that the rate of welfare fraud is virtually zero, and the fraction of discretionary spending moving into the hands of these people is infinitesimal; money of far higher orders of magnitude flows freely into the mass-murder machine of the military and, as suggested earlier, giant corporate tax deductions.  Tax evasion is rampant in the U.S., a partial list of cases appearing in Wikipedia; Baker cites an IRS study reported in the New York Times in 2006 demonstrating an escalation in high-dollar evasion.  It shouldn’t be a surprise that most evasion cases never reach prosecution.  What’s worse, as of 2006 thirty percent of federal taxes remained uncollected, meaning that if the evaders paid their fair share,

tax rates could be reduced for everyone by twenty-five percent, and the federal government would have the same amount of money.
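
A quick back-of-the-envelope check, my arithmetic rather than Baker’s, shows how the thirty percent and twenty-five percent figures fit together.  Let C be the taxes actually collected and U the taxes evaded.  Reading “thirty percent uncollected” as U = 0.3 C, full compliance at current rates would raise C + U = 1.3 C, so rates could be cut by

1 − C / (C + U) = 1 − 1 / 1.3 ≈ 23%

while leaving revenue unchanged, roughly the twenty-five percent in the quote.  Read instead as thirty percent of all taxes owed going uncollected, the available cut would be closer to thirty percent.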

By contrast, if all TANF recipients, as the nanny state supporters like to suggest, got jobs and got off the government dole, we could reduce our tax burden by a whopping 1.4%.  The conservative nanny state mythology appears more and more to be a carnival mirror of stupidity.

More recently, Trump has stumped for lowering corporate tax rates, arguing as expected that the current burden overwhelms American companies.  And yet the assertion, like most parroted by Donald, is patently false, as documented in April by the Center on Budget and Policy Priorities.  Corporate profits are growing, and the rich are getting richer.  How would reducing a largely unpaid tax burden help working people?  It’s worth remembering that the top individual income tax rate in 1944 was ninety-four percent for earnings above $200,000, or about $2.5 million in 2017 dollars.  Innovation, economic growth, and a vibrant technology sector generated by state spending were humming along nicely.

Baker points out that this nanny state gentlemen’s agreement on evasion doesn’t extend to filers claiming the Earned Income Tax Credit (EITC), discussed by the New York Times; audit rates by income level are readily available from the IRS, as documented by USA Today.  The gist is that twenty percent of those filing for the EITC receive requests for additional information, akin to a mini-audit.  By contrast, less than twenty percent of earners with income above $10 million ever receive an audit.

Baker continues with a discussion of internet sales, quite interesting in and of itself; suffice it to say that retail giants such as Amazon had escaped paying sales taxes because of ambiguities in managing purchases across state lines.  The public relations defense against self-disclosure was simply that the administrative burden was too high; this is patently false.  When I worked in Amazon Last Mile Logistics, we routinely handled varying tax jurisdictions in the company’s deliveries.  The complexity of operating in multiple geographies scales easily, as anyone familiar with the space should know.
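
As a toy illustration of that point, and emphatically not a sketch of Amazon’s actual systems, per-jurisdiction sales tax reduces to a table lookup keyed on the destination; the bookkeeping is tedious, not hard.  The jurisdictions and rates below are invented for the example:

# Toy illustration only: per-jurisdiction sales tax as a table lookup.
# The jurisdictions and rates are made up; a real system would load
# maintained rate tables rather than hard-coding them.
TAX_RATES = {
    ("TX", "Gainesville"): 0.0825,
    ("WA", "Seattle"): 0.1010,
    ("OR", None): 0.0,  # no state sales tax
}

def sales_tax(subtotal, state, city):
    """Tax owed for a delivery destination, falling back to a state-wide rate."""
    rate = TAX_RATES.get((state, city), TAX_RATES.get((state, None), 0.0))
    return round(subtotal * rate, 2)

print(sales_tax(100.00, "TX", "Gainesville"))  # 8.25

Adding a new jurisdiction is one more row in such a table, which is why the administrative-burden defense never held much water.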

Finally, he tackles the curious distinction between stock trading, casino gambling, and ordinary scratch-off and lottery tickets.  The taxation rates are astonishingly regressive, ordinary lottery wins being roughly thirty percent, casino gambling seven percent, and stock trading a brutal 0.003%!

Why is Private Better?

We’ve argued at length in previous posts about state capitalism, the economic system under which, despite all the smoke and mirrors, we operate.  A pervasive argument of conservative pundits and nanny state babies is that private corporations can easily outperform public agencies because of waste intrinsic to the latter’s structure.  That is to say, without the pressure of profit mandates, shareholder backlash, and market principles, government agencies can profligately expend resources, enriching themselves and preserving their positions.  By contrast, private organizations, we’re told, operate more efficiently with minimal largess.

I shouldn’t even have to quote statistics or studies to undermine this absurd notion, as anyone who’s ever worked in corporate America knows it simply isn’t true.  That’s not to deny that customer service, after a fashion, might be better in private agencies, as public agencies are generally quite underfunded, part of a scheme by conservatives to “starve the beast,” a notion to which we’ll return.  But this suggests not that all products and services are more sensibly driven by markets, as the destructive nature of markets is well understood (and there would be no nanny state if this weren’t the case), but rather that customer service itself is perhaps better left to private organizations.

Donald Kettl, professor at the University of Maryland, penned an interesting op-ed in Excellence in Government, assigning dual blame to so-called liberals and conservatives : liberals forgot to work out the details of their big ideas, and conservatives have actively, successfully fought to starve and dismantle the administrative state, a strategy oft-mentioned in connection with the recently fired Steve Bannon.  I’d disagree with some of the terminology, but Kettl correctly argues that the political left in this country ceased to operate among the political elites many decades ago.

Baker’s key arguments are that Social Security and Medicare operate on remarkably low overhead, as marketing and monstrous compensation packages for executives simply don’t exist.  He goes on to sketch an argument we’ve mentioned previously, that health insurance ought to be nationalized for the sake of the population.  As David Swanson so aptly put it, Americans can discover, oddly, that other countries exist, and that they’re leveraging universal health insurance programs, as Physicians for a National Health Program have long advocated.  We need not repeat all the arguments here, but as Noam Chomsky so often describes our current, fragmented joke of a system, it’s

an international scandal.  It’s roughly twice the per capita costs of comparable countries, and some of the worst outcomes, mainly because it’s privatized, extremely inefficient, bureaucratized, lots of bill paying, lots of officials, tons of money wasted, healthcare in the hands of profit-seeking institutions, which are not health institutions, of course.

Considering, as we have previously, that virtually all the technology we take for granted originated in the state sector, and that no private agency would underwrite such long-term investments, the role the nanny state plays in generating technology, then handing it off to private interests once it becomes marketable, should be glaringly obvious.  The nanny state mythology, astonishingly, convinces even highly educated people that the market somehow spins all of this from whole cloth.

Summary : Why A Nanny State?

In summary, Dean Baker’s book is an awesome read, filled with powerful arguments of which we can only scratch the surface.  He has many more recent works with additional facts and figures worth perusing, but The Conservative Nanny State is a primer for many a discussion on the proper role of government in the economy.

So why does the mythology tickle so many ears?  I grew up hearing so much of the rhetoric, and I’ll admit it seemed reasonable at the time.  After much research, I must confess the answers are quite disturbing.  As Chomsky mentions quite frequently, the rise of the public relations industry under Edward Bernays was a product of the remarkable success of the Department of Information (later the Ministry of Information) in convincing a not particularly violent citizenry to war against their white brethren in World War One.  Walter Lippmann and other premier intellectuals of the day discovered that the power to “manufacture consent” was the only tool remaining in the toolbox, as violence eventually won’t work in an increasingly democratic setting.  Relegating the rabble, the “meddlesome outsiders,” to passive spectatorship in policy and active villainy in war is a monumental achievement, and crucial to this effort has been a series of scares, beginning with Wilson’s Red Scare, the propaganda around Cuba’s communist roots in the 1920s mentioned earlier, McCarthyism, and the like.  I can remember my uncle reminiscing about listening to records of Ronald Reagan during elementary and middle school, in which Ronnie explained that universal healthcare is a thing of the communists.  Of course, Reagan neglected to mention that his government positions ensured glorious medical care well into his sad last days of Alzheimer’s dementia; unfortunately, he also neglected to offer an appropriate avenue for his poor white brethren to secure similar, reasonable old-age accommodations, to say nothing of the black and brown.

The gist is that the conservative nanny state mythology is a remarkable feat of propaganda and avarice, designed effectively to persuade poor spectators to stump for obscenely wealthy men with whom they’ll never associate.  Rush Limbaugh, one of the principal advocates of said state, has argued that income mobility ensures egalitarianism in our system.  Would that his variant of egalitarianism could cure his stupidity.

The simplest explanation, as William of Ockham once suggested, might be correct.  Power and money have enabled an overclass to systematically hijack the debate, reframing policy discussions in their own image, just as the Powell Memorandum suggested.  As for the book, read it.

The Conservative Nanny State : A Book Review Part Five : Richly Bankrupt and Terrific Torts

In our penultimate article in the series on Dean Baker’s The Conservative Nanny State, we examine his discussion on bankruptcy, so-called tort reform, and “takings.”

Bankruptcy : A Nanny State Protection for Me But Not You, But Where is Personal Responsibility?

Bankruptcy has long been a feature of Anglo-American law, owing to creditors’ need for a lawful, orderly way of involuntarily dispossessing debtors, at the time all merchants, of property and freedom in the late sixteenth century.  In the United States, most bankruptcy laws passed within the first half of the nineteenth century reflected this philosophy, exhibited in court battles wherein state-directed debt relief remained under debate.  With the ascent of the Whig party in the 1840 elections, the federal government established voluntary bankruptcy protections in an 1841 act; the government repealed the act a mere two years later, but the philosophy clearly was shifting.  By 1867, debts of the Confederate states left northern states clamoring for more legislation.  It turns out that the many pushes for changes to bankruptcy laws often follow an economic downturn, generally at the request of large creditors; this ping-pong persisted well into the twentieth century, with repeal efforts following any slight accommodation for debtors.  In 1910, Congress offered corporations voluntary mechanisms for debt discharge, something fought vehemently by creditors hoping for harsher provisions.  For twenty years, the battle raged on, edging finally into the Great Depression during Herbert Hoover’s administration.  As expected, creditors and debtors alike rushed to the nanny state for new protections in light of unforeseen, devastating economic realities.  By 1938, sufficient support was available to pass the Chandler Act, named for its primary advocate, Congressman Walter Chandler, Democrat from Tennessee, and reviewed in an article appearing in The Fordham Law Review in 1940.  Though we’ll pass over the technical details, suffice it to say the Chandler Act represented a study-driven overhaul aimed at updating the Nelson Act of 1898.  For forty years, minor changes appeared here and there, until the passage of the Bankruptcy Reform Act of 1978, a culmination of ten years of hearings and studies, replacing the Nelson Act entirely.  In the years to follow, Congress continued adjustments here and there, reflecting the dual reality of corporations and creditors fighting to further ensnare debtors while often suffering the same fate themselves.  The complete history is quite interesting, and one can find a worthy read in Charles Jordan Tabb’s The History of the Bankruptcy Laws of the United States.

Dr. Baker’s discussion focuses on the most recent overhaul of bankruptcy law, the constructively named Bankruptcy Abuse Prevention and Consumer Protection Act of 2005.  Nanny state apologists suggest that individuals who file for bankruptcy are irresponsible spendthrifts who deserve to suffer, yet shareholders can escape such difficulties through the mechanisms described above.  Well understood is that medical bills account for a plurality of filings, as described in a 2013 report by CNBC.  Baker frames the issue quite effectively, describing the large jump in total credit card debt from $100 billion in 1980 to $800 billion in 2004, the time of his writing.  Value Penguin reports more recent statistics gathered from the Census Bureau and the Federal Reserve, exhibiting a peak of $900 billion at the time of the financial crisis, a slump, and then a return above the 2004 level as of 2016.  Baker argues that the explosion of this kind of volatile debt indicates

that the risk of default on these loans was not a serious obstacle to credit card lending[.]

Further, according to Jeremy Simon, writing for CreditCards.com, the so-called bankruptcy reform passed in 2005, misleadingly named the “Bankruptcy Abuse Prevention and Consumer Protection Act,” increased the deluge of credit offers to recent filers of bankruptcy : the longer waiting period and tighter restrictions on a subsequent filing offer these sharks the opportunity to bathe in the blood of consumers.

Baker attacks the absurd bankruptcy reform from a different perspective : first, true proselytes of the free market should not want government protection for lenders who make bad choices; in a free market, they would naturally default themselves.  Second, this very protection expands the conservative nanny state’s role in the economy rather significantly by empowering it further as a debt collector, further contravening the argument that smaller government is a genuine objective.  As suggested above, the leap in national credit card debt in recent years can’t possibly track the capacity for repayment, so these lending institutions generally need not concern themselves with the stated objectives of offering credit; upon failure to collect bad debts, they can, as Noam Chomsky says, “run cap in hand to the nanny state.”  The bankruptcy bill is one such startling example, though the ugliest hypocrisy of all followed with the bank bailout during the financial crisis : banks need the government to help them crush consumers, but when they run aground, they require a big, powerful state to save them.

Any discussion on bankruptcy leads to consideration of the International Monetary Fund (IMF), a financial agency designed to protect financial institutions in international exchanges.  Baker describes some of the history, particularly how the IMF originally regulated exchange rates under the Bretton Woods system until 1973; the IMF thereafter played the role of international debt collector.  We’ve discussed Bretton Woods before, an international framework designed by Harry Dexter White and John Maynard Keynes in 1944 to prevent repeats of the Great Depression.  For nearly thirty years, the United States experienced tremendous economic growth without financial crises.  In 1971, Richard Nixon eliminated the dollar’s status as a commodity currency, or currency based on gold, transmuting our bills into fiat currency, or currency by governmental decree; that and other unilateral decisions constitute what historians call the Nixon Shock.  Though a more thorough treatment of the history of Bretton Woods is instructive (see Chomsky’s discussion), suffice it to say that both purposes of the IMF serve at the pleasure of the nanny state, though the latter-day purpose as debt collector serves the financial sector more directly.  Free of the Bretton Woods regulatory apparatus, the financial sector has become extremely wealthy amid the unrestricted flow of capital and diminishing regulation.  The IMF, to Baker’s point, imposes harsh austerity (discussed in a previous post) on nations if they refuse to meet terms imposed by creditors; that is, the IMF collectively protects foreign investors, much like that institution we’re taught is so destructive : the union.  Baker says it best, arguing that

[i]n a free market, there is no place for a supranational institution like the IMF to rewrite the rules to ensure that creditors are protected.

In a more competitive environment, any creditor could loan any nation the needed funds, easily undercutting rival firms with lower interest rates.  Creditors instead unionize through the IMF to drive nations into bankruptcy.  Baker argues that risk is the business of lenders, and that they should suffer the consequences of making bad choices.

Fat Lawyers Gave You McDonald’s Coffee Lawsuit

Baker takes up the topic of tort reform, a favorite windmill at which the quixotic chicken-littles of the nanny state tilt before the public.  We’re told frequently that greedy lawyers and ne’er-do-wells are robbing hardworking industrialists blind, and that the nanny state must artificially curtail the requisite damages paid by these innocent business elites.  It’s most reminiscent of Ronald Reagan’s inane, racist complaint that “welfare queens” are driving “welfare Cadillacs”.  In fact, the conservative nanny state caters to a host of fascinating topics costing the good hard-working conservatives hours of sleep nightly, including criminal innocence-by-insanity, lazy people loafing off disability benefits, and, most recently, the insistence that illegal immigrants are committing vicious, hideous crimes, a blatant and highly destructive lie repeated ad nauseam by Donald Trump.  It turns out, quite expectedly for anyone willing to devote a paltry few minutes to research, that none of these would-be blights on society actually exists to any appreciable extent.  In fact, tax fraud by wealthy elites is a far more pervasive problem than any of the aforementioned strawmen.

And yet these myths leave an indelible imprint on the impressionable minds of the nanny state’s protectors.  Take torts, for instance; Baker describes two stories I remember hearing growing up : the black woman who sued McDonald’s for burning her with coffee, and the property owner sued by an intruder injured on the owner’s property in the course of a burglary.  Astonishingly powerful is the propaganda surrounding these cases, as we in a poorer segment of society would literally fall over ourselves to defend the honor of McDonald’s and this property owner.

I happened upon the McDonald’s case again during a legal presentation at Southern Methodist University; the legal scholar offered a piece of the case I hadn’t heard in my household : the woman asked only that McDonald’s cover her medical bills, and the company had received hundreds (more likely thousands) of complaints through the years that coffee served at 180 degrees Fahrenheit is dangerously hot.  Baker points out one more piece of the puzzle : McDonald’s served their coffee at such a high temperature to mask the bad taste of a cheap brew, thereby increasing profits while distributing the cost to burned consumers.  Again, this is reminiscent of the Ford Pinto case we discussed previously.

The Consumer Attorneys of California offer a good read on the McDonald’s case; suffice it to say a 79-year-old woman spilled the coffee on her thighs, burning herself so badly that she required skin grafts.  McDonald’s, putting customers first, refused to help her until a court compelled them to make up for their mistake.  The burglary case was really about a high school student climbing on the roof of a gym on school property; a skylight, painted over, gave way when he stepped on it.  A court correctly asserted that public facilities ought to have better protections in place.

Both of these cases are rare instances in which a court awarded damages for torts, or wrongs leading to civil liability.  It turns out that less than three percent of civil cases ever reach a jury trial, as most are decided much earlier, generally through settlement.  It’s revealing to consider a favorite of the tort reformists : medical malpractice.  Over the last four decades, tort reform aimed at streamlining the malpractice liability system has managed to shift larger and larger profits into the pockets of insurance companies; Kenneth Thorpe, a professor at Emory, published an article in Health Affairs discussing trends in states adopting caps on medical damages, finding a statistically significant decrease in premiums but remaining inconclusive on whether the liability system genuinely deters substandard care.  Further, it might come as a surprise that few victims of malpractice actually sue; a Harvard study published some years ago found that only one in eight victims ever leverages the court system.  More recent work appearing in Medscape suggests the number is closer to one in twelve, and that doctors have at their disposal proven means of reducing the probability of lawsuits.  Interestingly, members of my family have had opportunities here and there to sue for malpractice, yet they never did, often citing the “litigious” nature of society, a win for the propagandists.

Baker continues the discussion with a partial explanation of the more general costs associated with the current legal system, noting that standardizing law and removing much arcane procedure could drive down prices.  But he contends, I think correctly, that limiting fees for lawyers’ services contravenes market ideology.  Fighting corporations is nasty business, as anyone who’s ever had to deal with a medical insurance company knows.  And despite what nanny state conservatives may tell us, the deck is very heavily stacked in the corporations’ favor.

He also points to the importance of punitive damages, in that suing and punishing a corporation for endangering the public is, in fact, a public service.  It’s hard even to quantify the damage done by McDonald’s scalding-hot coffee policy, all in the name of profits.  I’m reminded of all the time one waits on hold when trying to reach customer service for any company, be it cell phone providers, internet providers, or, as mentioned before, insurance companies.  In the interest of profits, these companies understaff their departments, using badly recorded music and automated menus to delay customers for several minutes, sometimes hours.  These hidden costs, or externalities, don’t figure directly into their budgets, as someone else pays that price.  Punishing them for bad service seems perfectly in keeping with market ideology.

Takings: Gimme More, Take Less…

Baker ends the chapter with a short discussion of “takings,” government actions, whether outright confiscation or laws and regulations, that reduce the value of property.  That is, so-called property owners, or corporations, might be quite unhappy when the government enforces regulation limiting how much they can pollute on their own land, perhaps cutting into profits or lessening the value of owning the property.  And yet, when government intervention substantially increases the value of property through infrastructure and habitat clean-up, property owners happily accept the benefits without any direct repayment to the taxpayer.  For instance, farmland along the major interstate near my hometown, Interstate 35, was not particularly valuable before the interstate was constructed.  Commercial zones along the interstate are now a boon for landowners, as gas stations become quite important along long stretches of highway.

The major point here is that nanny state conservatives dislike any regulatory action diminishing property value yet freely accept every last penny they can extract from beneficial government action.  Baker nicely suggests that true devotees of market ideology ought to accept that a loss of property value due to government intervention is a cost of doing business, and that if they were savvier customers, they’d have foreseen it, harking back to the dogma of personal responsibility they hold so dear.

Next time, we’ll conclude this series with a brief summary of Baker’s discussion on small businesses and taxes.

Marking a Solemn Week in a Sea of Solemnity

This week marks the seventy-second anniversary of an event showcasing both the ascent of the human species to the top of the evolutionary ladder and its descent into what could be the darkest and final chapter of our roughly 200,000-year run on this planet: the bombing of Japan by the United States with nuclear weapons.

On August 6, 1945, the United States Army Air Forces deployed the atomic bomb over Hiroshima, incinerating a few thousand acres of densely populated city and killing anywhere from 70,000 to 100,000 people in the blast; perhaps another 70,000 died from radiation exposure.  On August 9, the U.S. continued by dropping a plutonium bomb over the city of Nagasaki, killing perhaps 40,000 instantly and another 40,000 in the aftermath.  American apologists offer that these mass murders were essential to ending the Second World War while minimizing Allied casualties.  Certainly, that’s what I learned growing up, the pertinent question being whether it is true; it wasn’t until I took world history under Dr. Pat Ledbetter, longtime activist, jurist, and professor, that I ever heard the decision to deploy the atom bomb against Japan come into question.

Quite relevant today is Donald Trump’s harsh rhetoric toward North Korea, as reported by the New York Times.  His outrageous words,

[t]hey will be met with fire
and fury like the world has
never seen[,]

as usual exhibit the uncensored, grotesque gaffes we’ve come to expect from him.  They also eerily echo similar words by Harry S Truman, president at the conclusion of the Second World War:

[the Japanese can] expect a
rain of ruin from the air,
the like of which has never
been seen on this earth.

The parallel may have been intentional, as Trump seems to fancy himself the most accomplished president of our time, and Truman, in Americana, is widely regarded as having successfully ended the single most destructive conflict in history.  Trump can rest at ease spiritually, according to “faith leader” Robert Jeffress: contravening Romans chapter twelve’s directive to refrain from repaying evil for evil, he suggests that God’s instructions don’t apply to the government, and thus this same, loving “god” has bestowed upon Trump license to obliterate North Korea.  Certainly some hearts are, indeed, “desperately wicked.”

Though the philosophies of extremist devotees of Trump might not be all that surprising in their rapacity and blood-lust, the claim that the atomic bombs were necessary to save American lives at the conclusion of the Second World War is, in fact, propaganda.  It turns out that the Japanese had suggested a surrender months before the bombs landed, asking only that they keep their emperor, largely a figurehead and cultural symbol.  Washington refused, despite General Eisenhower, among others, urging Truman that

it wasn't necessary to hit them
with that awful thing … to use
the atomic bomb, to kill and 
terrorize civilians, without even 
attempting [negotiations], was a 
double crime[.]

Additionally, Admiral William Leahy, Truman’s chief of staff, apparently argued that

[t]he use of this barbarous weapon…was
of no material assistance in our war
against Japan[;] [m]y own feeling was
that in being the first to use it, 
we had adopted an ethical standard common 
to the barbarians of the Dark Ages [...] 
I was not taught to make wars in that 
fashion, and wars cannot be won by 
destroying women and children.

In an investigative report released on the seventieth anniversary of the bombings, The Nation suggested, quite accurately, that we Americans need to face the ugly truth that the war was ready for a bloodless conclusion before Truman ordered the mass execution of hundreds of thousands of people.  Military leader after military leader agreed that the bombing was unnecessary, raising the more serious questions of why one would wreak such horrendous havoc on civilians unnecessarily, and why no one paid a political price for it.

One can easily point to an incredible misinformation campaign demonizing the Japanese as subhuman, feral monsters, documented by Anthony Navarro in A Critical Comparison Between Japanese and American Propaganda during World War II.  He offers a critique of both sides, but the imagery is striking.  Lingering resentment over Pearl Harbor made propagandizing Americans easier, despite the attack being retribution for America freezing supply lines in Manchuria and conducting war exercises a few hundred miles off the coast of Japan, facts conveniently missing from the American consciousness.  We Yankees, perhaps, simply didn’t think the Japanese deserved to live.

It’s reminiscent of the euphoria when Barack Obama announced that Osama bin Laden was dead, murdered in a special operation in Pakistan which incidentally risked nuclear war; elite media and governments alike believed the murder of a suspect without a trial was a monumental achievement, documented in Wikipedia’s summary of official statements.  It seemed lost on the interested parties that constitutional protections, inherited from Magna Carta, simply don’t matter when the state deems them unnecessary.  I myself was stunned at the hysterical outpouring of happiness on Facebook and other social media.  I found myself nearly alone in asking whether the dissolution of basic human rights in the case of a defenseless suspect made any sense.  It’s true that if he were actually guilty of masterminding the terrorist attacks of September 11, 2001, his was a vicious, malevolent crime.  But then again, Harry S Truman, Dwight D. Eisenhower, John F. Kennedy, Lyndon B. Johnson, Richard Nixon, Ronald Reagan, George H.W. Bush, and George W. Bush committed atrocities, uncontroversially, so far off the spectrum by comparison that it’s difficult even to imagine them, as documented by Noam Chomsky.  Standing next in line are Barack Obama with the drone assassination campaign, Bill Clinton in Serbia, and, yes, even dear Jimmy Carter with his complicity in the Indonesian invasion of East Timor under Suharto, documented by Joe Nunes.

In any case, historian Hanson Baldwin argued in The Great Mistakes of the War that Washington’s “unconditional surrender” demands needlessly cost lives and prolonged the war; he wrote

[b]ut, in fact, our only warning
to a Japan already militarily
defeated, and in a hopeless
situation, was the Potsdam demand
for unconditional surrender issued 
on July 26, when we knew Japanese
surrender attempts had started.

Even the conservative Mises Institute editorializes that the bombing was one of the greatest crimes ever committed; John Denson argued in The Hiroshima Myth that the bombing was knowingly unnecessary.  In a more recent article, Ralph Raico continued the critique with a quote from physicist Leo Szilard, one of the originators of the Manhattan Project:

[i]f the Germans had dropped atomic
bombs on cities instead of us,
we would have defined the
dropping of atomic bombs on
cities as a war crime, and we
would have sentenced the Germans
who were guilty of this crime to
death at Nuremberg and hanged them.

Dr. Szilard was making the obvious point that the evils others do resonate while our own crimes either languish in the vat of forgotten history or simply cease to be crimes.  I’ve long argued that if Hitler had won the war, we would eventually have either forgotten his crimes or exalted them; after all, isn’t this precisely what we’ve done with Truman and the atomic bombs, Jackson and the Trail of Tears, Washington and the extermination of the Iroquois in the Sullivan expedition, and so on?  At worst, state apologists would argue that these events, like the tragedies of the Vietnam and Iraq Wars, were merely strategic blunders rather than the more deserved “fundamentally immoral,” a description with which 52% of Americans surveyed by Gallup in 1995 agreed; that, of course, requires the events to even remain in public consciousness.

Returning to the atomic bombs dropped in 1945, Japanese historian Tsuyoshi Hasegawa summarized a lengthy search through official Japanese records, communiqués, and memoranda in a 2007 article appearing in The Asia Pacific Journal, titled “The Atomic Bombs and the Soviet Invasion: What Drove Japan’s Decision to Surrender?”:

what decisively changed the views
of the Japanese ruling elite was
the Soviet entry into the war [...]
[i]t catapulted the Japanese
government into taking immediate
action [...] [f]or the first time,
it forced the government squarely to
confront the issue of whether it
should accept the Potsdam terms.

That is, the overwhelming evidence is that the Japanese military elites acceded to the Potsdam requirements because of fear of Soviet aggression, further undermining the assertion that the nuclear bombs ended the war.  The hideous irony is that the Allied forces permitted Japan’s emperor to remain in place at the time of surrender, the only condition the Japanese leaders required in their earlier attempts.

The historical question is whether the Japanese really would have surrendered; I’ve unfortunately seen monstrous commentary online suggesting that hundreds of thousands of lives were a fair forfeit next to a demand the Allied leadership eventually tossed by the wayside anyway.  If there were even a chance for peace in accepting what really was a trivial request next to the massive loss of life that followed, shouldn’t we, as activist David Swanson often suggests, give peace a chance?

Establishing that the dropping of the bombs wasn’t necessary to end the war seems academic; further, we now know the architects of that wanton decision were aware it was unnecessary.  So why carry out such an action, as we asked earlier?  It turns out that the answer is akin to why a child might pull the wings off of butterflies: just to see what happens.  The rationale, echoed later by Deputy Chief of Mission Monteagle Stearns in Senate testimony about escalating the bombing of civilians in Laos after Lyndon Johnson ordered a halt to the bombing of North Vietnam in 1968, boiled down to

[w]ell, we had all those planes
sitting around and couldn’t
just let them stay there with
nothing to do.

Further, Truman felt a display of force was necessary to put the tenuously held alliance with Moscow on notice, a demonstration intended to restrict the Soviet sphere of influence once the spoils of the Second World War became available, as Howard Zinn argues with much historical evidence in his final book, The Bomb.

The myopic jingoists over at The National Interest argue otherwise, suggesting the savage butchery of hundreds of thousands was an understandable price to pay:

would even one more Allied
death have been worth not dropping
the bomb, in the minds of the 
president and his advisors, after
six years of the worst fighting
in the history of the human race?

Tom Nichols goes on to argue that Truman would have faced impeachment had war-weary Americans later learned of the bomb’s existence, thirsting for blood upon discovering that a more expedient conclusion had been available.  His argument is approximately the same as that of a propaganda piece The Atlantic published in 1946, seventy years earlier: physicist Karl Compton argued, seriously if you can believe it, that the Japanese would never have surrendered, as a “well-informed Japanese officer” told him

[w]e would have kept on fighting
until all Japanese were killed,
but we would not have been
defeated[.]

Both arguments are absurd, as Americans can easily learn that a more expedient, less destructive conclusion was available as of May 1945, and yet only a few of us in the margins believe Truman should have faced a war crimes tribunal.  In a similar vein, the Taliban in Afghanistan offered to hand over Osama bin Laden, provided we offer him a fair trial and stop bombing their country.  Would they have followed through?  We’ll never know, as Bush scoffed in his repulsive drawl, “We know he’s guilty.”  But then again, what are a couple hundred thousand Afghan lives, or 200,000 Japanese lives, to America-first chauvinists, a question now coming back to haunt us with Trump’s menacing rhetoric?

As we’ve discussed previously, nuclear war is one of two existential threats looming over human civilization, both of which the Republican party has committed to accelerating: ecological catastrophe and the growing atomic maelstrom.  Trump’s threats toward a small nation with whom we can genuinely pursue peace imperil millions of lives and risk war with both China and Russia.  Our series on Cuba aims to demonstrate that harsh sanctions, imperialism, and aggression universally backfire, as one can see in one example after another from our history, and to further expose the many near-misses the nuclear age has wrought on a hapless species, many of which appear in the Bulletin of the Atomic Scientists, gatekeepers of the Doomsday Clock.

So during this solemn week, let’s remember that history can repeat itself if we allow it.  We Americans can stop Trump and the warmongering political elites, if only we organize and resist.  Some decent resources for getting involved in moving us toward a nuclear-free world are Waging Peace, the Campaign for Nuclear Disarmament, and the Simons Foundation.

We’ll close with words from the only officially recognized survivor of both nuclear blasts, Tsutomu Yamaguchi:

[t]he only people who should
be allowed to govern countries
with nuclear weapons are mothers,
those who are still breast-feeding
their babies.

 

The Conservative Nanny State: A Book Review, Part Four: Demonized Unions and Glorified Patents

Continuing our series analyzing Dean Baker’s The Conservative Nanny State, we’ll touch on a few key features quite effective in funneling wealth upward with no obvious systemic advantage: the undercutting of collective bargaining and the bestowal of monopoly status on intellectual property.  Baker argues, astutely, that neither of these features really makes sense in a free market system, as collective bargaining is a market-based strategy for assuring at least a living wage for tradespersons vying for limited jobs, and government-conferred monopolies are illogical when producing, say, a life-saving drug is incredibly cheap.

Repeat After Me: Unions are Evil, Unions are Evil…

Baker touches briefly on elite hostility toward organized labor for mid-to-lower income tradespersons, arguing that it’s an important feature of the conservative nanny state.  It’s certainly easy to see why, as trade unions, as we’ve discussed previously, won most of the benefits we derive from employment, including paid holidays, vacation, healthcare, weekends off, and the like.  Yet the prevailing sentiment is often quite negative, as documented by Gallup since 1936.  Even in my own work experience, I have witnessed the effects of this propaganda.  While working for the aforementioned defense contractor, I remember a strike executed by union members when the parent company chose to slash benefits.  Coworkers scoffed at and mocked the picketers, amused by the scabs and the internal contortions required to cover the lost labor.  I heard internally that an upper-level manager actually physically assaulted one of the picketers after a heated exchange.  The strike failed, and the union workers ended up with a worse benefits package than the one previously offered, a remarkable victory for anti-unionists among the elites.

My own experiences in corporate America offer further revealing data regarding elite hostility toward unionization: while working for corporate Uber and Amazon, I encountered many of the low-wage employees (dubiously mislabeled as independent contractors) among the drivers, cabbies in the case of Uber and delivery drivers in the case of Amazon.  I met probably seventy drivers while working for Uber, as the company would spring for free Uber rides home if I remained in the office past ten o’clock at night.  Though the drivers were understandably reluctant to discuss with me, a corporate employee at the time, their opinions on Uber’s downward pressure on their wages, I could generally ease them into opening up after sharing the long labor history of America with them.  The picture was universally bleak: living, breathing people trying to survive sharp increases in the cost of living in San Francisco found themselves in a harsh, highly competitive trade with a quite hostile corporate sponsor.  Uber routinely fired drivers with little or no warning, all based on an arbitrary rating system with very little means of disputing a bogus negative rating.  Uber also sharply cut these drivers’ pay.  The picture among Amazon drivers was very similar: no benefits and fast firings were the law of the jungle, true even in more liberal democracies such as the United Kingdom.  I informed virtually all of the drivers I met that the only proven means of driving wages upward is collective bargaining through unionization, something the drivers told me Uber harshly demonizes; see The Verge for a discussion of Seattle’s efforts to protect Uber drivers.

America’s sordidly violent labor history features an unusually sharp hostility toward trade unions for semi-skilled and unskilled labor, as they are harmful to profits.  A rather salient piece of the puzzle is the National Labor Relations Act (or Wagner Act) of 1935, conferring the right of private sector employees to organize unions and participate in collective bargaining; the National Labor Relations Board received special attention during my Uber employee orientation, as one of the chief legal officers lambasted the board as desperate bureaucrats hell-bent on squeezing money out of innocent drivers.  In remarkably effective legalese rhetoric, she argued that the NLRB is out of touch and irrelevant in a world where Uber drivers can nab a fortune driving, and thus that classifying drivers as contractors amounts to charity.  Though she aptly described the experience some of the earlier “contractors” enjoyed, an unnervingly large fraction of latter-day drivers never managed to attain this golden driver’s seat.  Certainly, Uber represents something of a revolution in ride-sharing, but why not support one’s workforce?

Returning to the historical context, the Taft-Hartley Act of 1947 outlawed secondary strikes, those undertaken by workers of one trade in solidarity with another trade’s ongoing strike.  You read that correctly: a painters’ union cannot legally strike in solidarity with carpenters participating in a union strike.  Though there is much to discuss on the topic of organized labor (and we’ll touch briefly on a few of Baker’s further points momentarily), suffice it to say the conservative nanny state mythology somehow manages to convince highly compensated workers not only that labor solidarity is unnecessary (the market argument), but that they themselves derive no protectionism from said nanny state or any other well-to-do analog of the trade union; the former is a remarkable feat of propaganda, the latter of which Baker quite powerfully demolishes, as we discussed earlier.

Patent Trolls and Copyright Cows: The Geese Laying Golden Eggs

Baker turns his attention to two extremely powerful, state-granted protections for individuals and corporations: patents and copyrights.  Again, conservative nanny state apologists might consider these instruments to be laws of nature, naturally forming optimal strategies in the fantasy land of free markets.  By contrast, Baker aptly describes them as “government-granted monopol[ies].”  That is, an agent, be it an individual, government, non-profit, or corporation, can apply for patent or copyright protection on an invention, idea, artistic expression, and so on, ensuring that agent time-limited monopolistic control over usage and sales.  The argument in favor of these anti-market practices is that they encourage innovation and creativity, generally socially positive notions.  In fact, the power derives directly from the U.S. Constitution: under Article I, Section 8, Congress has the power

[t]o promote the Progress of Science and
useful Arts, by securing for limited 
Times to Authors and Inventors the 
exclusive Right to their respective 
Writings and Discoveries.

This power owes to the guild and apprentice system of the Middle Ages, Baker explains, as a means of increasing innovation and scientific discovery.  Yet are these the optimal means of doing so?  Certainly, executives of Merck, Pfizer, Apple, Google, Amazon, and a lengthy list of other companies are quite wealthy.  But do these state-guaranteed monopolies efficiently generate innovation?  My own background includes an understanding of the evolution of software development, and the open source model (free and open to the public) has grown tremendously in popularity in recent years.  Well known to software developers is the superior reliability of Unix-based operating systems relative to proprietary models.  It’s reasonably well understood history that the biggest software firms in large part owe their success to IBM’s open PC architecture strategy, suggesting an open OS standard could have created a proliferation of competitive products in both kernel space and user space.  Though we have many advances now in personal computing, much of the game-changing advancement has occurred either in the state sector (discussed in previous posts) or in highly competitive, less monopolistic settings.

Baker describes an interesting economic parallel: dead-weight loss, the economic value destroyed when patent-protected prices rise above market-based prices and would-be buyers are priced out.  He scoffs that his fellow economists find no fault with this loss with respect to pharmaceutical prices, despite their hostility toward the same loss incurred by tariffs.  Technical economics aside, Baker poses the critical question: are patents and copyrights the best instruments of their kind for encouraging and rewarding innovation?
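For readers who want the standard textbook picture (my own gloss here, not a formula Baker presents), the dead-weight loss from a patent monopoly is usually approximated as the triangle between the monopoly and competitive outcomes:

\[
\mathrm{DWL} \;\approx\; \tfrac{1}{2}\,\bigl(P_{\text{patent}} - P_{\text{competitive}}\bigr)\,\bigl(Q_{\text{competitive}} - Q_{\text{patent}}\bigr),
\]

where \(P\) and \(Q\) denote the price and quantity sold under each regime.  Every dose not purchased because the patent price exceeds what a buyer is willing or able to pay is value lost to everyone and captured by no one, which is precisely the loss economists routinely condemn in tariffs.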

To answer the question, Baker points to a highly controversial beneficiary of the patent system: the drug research lobby.  If we are to believe conservative nanny state apologists, he argues, the patent system should be the most capable instrument for assuring innovation in medical advances and lifesaving technology.  Patents account for a factor-of-four multiplier in drug costs, meaning that if a generic costs one dollar, the corresponding brand-name drug costs four dollars, according to the final Statistical Abstract of the United States, the 2012 edition.  (We could discuss the highly politicized, stupid decision to discontinue this long-running report published by the U.S. Census Bureau, but we’ll defer for now.)  As of the book’s publication date, the factor was three, meaning the divide has since grown by thirty-three percent.  Pharmaceutical companies offer exactly the argument described above, despite large fractions of profits being wasted on marketing and executive salaries.  Overall, Baker reports $220 billion in drug sales in 2004, confirmed by the aforementioned report.  By 2010, this number had grown to nearly $270 billion.
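To make the arithmetic explicit (a back-of-the-envelope check using the figures above, not a calculation Baker performs),

\[
\frac{4 - 3}{3} \approx 33\%
\qquad\text{and}\qquad
\frac{\$270\text{B} - \$220\text{B}}{\$220\text{B}} \approx 23\%,
\]

that is, the multiplier patients pay over the generic price grew by about a third between the book’s publication and the 2012 edition of the report, while nominal drug sales grew by roughly 23 percent between 2004 and 2010.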

Because patent protection ensures higher drug prices than would otherwise prevail, literally millions of Americans each year skip medications to save money.  A 2015 report from Harvard Health Publications cites a survey by researchers Robin Cohen and Maria Villarroel finding that eight percent of all Americans fail to take medications as directed for lack of money.  As expected, older and less well-insured Americans missed doses in higher numbers, but astonishingly, six percent of Americans with private insurance skimped on their medications.  That is to say, the private insurance system, adored by conservative nanny state apologists, forces Americans further into poverty and costs too much.  A 2012 report by The Huffington Post indicates that these pharmaceutical companies spend nineteen times as much on marketing as they do on research, suggesting that the huge windfall of patent protection isn’t really going to good use.

Baker points to an even more serious consequence of artificially ballooned prices: black market drugs.  A strategy comparable to “medical tourism,” discussed earlier, leads Americans to order potentially dangerous drugs from foreign countries.  This steady flow of both illegally and legally obtained medicines is completely expected under a system in which millions of Americans self-report failing to take drugs for lack of money, a failure of the patent system.

Perhaps most damning is Baker’s argument with regard to copycat drugs, or drugs designed to mimic the behavior of a patented, available drug.  Pharmaceutical companies have discovered that hitching themselves onto the bandwagons of popular, patent-protected drugs of high import (such as allergy, diarrhea, and heartburn medications) is extremely lucrative.  That is, rather than invest money and energy in new lifesaving drugs and technologies, they try to replicate something in the mainstream by tweaking a few formulas.  As of 2004, two-thirds of all newly approved drugs in America were copycats, according to the Food and Drug Administration.  That leads to a startling number with regard to where the research money goes: sixty percent of research dollars go to such wasteful creations.  So sixty percent of research dollars, private and public, do not promote innovation at all, because of the patent system.  Other inefficiencies of said system appear in a 2015 report by the BBC: for instance, many drug companies employ “floors of lawyers” to fight in court for patent extensions, a strategy known, interestingly, as evergreening.  Dr. Marcia Angell, former editor of The New England Journal of Medicine, discussed in The Canadian Medical Association Journal drug companies copying their own drugs for patent extensions, an example being Nexium and Prilosec, developed by AstraZeneca: the company hiked the price of the outgoing drug to migrate patients onto the incoming one, hoping to retain market share once the patent on the outgoing drug expired.

The aforementioned pair of drugs are examples of enantiomers, drug molecules identical in composition but mirror images of one another in structure.  These arise naturally in the course of development, often with very similar physiological interactions; thus, the practice of patenting each separately is rather suspect.  In “Enantiomer Patents: Innovative or Obvious?”, appearing in the Pharmaceutical Law & Industry Report, Brian Sodikoff et al. discuss the legal standards for doing so, suggesting the patent system overly caters to the corporations.  A few other examples of double-dipping are Lexapro and Celexa, and Ritalin and Focalin.

It turns out that drug companies leverage several tricks in the spirit of the foregoing to stretch the lifetimes of patents, including

  • rebranding mixtures of existing drugs, such as combining Prozac and Zyprexa to obtain Symbyax,
  • morphing generic drugs into new drugs by adjusting dosages, such as doxepin into Silenor,
  • repackaging an existing drug as-is for a new purpose, such as Wellbutrin and Zyban, and Prozac and Sarafem,
  • creating extended-release variants of existing drugs via established mechanisms, such as Ambien and Ambien CR, and Wellbutrin and Wellbutrin XL,
  • changing the delivery mechanism, such as Ritalin as a pill and Daytrana as a topical patch,

among others.  In each of these cases, big pharma manages to hike the price substantially, even when cheaper generics with adjustable dosages are available.  These corporations argue they should receive full patent protection as though they had devoted the same resources to researching the copycat as to developing a brand-new therapy from scratch, a preposterous claim.  What’s worse, drug reps, prettified agents armed with high discretionary credit, routinely accost physicians, offering expensive samples and lavish luncheons for free; NPR reported earlier this year that drug rep interactions significantly increase the number of costly prescriptions written by doctors.  Though we could discuss these inefficiencies and contradictions further, we’ll leave it at that.

By the previous arguments, we can certainly begin to believe that patents and copyrights probably aren’t the most efficient means of promoting innovation, as Baker correctly asserts.  So how does one promote innovation?  Baker suggests raising government investment in research, establishing a grant and prize system aimed at spurring innovation.  Researchers would strive toward the successful development of lifesaving medical technology, competing for grants to fund their work.  Upon successful innovation, they could receive prize money commensurate with the societal benefit.  Upon acceptance and approval, their contributions would enter the public domain, so drug manufacturers could compete on the open market for the cheapest way to produce the drugs, much as application developers leveraged IBM’s open architecture.  As Baker observes, this isn’t the only approach, but it certainly is worth trying, considering how remarkably wasteful the current system is.  Since the government confers patents and copyrights for the public good, the government could ostensibly leverage other instruments to promote “the Progress of Science and useful Arts.”

Next time, we’ll consider Baker’s arguments on bankruptcy, torts, and takings.

Shyam Kirti Gupta and Shyam Kelly Gupta contributed to this article.