Algo-Stats Special : On the Responsibilities of Technologists

In April of last year, I wrote a piece on the growing crisis in the misapplications of technology in my line of work: algorithms, machine learning, optimization, and data science.  It seems appropriate to include the discussion here on my activist blog.



Slagle, N.P. “On the Responsibility of Technologists : A Prologue and Primer,” Algo-Stats, 2018-04-15.

A special thank you to S. Kelly Gupta for invaluable suggestions, and to George Polisner and Noam Chomsky for taking the time to read an earlier draft and offer encouraging feedback.

A Casting Call for the Conscientious Data Practitioner

For some time now, I’ve planned on writing an article about the very serious risks posed by my trade of choice, data science.  And with each passing day, new mishaps, events, and pratfalls delay publishing, as the story evolves even as I write this.  For instance, Mark Zuckerberg testified before a joint session of the Senate Judiciary and Commerce committees this week, sporting a smart suit and a booster seat, ostensibly to improve morale.  Though some interesting topics came up, the discussion was routine, with the requisite fear-mongering from Ted Cruz, a bumbling Orrin Hatch asking how money can come from free things (apparently he forgot to ask Trump about withholding pay from blue-collar contractors), and a few more serious people asking about Cambridge Analytica : Kamala Harris queried the lengthy delay in Facebook notifying users about Cambridge, and, surprisingly, John Kennedy panned Facebook’s user agreement as “CYA” nonsense.

The tired public relations newspeak of the mythical well-meaning, self-regulating corporation happily accompanies the vague acknowledgements of responsibility we heard from Zuckerberg, along with references to proprietary and thus unknowable strategies almost in place.  And though I doubt Congress in its current state can impose any reasonable regulations, nor would those in charge be capable of formulating anything short of a lobbyist’s Christmas list, my intention here is to argue for something more substantial : a dialog must begin among technologists, particularly data practitioners, about the proper role of the constructs we wield, for those constructs are powerful and dangerous.  And it isn’t just because a Russian oligarch might want Donald Trump to be president, or because financial institutions happily risk economic collapse at the opportunity to make a few bucks; data has the power to confer near omnipotence on the state, generate rapid, vast capital for a narrow few at the expense of the many, and provide a scientifically-sanctioned cudgel to pound the impoverished and the vulnerable.  Malignant actors persist and abound, but complacency among the vast cadre of well-intentioned technologists reminds me of Martin Luther King, Jr.’s discussion of the “white moderate who is more devoted to ‘order’ than to justice.”  So I must clarify that I’m writing not to the bad people, who already understand quite well the stakes, but to my fellow conscientious practitioners, particularly those among us who fear consequences to career or suffer under the peculiar delusion that we have no power.  Consequences are real, but we as technologists wield great power, and that power is more than additive when we work together.  The United States is unusually free, perhaps in the whole of human history, in that we can express almost any idea with little or no legal ramification.  Let’s use that freedom together.

A Lasting Legacy : Power and Responsibility

Fifty-one years ago last February, Noam Chomsky authored a prescient manifesto admonishing his fellow intellectuals to wield the might and freedom they enjoy to expose the misdeeds and lies of the state.  Much of his discussion dwells on the flagrant dishonesty of particular actors as their public pronouncements evolved throughout the heinous crime that is the Vietnam War, and more recent discussions, such as one appearing in Boston Review in 2011, describe the significant divide between intellectuals stumping for statism and the occasional Eugene Debs, Rosa Luxemburg, or Bertrand Russell:

The question resonates through
the ages, in one or another
form, and today offers a
framework for determining the
“responsibility of intellectuals.”
The phrase is ambiguous: does it
refer to intellectuals’ moral
responsibility as decent human
beings in a position to use their
privilege and status to advance
the causes of freedom, justice,
mercy, peace, and other such
sentimental concerns? Or does it
refer to the role they are expected
to play, serving, not derogating,
leadership and established institutions?

We technologists, a flavor of intellectuals, have ascended within existing institutions rapidly, for fairly obvious reasons.  More specifically, those of us in data science are enjoying a bonanza of opportunities, as institutions readily hire us in record numbers to sort out their data needs, uniformly across the public, private, good, bad, large, small dimensions.  We’re inheriting remarkable power and authority, and we ought approach it with respect and conscience.  Data, though profoundly beneficial and dangerous, is still just a tool whose moral value is something we as its priesthood, if you will, can and ought determine.  Chomsky’s example succinctly captures how we should view it :

Technology is basically neutral.
It's kind of like a hammer.
The hammer doesn't care whether
you use it to build a house or
crush somebody's skull.

We can ascribe more nuance, with mixed results.

Data is Good? Evidence Abounds

I suspect I’m preaching to the choir if I remark on the impressive array of accomplishments made possible by data and corresponding analyses.  I believe the successes are immense and plentiful, and little investigative rigor is necessary here in the world of high tech to note how our lives are bettered by information technology.  Woven throughout the many successes, more subtly to the untrained eye than I or similar purists would prefer, is statistics, and the ensuing sexy taxonomy of machine learning, big data, analytics, and myriad other newfangled neologisms.  The study of random phenomena has made much of this possible, and I’d invite eager readers to take a look at C.R. Rao’s survey of such studies in Statistics and Truth.

I’m in this trade because I love it, I love science, I love technology, I love what it can do for you and me, and I’m in a fantastic toyland which I never want to leave.  So I must be very clear that I am no Luddite, nor would I advocate, except in narrow cases (see below), technological regression; the universal utility of much of what has emerged from human ingenuity has served to lengthen my life, afford me time to do the work I want, and make me comfortable.  Though the utility is so far very unevenly shared, I do believe we’ve made tremendous progress, and the potential is limitless.  So I’d entreat the reader potentially resistant to these ideas to brandish Coleridge’s “willing suspension of disbelief,” then judge for oneself.  My primary objective here is to begin a dialog.  Now for some of the hard stuff.

Data is Bad? There is Evil, and There Are Malignant Actors

The evils of technology are also innumerable, as the very large and growing contingent of victims of drone attacks, guns, bombs, nuclear attacks and accidents, and war in general will attest.  A full survey of the risks of technology lies far beyond the current scope, but it’s worth paying attention to the malignant consequences of runaway technology.  I’ll be reviewing Daniel Ellsberg’s The Doomsday Machine on my other blog soon; suffice it to say the book is good, the story awful.  It is a sobering, meticulous analysis of the most dangerous technology ever created, and of how reckless and stupid its planners were in safeguarding it.  Here, we’ll stick to problems arising from bad data science and bad actors, be they ideologues, the avaricious, the careless, or the malevolent.

We ought consider momentarily the current state of affairs : Taylor Armerding of CSO compiled the greatest breaches of the current century, attempting to quantify the damage done in each case.  Since the publication of his summary, the Cambridge Analytica / Facebook scandal has emerged, sketching a broad “psychographic” campaign to manipulate users into surrendering priceless data and fomenting discord.  Quite dramatically, a 2016 memo leaked from within Facebook shows executive Andrew Bosworth quipping,

[m]aybe it costs a life by
exposing someone to bullies[;]
[m]aybe someone dies in a
terrorist attack coordinated
on our tools[...] [a]nd still
we connect people.

In other words, “don’t bother washing the blood off your money as you give it to us.”  Slate offers an interesting indictment of the business model that has rendered the exigencies of data theft, content pollution, and societal discord concrete, imminent contingencies.  Most recently, Forbes reports that an LGBT dating app called Grindr apparently permits backdoor acquisition of highly sensitive user data, endangering users and betraying their physical location.  And the first reported fatality due to driverless technology deployed by Uber occurred in Arizona this month, generating a frenzy of concerns around the safety and appropriateness of committing these vehicles onto public roads.  The reaction I noted on the one social media platform I use, LinkedIn, was tepid, ranging from despairing emoticons to flagrant, arrogant pronouncements that this is simply the cost of the technology.  I also observed a peculiar response to those unhappy about the lack of security around user data : blame the victims.  The responses range from the above declaration of the cost of convenience to disdain for the lowly users in need of rescue from boredom, as offered by one employee of the research firm Gartner :

let's be honest about
one thing: we all agree that
we give up a significant part
of our privacy when we decide
to create an account on Facebook[;]
[w]e exchange a part of our private
life for a free application that
prevents us from being bored most
time of the day.

I’d refer this person to Bosworth’s memorandum, though he, like CNN in 2010, likely hadn’t seen it before venturing such drivel.  I interpreted their argument as a public relations vanguard aimed at corporate indemnification.  Certainly, an alarming number of terms and conditions agreements aim to curtail class action lawsuits and, where legal, eliminate all redress through the court system.  On its face, this sounds ludicrous, as the court system is precisely the public apparatus for resolving civil disputes.  Arbitration somehow is a thing, with Heritage and concentrations of private power reliably defending it as freer than the public infrastructure over which citizens exercise some control, however meager.   Sheer genius is necessary to read

[n]o one is forced into arbitration[;]
[t]o begin with, arbitration is not
“forced” on consumers[...] [a]n obvious
point is that “no one forces an
individual to sign a contract[,]”

and interpret it any other way than that the freedom to live without technology is a desirable, or even plausible, arrangement; Captain Fantastic, anyone?

Maybe it’s a question of volume, as catechismic, shrill chanting that we have no privacy eventually compels educated people to write the utter nonsense above.  Advancing the argument further, it’s akin to blaming the victims of the engineering flaws in Ford’s Pinto; after all, the car rescues the lower strata of society from having to walk or taxi everywhere they want to go, and death by known engineering flaws is the cost of doing business.  The arrogance evokes Project SCUM, the internal designation for a Camel cigarette marketing campaign tobacco giant R.J. Reynolds aimed at gays and the homeless in San Francisco in the 1990s.

Governments cause even greater harm, exhibited in Edward Snowden’s whistleblowing on the NSA’s pet project to spy on you and me, code-named PRISM.  Comparably disconcerting, Science Alert reported this week that the development of drone technology leaving target acquisition in the control of artificial intelligence is almost complete, meaning drones can murder people using inscrutable and ultimately unaccountable data models.  State-of-the-art robotic vision mistakes dogs for blueberry muffins in anywhere from one to ten percent of static images analyzed, depending on the neural network model, meaning a drone hunting a muffin would, one to ten percent of the time, destroy a dog instead, and this is on static imagery!  Imagine the difficulties in a dynamic field-of-view, with exceedingly narrow time windows in which to catch errors.  Human-controlled drones already represent enormous controversy, operating largely in secret without legislative or judicial review under the direction of the executive branch of the American government.  Who must answer for a runaway fleet of drones?  What if they’re hijacked?
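
As a back-of-the-envelope illustration (my own sketch, not drawn from any cited study), a small per-frame error rate compounds quickly over the many frames of a video feed.  The one-to-ten-percent figure is the static-image range mentioned above; the frame counts, and the simplifying assumption that each frame is an independent look, are mine:

```python
# Sketch: probability of at least one misidentification across many
# frames, assuming (unrealistically) independent looks per frame.
# Real frames are correlated, so treat this as a rough upper bound.

def p_at_least_one_error(per_frame_error: float, frames: int) -> float:
    """P(at least one error in `frames` independent looks)."""
    return 1.0 - (1.0 - per_frame_error) ** frames

for p in (0.01, 0.10):            # one to ten percent, per the text
    for frames in (1, 30, 300):   # a single shot vs. seconds of video
        print(f"p={p:.2f}, frames={frames}: "
              f"{p_at_least_one_error(p, frames):.3f}")
```

Even at the optimistic one-percent end, a few hundred independent looks push the chance of at least one misidentification above ninety percent, which is the point of the dynamic field-of-view worry.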

More locally, The Guardian recently unmasked the racist facial recognition models deployed by law enforcement agencies, bemoaning the existence of “unregulated algorithms.”  I’d wager the capability to reverse-engineer a machine learning model to steal private data receives great attention among adversarial actors and private corporations.  I remember, in my first job many years ago, a discussion over an accidental leak of a few lines of FORTRAN to a subcontractor, to which I naively queried, “Why are we in business with someone we think would steal from us?”  A manager calmly replied that anyone and everyone would steal, and in any way they can.  Maybe it’s true, but I’d like to believe there’s more to countervailing passive resistance than meets the eye.  In any case, data science and artificial intelligence are tools co-opted for sinister and dangerous purposes, and we ought try to remember that.

Data is Ugly? Errors and Injustice, Manned and Unmanned

Data needs no bad actor or vicious intent to be misleading.  Rao refers to numerous unintentional examples of data misuse within the scientific record, peppered throughout the works of luminaries such as Gregor Mendel, Isaac Newton, Galileo Galilei, John Dalton, and Robert Millikan, as documented by geneticist J.B.S. Haldane and by Broad and Wade’s Betrayers of the Truth.  For instance, the precision Newton reported for the gravitational constant was well beyond his capacity to measure, and Mendel’s recorded data agree with his genetic models far more closely than chance would plausibly allow, suggesting either transcription errors or blatant cherrypicking.  Rao notes

[w]hen a scientist was
convinced of his theory,
there was a temptation to
look for "facts" or distort
facts to fit the theory[; t]he
concept of agreement with theory
within acceptable margins of
error did not exist until the
statistical methodology of 
testing of hypotheses was
developed.

That is, statistical illiteracy can only compound the problem of “fixing intelligence and facts around the policy,” to paraphrase the infamous Downing Street Memo.
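
The methodology Rao credits can be sketched in a few lines : a chi-square goodness-of-fit test comparing observed counts against Mendel’s theoretical 3:1 dominant-to-recessive ratio.  The pea counts below are invented for illustration; only the 3:1 prediction and the standard 5% critical value are real:

```python
# Pearson's chi-square goodness-of-fit statistic: sum of (O - E)^2 / E.
# Hypothetical counts tested against Mendel's theoretical 3:1 ratio.

def chi_square_stat(observed, expected):
    """Chi-square statistic for observed vs. expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [740, 260]                  # hypothetical dominant/recessive counts
n = sum(observed)
expected = [n * 0.75, n * 0.25]        # Mendel's 3:1 prediction

stat = chi_square_stat(observed, expected)
print(f"chi-square = {stat:.3f}")      # prints 0.533
# 0.533 < 3.841, the 5% critical value at 1 degree of freedom, so these
# counts agree with the theory "within acceptable margins of error."
```

Data agreeing with theory far *below* the expected chi-square, consistently across many experiments, is precisely what made Haldane and later readers suspicious of Mendel’s records.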

Statistical literacy doesn’t guarantee good outcomes, even with honest representation.  Data can reinforce wretched social outcomes by identifying the results of similar failed policies of the past.  For instance, everyone knows African Americans are more likely to be harassed by police.  Thus, they’re more likely to be arrested, indicted, charged, and convicted of crimes.  Machine learning algorithms identify outcomes and race as significantly interdependent, and new policy dictates that police should carefully monitor these same people.   Asking why we ought trust an inscrutable model is unmentionable, reminding me that earlier propagandists invoked the “will of God” as justification for slavery, and later, the “free market” requires that some people be so poor that they starve.  Maybe elites always require some ethereal reason for the suffering we permit to pass in silence.  Anecdotally on racism, a myopic cohort once pronounced triumphantly to me that racists aren’t basing their prejudice on skin color, but on other features correlated with skin color.  The Ouroboros, or some idiotic variant, comes to mind.
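
The feedback loop just described can be caricatured in a few lines; the numbers are entirely hypothetical, and the point is structural : if patrols follow past recorded arrests and recorded arrests follow patrols, an initial imbalance persists indefinitely, even when the underlying offense rates are identical:

```python
# Toy sketch of a predictive-policing feedback loop (all numbers
# hypothetical). Both districts have the SAME true offense rate, but
# patrols are allocated by past recorded arrests, and what gets
# recorded depends on where police look.

TRUE_RATE = 0.1            # identical true offense rate in both districts
POP = 1000                 # population per district
recorded = [60.0, 40.0]    # historical arrest imbalance (hypothetical)

for year in range(10):
    total = sum(recorded)
    patrol_share = [r / total for r in recorded]   # model-driven allocation
    # recorded arrests scale with patrol presence, not with offending
    recorded = [POP * TRUE_RATE * 2 * p for p in patrol_share]

share = recorded[0] / sum(recorded)
print(f"district 0's share of recorded arrests after 10 years: {share:.2f}")
# prints 0.60 -- the 60/40 split persists; the data never self-corrects
```

The model’s predictions keep “validating” themselves : the district patrolled more heavily produces more arrest records, which justify more patrols, with no mechanism by which the equal true rates could ever surface.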

Weapons of Math Destruction : Destructive Models

Cathy O’Neil, in Weapons of Math Destruction (WMDs), ponders such undesirable social outcomes of big data crippling the poor and the disadvantaged.  Within the trade, dumb money describes the proceeds mined and fleeced from vulnerable populations : real estate to be reverse-mortgaged, poverty and veteran status to be leveraged for education grants and loans, and the desperation behind title loans, payday loans, and other highly destructive financial arrangements.  Myriad examples of startups and firms abound : for-profit education firms like Vatterott and Corinthian Colleges target veterans and the poor to cash in on student loans, while enabling advertising firms such as Neutron Interactive post fake job ads to cull poor people’s phone numbers and blast them with exaggerated ads.  Thinktank Learning and similar firms model student success, helping universities and colleges game the U.S. News and World Report ranking system, a perfect example of a WMD.  CompStat and HunchLab help resource-starved police departments profile citizens based on geography, mixing nuisance crimes with the more violent variant and strengthening racial stereotypes.  Courts now rely on opaque models to assess the recidivism risk of convicts, determining sentences accordingly, according to a piece in Wired last year.  Ought we understand the reasons why two criminals convicted of the same crime receive different sentences?  The book is very much worth a read.  Her own journey is revealing, having been an analyst at D.E. Shaw around the time of the market crash.

Data has accumulated over the years that ETS’s prized Graduate Record Examination (GRE), a test required for candidacy in most American graduate programs,

  • has disproportionately favored the white, the rich, and the male, (sounds like a WASP daytime drama),
  • may not be all that useful for prediction, and
  • operates in darkness, inscrutably like many such “psycho-social” metrics.

My own personal experience with the examination is kind of interesting and comical : I’m apparently incapable of writing.  Being a southpaw, my penmanship is atrocious, but I seem to remember having typed the essay… Kidding aside, acquiring feedback from ETS was impossible, and they led me to believe that the essay receives its grade via an electronic proofreader.  I guess no one remained who could interpret the algorithm’s outputs.

A more serious question O’Neil raises is that machine learning models suffer many of the same biases and preferences borne by their architects; I think of ETS reinforcing malignant stereotypes, a kind of “graduate ethnic cleansing.”  Algorithms running for TitleMax target the poor, making them poorer still.  More seriously, what are these models trying to optimize, and is it desirable behavior?

The Problem of Proxies

O’Neil offers that part of the problem with building opaque data models to inform real world decisions is that the real world objective we’d like to improve is poorly proxied : unsuitable substitutes stand in for the quantities we actually care about.  For instance, how can an algorithm quantify whether a person is happy?  Happiness is something we all seem to understand (or think we do), and we can generally spot it or its shaded counterpart with little effort.  Millions of years have chiseled, then kneaded the gentle ridges of the prefrontal cortex to lasting import.  Algorithms might read any number of interesting features, and unlike consciousness itself, I suspect happiness, or at least its biological underpinnings, is something an algorithm could predict, but any definition suffers limitations.  My earliest intuitions in mathematics led me to believe that any state can be reproduced with sufficient insight into the operating principles.  Though the academy has largely reinforced what I used to call the “dice theory” (and I was all-too-proud to have dreamed it up myself), Galileo lamented centuries ago, as have others more recently, including Hume, Russell, and Chomsky, that the mechanical philosophy simply isn’t tenable.  More narrowly, we may be incapable, as we are now, of effectively proxying very important soft science social metrics.  I believe misunderstanding this may be fueling the insatiable appetite of start-up funding for applications lengthening prison sentences, undercutting college applicants, burdening teachers with arbitrary, easily falsified standards, bankrupting the poor, and harassing and profiling the most vulnerable.  Is society better off with young black men fearing to walk the street at night with the justified concern of being murdered?

A striking example of poor proxying is invoking the stock market as the barometer of the economy, something I see in social media time and time again.  Missing from the euphoria is that for nearly fifty years, the Gini index has been positively correlated with the S&P 500, the former measuring economic inequality and the latter indexing the “health” of the stock market.  That is, as the stock market becomes healthier, the distribution of the money supply drifts away from the uniform.  Not coincidentally, this behavior seems to begin right around the Nixon shock, the deregulation of finance and the dismantling of Bretton Woods.  In his 2006 book The Conservative Nanny State, economist Dean Baker discusses the “perverse incentives” of maximizing incorrect proxies in patent trolling, wasteful copycat drug development, and the like.  The U.S. Constitution grants copyright and patent protection to promote the progress of science, a purpose contravened when sixty percent of research and development money is wasted on marketing and replicated research.
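
For readers unfamiliar with the Gini index itself, here is a minimal sketch of its computation.  The incomes below are invented for illustration; reproducing the S&P correlation would of course require the real historical series:

```python
# Minimal Gini coefficient: 0 = perfect equality, -> 1 = total
# concentration. Computed via the standard sorted-rank formula.

def gini(incomes):
    """Gini coefficient of a list of non-negative incomes."""
    xs = sorted(incomes)
    n = len(xs)
    mean = sum(xs) / n
    # sum of (2i - n - 1) * x_i over sorted incomes, i = 1..n
    cum = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return cum / (n * n * mean)

print(gini([1, 1, 1, 1]))      # prints 0.0: perfectly uniform
print(gini([0, 0, 0, 100]))    # prints 0.75: one holder has everything
```

The second value matches the theoretical (n-1)/n limit for one person holding all income among n = 4, which is the “drift away from the uniform” the index captures.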

Even in a seemingly more innocuous setting, say social media, we see deep problems in proxies.  Shares and likes become the currency of interaction, and social desirability need not interfere for most.  I’ve noticed in my own experiences writing comments online that a frenetic vigilance overcomes me if I feel I’ve been misunderstood or have given the wrong sort of offense, as I’m (perhaps pathologically) hardwired to care about the feelings of others.  By interacting online rather than in person, a host of nonverbal cues and information are absent, forcing us to rely on very weak proxies.  Psychology Today touched on this in 2014, and I suspect the growing body of evidence that flitting, vapid interactions online are damaging social intelligence demonstrates that the atomization of American culture is in no way served by social media.

Admittedly, the story seems dire, but belying the deafening silence is a groundswell of conscientious practitioners, fragmented and diffuse, but pervasive and circumspect.

The Courage to Speak

When I discuss any of the above with cohorts privately, a very large fraction agree on the dangers of misusing this technology; the reflexive response, though, is a habituated and incorrect resignation, especially in America, where illusory impotence reigns supreme.  And so I see very little in the way of commentary on these issues from tradespersons themselves, though a handful from my network are reliable in discussing controversy.  Perhaps the psychology is simpler : is it fear of blowback and risks to career of the kind Eugene Gu is experiencing with Vanderbilt?  Certainly even popular athletes face blacklisting, Colin Kaepernick being an exemplar.  Speaking out is risky, but silence strengthens what Chomsky calls “institutional stupidity,” which some of the above quotes embody.

The point I’m trying to drive home is that the responsibility we technologists bear demands an end to controversy aversion; we simply MUST begin talking about what we do.  Make no mistake, the ensuing void of silence emboldens demagoguery in malignant actors : witness the aforementioned projections on unmanned, computer-controlled drone warfare, further deterioration of the criminal justice system, exploitation of the poor and vulnerable, and the wrecking of the global economic system.  Further, refusing to speak out assures a platform for desperately irresponsible, dangerous responses of blaming or ridiculing the victims, a sort of grinding of salt into the wounds.  Consider the extreme variant of the latter : Rick Santorum, Republican brain trust, has sagely admonished school shooting survivors to learn CPR rather than protest and organize to demand safety, and Laura Ingraham, shrill, imbecilic Fox host, has gleefully tweeted juvenile insults at one of the outspoken survivors.  Why would we relegate damage done by runaway data science to the cost of doing business, when we can clearly perceive the elitism and cynicism in the above?  Silence may seem safe, but is it really?  Ignoring sharpening income inequality, skyrocketing incarceration rates, and stratification and segregation has a cost : the Trumps of the world become leaders, the downtrodden turning to demagogues.

The Coming Storm Following the Dream

With each public relations disaster and each discovery of flagrant disregard for users and their precious private data, we hurtle toward what I believe is an inevitable series of lawsuits and criminal investigations leading to public policy we ought to help direct.  C.R. Rao wrote some years ago regarding a lawsuit against the government for failing to act to save fishermen from a predictable typhoon, the plaintiffs’ chief claim being that the coast guard had failed to repair a broken buoy :

[s]uch instances will be rare,
but none-the-less may discourage
statistical consultants from
venturing into new or more
challenging areas and restrict 
the expansion of statistics.
[emphasis mine]

The General Data Protection Regulation (GDPR), enacted by the European Union, is perhaps the broadest such framework ratified by any national or supranational body.  This coming May, it supersedes the Data Protection Directive of 1995.  The US government has regulated privacy and data with respect to education since 1974 with FERPA and medicine since 1996 with HIPAA.  Yet court precedent hasn’t yet determined the interpretation of these acts with respect to machine learning models built on sensitive data.  What will an American variant of GDPR look like?  Practitioners ought have a say, and the more included in the discussion, the better the outcome.  But this sort of direction requires coordination, and because of the unique and difficult work we do, we are fractured from one another and more susceptible to dogmatism around the misnamed American brand of libertarianism.  The American dream is available to technologists (and almost no one else), whence a rigidity of certain non-collectivist values, enumerated in a study conducted by Thomas Corley for Business Insider : the rub is that wealthy people believe very strongly in self-determination, and assume they are responsible for their good fortune.  I think of it as the “I like the game when I’m winning” phenomenon, and like most deep beliefs, some kernel of truth is there.  We could spend considerable time just debating these difficulties, and my being married to a psychiatrist offers uncomfortable insight.  In any case, discussions surrounding this are ubiquitous, and my opinions, though somewhat unconventional, are straightforward.  Historically, collective stands are easier to make and less risky than standing alone.  In semi-skilled and clerical trades, we called these collections “unions.”  Professional societies such as the AMA, the ASA, the IEEE, and so on are the periwinkle-to-white collar approximations, with the important similarity that collectively asserting will simply works better.  And yet we in data science have little in the way of such a framework.  It’s worth understanding why.

Cosmic Demand Sans Trade Union

The skyrocketing demand for new data science and machine learning technology, together with a labor dogmatism peculiar to the United States, has left us, so it would seem, without a trade union of our own, independent of corporations and responsible for governing trade ethics and articulating public policy initiatives.  Older technology trades have something approximating a union in professional societies such as the IEEE and the American Statistical Association; like the American Medical Association and the American Psychological Association, these agencies offer codes of ethical practice and publications detailing the latest comings and goings in government regulation, technology, and the like.  Certainly, the discussion occurs here and there, though Steve Lohr’s 2013 piece in the New York Times summarizing a panel discussion at Columbia hinted at a common refrain in our trade:

[t]he privacy and surveillance
perils of Big Data came up only
in passing[...] during a
question-and-answer portion of
one panel, Ben Fried,
Google’s chief information
officer, expressed a misgiving[:]
“[m]y concern is that the
technology is way ahead of society[.]

That is, we all know we have a problem, but little is happening in the way of addressing it.  A smattering of public symposia have emerged on certain moral considerations around artificial intelligence, though much of what is easily unearthed consists of older articulations by Ray Kurzweil and Vernor Vinge, and older still those by Isaac Asimov.  These often take the form of dystopian prognostications of robot intelligence, though I agree with Chomsky that we’re perhaps light years away from understanding even the basic elements of human cognition, and that replicating anything resembling it is not on the horizon.  Admittedly, my starry-eyed interest in Kurzweil’s projected singularity is what pulled me into computer science, but Emerson warns us that intellectual inflexibility belongs to small minds.  Fear-mongering about the future brings me to a spirit we ought exorcise early and often.

Unemployment and Automation : A New(ish) Bogeyman

No discussion of the impact of our technology would be complete without paying a little attention to the fevered musings and catastrophization of mass unemployment due to automation.  We as a society of technologists ought have a simple answer to this, namely that the post-industrial revolution mindset of compulsory employment as monetized by imagined market forces is illogical, inefficient, and unnecessarily dangerous to who we are and what we do.  Even less charitably, slavish genuflection to free market mania is an obstacle, rather than a catalyst, to progress, as the complexities of civilization necessitate a more nuanced economic framework.  Though we’d need another article or so to better justify the foregoing, I’ll skip to the conclusion : we must restore and strengthen public investment in technology, democratically and transparently, casting off militarization and secrecy.  A good starting place is the realization that virtually all high tech began in the public sector, a model that serves both society and technologists, and one that organically nurtures trade consortia of the variety described above.  In any case, the principal existential threats we face have nothing to do with mass unemployment, though thwarting those threats, nuclear proliferation and catastrophic climate change, might require mass employment.

Triage and Final Thoughts

Answering these current events demands responsible, courageous public discourse, appropriately supporting victims and formulating strategies to avert the totally preventable disasters above.  We should organize a professional society free of corporate, and initially governmental, interference, composed of statisticians, analysts, machine learning scientists, data scientists, artificial intelligence scientists, and so on, so that we can, in conference,

  • collectively educate ourselves about the ramifications of our work, for instance by reading trade specialists such as O’Neil,
  • jointly draft position papers on requests for technical opinions by government and supranational organizations, such as a recent request from NIH,
  • dialog openly about corporate malfeasance,
  • draft articles scientifically explaining how best to regulate our work to safeguard and empower the public (eloquently stated in Satya’s mission statement),
  • exchange ideas and broaden our trade perspective,
  • collectively sketch safe, sensible guidelines around implementations of pie-in-the-sky technology (such as self-driving cars), and
  • strategize how to redress public harm when it happens.

A few technologists, such as George Polisner, have very publicly taken stands against executive docility with respect to the Trump administration; his building of the social media platform civ.works is a great step in evangelizing elite activism, and it offers, of course, privacy guarantees no data company will match.  Admittedly, we need not all surrender positions in industry in order to address controversy, but we can and must talk to each other.  Talk to human beings affected by our work.  Talk to our neighbors.  Talk to our opponents.  The ugly legal and political fallout awaiting us is really just a hapless vanguard of the much more dangerous elite cynicism and complacency.  How do we ready ourselves for tomorrow’s challenges?  It begins with a dialog, today.

The Conservative Nanny State : A Book Review Part Four : Demonized Unions and Glorified Patents

Continuing our series analyzing Dean Baker’s The Conservative Nanny State, we’ll touch on a few key features quite effective in funneling wealth upward with no obvious systemic advantage : the undercutting of collective bargaining and the bestowal of monopoly status on intellectual property.  Baker argues, astutely, that neither of these features really makes sense in a free market system, as collective bargaining is a market-based strategy for assuring at least a living wage for tradespersons vying for limited jobs, and government-conferred monopolies are illogical when producing, say, a life-saving drug is incredibly cheap.

Repeat After Me : Unions are Evil, Unions are Evil…

Baker touches briefly on elite hostility to organized labor for mid-to-lower-income tradespersons, arguing that it’s an important feature of the conservative nanny state.  It’s certainly easy to see why, as trade unions, as we’ve discussed previously, generated most of the benefits we derive from employment, including paid holidays, vacation, healthcare, weekends off, and the like.  Yet the prevailing sentiment is often quite negative, as Gallup has documented since 1936.  Even in my own work experience I have witnessed the effects of this propaganda.  While working for the aforementioned defense contractor, I remember a strike executed by union members when the parent company chose to slash benefits.  Coworkers scoffed at and mocked the picketers, bemused by the scabs and the internal contortions required to cover the labor loss.  I heard internally that an upper-level manager actually physically assaulted one of the picketers after a heated exchange.  The strike failed, and the union workers ended up with a worse benefits package than the one originally offered : a remarkable victory for anti-unionists among the elites.

My own personal experiences in corporate America offer further revealing data regarding elite hostility toward unionization : working for corporate Uber and then Amazon, I encountered many of the low-wage employees (dubiously mislabeled as independent contractors) among the drivers, cabbies in the case of Uber and delivery drivers in the case of Amazon.  I met probably seventy drivers while working for Uber, as the company would spring for free Uber rides home if I remained in the office past ten o’clock at night.  Though the drivers were understandably reticent to discuss their opinions on Uber’s downward pressure on their wages with me, a corporate employee at the time, I generally could ease them into opening up after sharing America’s long labor history with them.  The picture was universally bleak : living, breathing people trying to survive sharp increases in the cost of living in San Francisco found themselves in a harsh, highly competitive trade with a quite hostile corporate sponsor.  Uber would routinely fire drivers with little or no warning, based on an arbitrary rating system offering little means of disputing a bogus negative rating.  Uber also sharply cut these drivers’ wages.  The picture among Amazon drivers was very similar : no benefits and fast firings were the law of the jungle, true even in more liberal democracies such as the United Kingdom.  I informed virtually all of the drivers I met that the only proven means of driving wages upward is collective bargaining through unionization, something the drivers told me Uber harshly demonizes; see The Verge for a discussion of Seattle’s efforts to protect Uber drivers.

America’s sordidly violent labor history features an unusually sharp hostility toward trade unions for semi-to-unskilled labor, as they are harmful to profits.  A rather salient piece of the puzzle is the National Labor Relations Act (or Wagner Act) of 1935, conferring on private sector employees the right to organize unions and participate in collective bargaining; the National Labor Relations Board received special attention during my Uber employee orientation, as one of the chief legal officers lambasted the board as desperate bureaucrats hell-bent on squeezing money out of innocent drivers.  In remarkably effective legalese rhetoric, she argued that the NLRB is out of touch and irrelevant in a world where Uber drivers can nab a fortune driving; thus, classifying drivers as contractors is a charity.  Though she aptly described the experience some of the earlier “contractors” enjoyed, an unnervingly large fraction of latter-day drivers never managed to attain this golden driver’s seat.  Certainly, Uber represents something of a revolution in ride-sharing, but why not support one’s workforce?

Returning to the historical context, the Taft-Hartley Act of 1947 outlawed secondary strikes : strikes instigated by workers of one trade in solidarity with another trade’s ongoing strike.  You read that correctly : a painters’ union cannot legally strike in solidarity with carpenters participating in a union strike.  Though there is much to discuss on the topic of organized labor (and we’ll touch briefly on a few of Baker’s further points momentarily), suffice it to say the conservative nanny state mythology somehow manages to convince highly compensated workers not only that labor solidarity is unnecessary (the market argument), but that they themselves derive no protectionism from said nanny state or any other well-to-do analog of the trade union; the former is a remarkable feat of propaganda, and the latter Baker quite powerfully dismantles, as we discussed earlier.

Patent Trolls and Copyright Cows : The Geese Laying Golden Eggs

Baker turns attention to two extremely powerful, state-granted protections for individuals and corporations : patents and copyrights.  Again, conservative nanny state apologists might consider these instruments to be laws of nature, naturally forming optimal strategies in the fantasy land of free markets.  By contrast, Baker correctly describes them as “government-granted monopol[ies].”  That is, an agency, be it an individual, government, non-profit, or corporation, can apply for patent or copyright protection on an invention, idea, artistic expression, and so on, ensuring that agency time-limited monopolistic control over usage and sales.  The argument in favor of these anti-market practices is that they encourage innovation and creativity, generally socially positive notions.  In fact, the power derives directly from the U.S. Constitution : under Article I, Section 8, Congress has the power

[t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

This power owes to the guild and apprentice system of the Middle Ages, Baker explains, as a means of increasing innovation and scientific discovery.  Yet are these the optimal means of doing so?  Certainly, executives of Merck, Pfizer, Apple, Google, Amazon, and a lengthy list of other companies are quite wealthy.  But do these state-guaranteed monopolies efficiently generate innovation?  My own background includes an understanding of the evolution of software development, and the open source standard (free and open to the public) has grown tremendously in popularity in recent years.  Well known to software developers is the superior reliability of Unix-based operating systems relative to proprietary models.  It’s well-understood history that the biggest software firms in large part owe their success to IBM’s open PC architecture strategy, suggesting an open OS standard could have created a proliferation of competitive products both in basic kernel (OS) space operations and in the user space.  Though we have many advances now in personal computing, much of the game-changing advancement has occurred either in the state sector (discussed in previous posts) or in highly competitive, less monopolistic settings.

Baker describes an interesting economic parallel : dead-weight loss, the economic efficiency lost when patent-protected prices diverge from market-based prices.  He scoffs that his fellow economists find no fault with this loss with respect to pharmaceutical prices, despite their hostility toward the same loss incurred by tariffs.  Technical economics aside, Baker poses the critical question : are patents and copyrights the optimal instruments of their kind for encouraging and rewarding innovation?
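For readers who want the textbook picture behind the term, here is a minimal sketch under the standard linear-demand assumption (my notation, not Baker’s) : the dead-weight loss of a patent markup is the familiar welfare triangle

\[
\mathrm{DWL} \approx \tfrac{1}{2}\,\bigl(p_{\text{patent}} - p_{\text{market}}\bigr)\,\bigl(q_{\text{market}} - q_{\text{patent}}\bigr),
\]

where \(p_{\text{market}}, q_{\text{market}}\) are the competitive (generic) price and quantity and \(p_{\text{patent}}, q_{\text{patent}}\) their monopoly counterparts.  Because the quantity forgone itself scales with the markup, the loss grows roughly with the square of the price gap, which is precisely why economists object so strenuously to tariffs of even a few percent.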

To answer the question, Baker points to a highly controversial beneficiary of the patent system : the drug research lobby.  If we are to believe conservative nanny state apologists, he argues, the patent system should be the most capable protection for assuring innovation in medical advances and lifesaving technology.  Patents account for a factor-four multiplier in drug costs, meaning if a generic costs one dollar, the corresponding brand-name drug costs four dollars, according to the final Statistical Abstract of the United States, the 2012 edition.  (We could discuss the highly politicized, stupid decision to discontinue this long-running report published by the U.S. Census Bureau, but we’ll defer for now.)  As of the publishing date of the book, the factor was three, meaning the divide has since grown by thirty-three percent.  Pharmaceutical companies offer exactly the argument described above, despite large fractions of profits wasted on marketing and executive salaries.  Overall, Baker reports $220 billion in drug sales in 2004, confirmed by the aforementioned report.  By 2010, this number had grown to nearly $270 billion.
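As a quick sanity check on the arithmetic above (a throwaway sketch; the variable names are mine, the numbers Baker’s and the Statistical Abstract’s) :

```python
# Quick arithmetic check of the patent-markup and drug-sales figures cited above.
markup_then = 3.0   # brand-name/generic price ratio at the book's publication
markup_2012 = 4.0   # same ratio per the 2012 Statistical Abstract

# Growth in the brand/generic divide: (4 - 3) / 3 = one third, i.e. ~33%.
divide_growth = (markup_2012 - markup_then) / markup_then
print(f"divide grew by {divide_growth:.0%}")   # → divide grew by 33%

sales_2004 = 220e9  # reported U.S. drug sales, 2004 (dollars)
sales_2010 = 270e9  # reported U.S. drug sales, 2010 (dollars)
sales_growth = (sales_2010 - sales_2004) / sales_2004
print(f"sales grew by {sales_growth:.0%}")     # → sales grew by 23%
```

Note the thirty-three percent figure measures the growth of the markup, not of prices themselves; the sales figures, by contrast, are nominal and unadjusted for inflation.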

Because patent protection ensures higher drug prices than would otherwise be paid, literally millions of Americans each year skip medications to save money.  A 2015 report from Harvard Health Publications cites a survey by researchers Robin Cohen and Maria Villarroel finding that eight percent of all Americans fail to take medications as directed for lack of money.  As expected, older and less well-insured Americans missed dosages in higher numbers, but astonishingly, six percent of Americans with private insurance skimped on their medications.  That is to say, the private insurance system, adored by conservative nanny state apologists, forces Americans further into poverty and costs too much.  A 2012 report by The Huffington Post indicates that these pharmaceutical companies spend nineteen times as much on marketing as they do on research, suggesting that the huge windfall of patent protection isn’t really going to good use.

Baker points to an even more serious consequence of artificially ballooning prices : black market drugs.  A strategy comparable to “medical tourism,” discussed earlier, leads Americans to order potentially dangerous drugs from foreign countries.  This steady flow of both illegally and legally obtained medicines is completely expected under a system in which these millions of Americans self-report failing to take drugs for lack of money, a failure of the patent system.

Perhaps most damning is Baker’s argument with regard to copycat drugs, or drugs designed to mimic the behavior of a patented, available drug.  Pharmaceutical companies have discovered that hitching themselves onto the bandwagons of popular, patent-protected drugs of high import (such as allergy, diarrhea, and heartburn medications) is extremely lucrative.  That is, rather than invest money and energy in new lifesaving drugs and technologies, they try to replicate something in the mainstream by tweaking a few formulas.  As of 2004, two-thirds of all newly approved drugs in America were copycats, according to the Food and Drug Administration.  That leads to a startling number with regard to where the research money goes : sixty percent of research dollars go to such wasteful creations.  So sixty percent of medical research dollars, private and public, do not promote innovation at all, because of the patent system.  Other inefficiencies of said system appear in a 2015 report by the BBC : for instance, many drug companies employ “floors of lawyers” to fight in court for patent extensions, a strategy interestingly called evergreening.  Dr. Marcia Angell, former editor-in-chief of The New England Journal of Medicine, discussed in The Canadian Medical Association Journal drug companies copying their own drugs to extend patents, an example being Nexium and Prilosec, developed by AstraZeneca : the company hiked the price on the outgoing drug to migrate patients onto the incoming one, hoping to retain market share once the patent on the outgoing drug expired.

The aforementioned pair of drugs are examples of enantiomers : molecules identical in composition, each the mirror image of the other.  These arise naturally in the course of development, often with very similar physiological interactions; thus, the practice of patenting each separately is rather suspect.  In “Enantiomer Patents: Innovative or Obvious?,” appearing in the Pharmaceutical Law & Industry Report, Brian Sodikoff et al. discuss the legal standards for doing so, suggesting the patent system overly caters to the corporations.  A few other examples of double-dipping are Lexapro and Celexa, and Ritalin and Focalin.

It turns out that drug companies leverage several tricks in the spirit of the foregoing to stretch the lifetimes of patents, including

  • rebranding mixtures of existing drugs, such as Prozac and Zyprexa to obtain Symbyax,
  • morphing generic drugs into new drugs by adjusting dosages, such as Doxepin into Silenor,
  • repackaging an existing drug as is for a new purpose, such as Wellbutrin and Zyban, and Prozac and Sarafem,
  • creation of extended release variants of existing drugs by established mechanisms, such as Ambien and Ambien CR, and Wellbutrin and Wellbutrin XL,
  • changes of delivery mechanisms, such as Ritalin as a pill and Daytrana as a topical patch,

among others.  In each of these cases, big pharma manages to hike the price substantially, even when cheaper generics with adjustable dosages are available.  These corporations argue they should receive full patent protection as though they had devoted the same resources to researching the copycat as to developing a brand-new therapy from scratch, a preposterous claim.  What’s worse, drug reps, or prettified agents armed with high discretionary credit, routinely accost physicians, offering expensive samples and lavish luncheons for free; NPR reported earlier this year that drug rep interactions significantly increase the number of costly prescriptions written by doctors.  Though we could discuss these inefficiencies and contradictions further, we’ll leave it at that.

By the previous arguments, we can certainly begin to believe that patents and copyrights probably aren’t the most efficient means of promoting innovation, as Baker asserts.  So how does one promote innovation?  Baker suggests raising government investment in research, establishing a grant and prize system aimed at spurring innovation.  Researchers would strive toward successful development of lifesaving medical technology, competing jointly for grants to fund their work.  Upon successful innovation, they would receive prize money commensurate with the societal benefit.  Upon acceptance and approval, their contributions would enter the public domain, so drug manufacturers could compete on the open market to produce the drugs as cheaply as possible, much as application developers leveraged IBM’s open architecture.  As Baker observes, this isn’t the only approach, but it is certainly worth trying, considering how remarkably wasteful the current system is.  Since the government confers patents and copyrights for the public good, it could presumably leverage other instruments to promote “the Progress of Science and useful Arts.”

Next time, we’ll consider Baker’s arguments on bankruptcy, torts, and takings.

Shyam Kirti Gupta and Shyam Kelly Gupta contributed to this article.