AUTOMACENE RISING: The Perilous Political Economy of Artificial Intelligence

Greg Daneke, Emeritus Prof.

--

People worry that computers will get too smart and will take over the world. But the real problem is that they are too stupid and they’ve already taken over the world. -- Pedro Domingos, The Master Algorithm

Computers are like humans, they can do everything except think. -- John von Neumann

The development of computer programs that attempt to replicate human intelligence has exploded in recent years, driven by huge stores of behavioral data extracted and processed by the mammoth monopoly platforms (e.g. Google, Facebook, etc., as well as their Chinese counterparts). Fears of AI (artificial intelligence) abound; however, the science-fiction version, in which computers become sentient and realize we are planetary parasites deserving of extinction, is probably more than a bit far-fetched (except for the parasite part). The real problems of AI are much more subtle, complex, and manifold. They include at least the following:

1. Artificial General Intelligence (AGI), where iterative computer processes match (and surpass) human reasoning, remains as unreachable as it was several decades ago. Nevertheless, simple machine learning algorithms are replacing human judgement in a number of critical domains;

2. As the pace of displacement quickens, we are facing employment redundancies at an unprecedented scale, with significant portions of the population made completely irrelevant;

3. Further stratification of society into immobile castes will result from the ever-increasing application of algorithms in various gate-keeping domains (education, finance, criminal justice, etc.); and,

4. Institutionalized dehumanization will spread via over-reliance on the machine model of rationality in pseudoscientific fields (economics, psychology, administration, etc.), further undermining human freedom and dignity.

A Different Sort of McCarthyism

The quest for artificial intelligence began long before the term pricked the popular imagination, and it made significant strides via enhanced computational devices during WWII (à la Alan Turing). In its early days as an academic pursuit it was broadly interdisciplinary, including major biological and cognitive components (e.g. the neurosciences), in order to better understand actual human processing. However, at a key juncture it was overtaken by the machine model of intelligence via the influential work of computer scientists, especially Stanford’s John McCarthy and his manifesto, Programs with Common Sense. This influence has been amplified over the years by various stunts in which computer programs defeat human champions in games such as chess, Jeopardy, Go, and Texas Hold’em. Much of what we currently view as AI is actually the narrow branch known as machine learning (ML), particularly of the so-called “deep learning” variety. “Deep” in this case merely refers to multiple layers of interacting networks, not necessarily to profundity. Moreover, these back-propagated “neural networks” bear only a metaphorical similarity to the workings of actual human neurons.
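To make the “deep” point concrete, here is a minimal sketch (my own illustration in Python/NumPy, not drawn from any source cited here) of what such a network amounts to: a few stacked layers of weighted sums and squashing functions, trained by back-propagating errors via the chain rule. Nothing in it resembles a biological neuron beyond the metaphor.

```python
# A minimal "deep" network: stacked layers of weighted sums and sigmoids,
# trained by back-propagation (i.e., the chain rule), nothing more mysterious.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which a single layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights = one hidden layer; "deep" just means more of these.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: alternate linear maps and nonlinearities.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: propagate the output error back through each layer.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent updates.
    W2 -= 0.5 * h.T @ err_out
    W1 -= 0.5 * X.T @ err_h

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```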

Despite this narrowing of focus, the aspiration to, and over-promising of, AGI proceeds apace. Yet, like the real horizon, it continues to recede as we approach it. Recently, in the pages of the MIT Technology Review, Will Douglas Heaven pointed out that,

Back at the dawn of AI, artificial general intelligence was conceived as the ability to “make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” Half a century on, we’re still nowhere near making an AI with the multitasking abilities of a human — or even an insect.

We are, however, making machines thousands of times cleverer at computational tasks, and accelerating the replacement of human judgements. Furthermore, ML devices often produce heretofore undetectable associations, and are actually forging a new approach to scientific inquiry. Inductive and deductive model testing for strong and replicable causality may soon be competing with the convoluted “abduction” of conjunctive signal patterns from noisy data. A modelist model, as it were.
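A quick illustration of why that “abduction” deserves the scare quotes: the toy sketch below (my own, purely for illustration) searches thousands of random “features” for one that correlates with an equally random outcome, finds one, and then watches the discovery evaporate on fresh data.

```python
# With enough candidate features, some will correlate with the outcome purely
# by chance, and a naive pattern search will happily report them as discoveries.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 100, 2000

X = rng.normal(size=(n_samples, n_features))   # pure noise "signals"
y = rng.normal(size=n_samples)                 # an outcome unrelated to X

# Correlation of every feature with the outcome.
corr = (X - X.mean(0)).T @ (y - y.mean()) / (n_samples * X.std(0) * y.std())
best = np.argmax(np.abs(corr))
print(f"best of {n_features} noise features correlates at r = {corr[best]:.2f}")

# Re-test that "discovery" on fresh data: the association evaporates.
X_new = rng.normal(size=(n_samples, n_features))
y_new = rng.normal(size=n_samples)
print(f"same feature on new data: r = {np.corrcoef(X_new[:, best], y_new)[0, 1]:.2f}")
```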

Algorithm and Blues

Algorithms (the sets of rules by which computers approach a particular problem) are now omnipresent in all forms of human endeavor, and most of these lines of code are protected as proprietary trade secrets. Moreover, the problem of evaluating their presuppositions is compounded by ML devices that often rely upon a labyrinth of self-discovered rules. Most of us are now having our lives “red-lined” (before we have even had a chance to live them) by these invisible machine judgements. Individual abilities and aspirations can be wiped away by what I call cohort corralling. The results of tiny demographic miscalculations and spurious associational discoveries can become inexorable. Who gets into which schools, and who gets which loan at what interest rate, are now the domain of incomprehensible algorithms. Wait, it gets worse: who goes to jail, and for how long, is often determined by machines whose reasoning and built-in biases are completely opaque.
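To see cohort corralling in miniature, consider the following deliberately crude sketch (my own invention; no actual lender’s or vendor’s scoring code is implied, and the zip codes, categories, and weights are all hypothetical). The point is that group attributes can dominate the score, so two applicants with identical personal records land in different bins.

```python
# A deliberately crude gatekeeping score, to illustrate "cohort corralling":
# most of the decision weight sits on group attributes the applicant cannot
# change, so individual merit barely moves the outcome.
from dataclasses import dataclass

# Hypothetical cohort penalties a lender might have "learned" from past data.
ZIP_CODE_RISK = {"10001": 0.05, "48503": 0.35, "60621": 0.40}

@dataclass
class Applicant:
    zip_code: str
    employer_type: str     # "salaried" or "gig"
    on_time_payments: int  # the applicant's own record

def loan_score(a: Applicant) -> float:
    score = 0.7                                   # baseline
    score -= ZIP_CODE_RISK.get(a.zip_code, 0.10)  # cohort penalty dominates
    score -= 0.15 if a.employer_type == "gig" else 0.0
    score += min(a.on_time_payments, 24) * 0.005  # individual record, capped
    return score

def decision(a: Applicant) -> str:
    s = loan_score(a)
    return "approve at 6%" if s > 0.60 else "approve at 14%" if s > 0.45 else "deny"

# Two applicants with identical personal records, different cohorts.
print(decision(Applicant("10001", "salaried", 24)))  # approve at 6%
print(decision(Applicant("60621", "gig", 24)))       # deny
```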

The power of predictive algorithms is especially suspect. Researchers at Dartmouth recently demonstrated that novice students could out-predict the infamous COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm. This trademarked tool has been used for some time by several states to project recidivism and to make sentencing and parole decisions. Another prodigious study, headed up by Princeton sociologists and information scientists, involved 4,000 families (over 15 years) and nearly 13,000 data points, under the title Fragile Families and Childhood Wellbeing. It concluded, in the words of one AI expert, that “AI can’t predict how a child’s life will turn out even with a ton of data”. Perhaps AI will prove better at persecution than prediction. Beyond illustrating the amplification of racial, gender, and class biases, Virginia Eubanks of SUNY Albany documented how algorithmically assisted judgements can further trample the down-trodden via a process she calls “the digital poorhouse”.

In more visible realms, AI will continue to wreak havoc on already decimated labor markets. It will accelerate the displacement of significant portions of the dwindling middle class. Moreover, after sending us down a rung or two, it will probably keep us there, via the gatekeeping processes alluded to above. Robots will continue to make inroads in manufacturing, to the tune of another 20 million US jobs lost in the next few years. Less obvious, but more devastating, will be the algorithmic penetration of white-collar fields such as law, finance, and general administration. Even the rarified ranks of coders will be displaced by algorithms that re-write themselves. The already burgeoning “gig economy”, with its partially or precariously employed (without benefits), will swell. Worse yet, with so many rungs of the ladder removed, some will fall off completely. Yuval Harari suggests in his recent tome, Homo Deus: A Brief History of Tomorrow, that we will soon be faced with a growing “useless class” (virtually unemployable and/or immobile).

Machina-Economicus and Surveillance Capitalism

Any hope of averting these dystopian futures may lie with dramatically new economic understandings. Unfortunately, our ideologically befuddled (e.g. faux libertarian) economics profession, with its hopelessly outdated tools and concepts, is on the verge of a significant retrenchment through the application of certain aspects of AI. The prevailing schools of thought in economics (neoclassical and/or neoliberal) are riddled with misguided notions, but one of the most apparent is that of “homo-economicus” (the all-knowing, utility-maximizing individual). He is a neatly stamped “cookie man” with all the inconvenient dough of social reality tossed away. AI is about to reify him as a Frankenstein cookie monster. His arrival was heralded a few years back by the distinguished scholars Parkes and Wellman in the pages of Science:

The field of artificial intelligence (AI) strives to build rational agents capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo-economicus, the mythical perfectly rational agent of neoclassical economics.

I don’t think machina-economicus is what I and my Michigan colleagues had in mind when we were calling for “an artificial reality check for economists” back in the late 1970s. Our applications of “complexity theory”, via simulations involving heterogeneous agents with mixed motives (e.g. reciprocity), suggested a radically different model for economics, which I call Homo-complexicus. For a more detailed discussion, see my piece in the Real-World Economics Review. For now, suffice it to say that viewing the economy as a complex-adaptive system would expose the more egregious ideological errors of our current system, and perhaps even reduce our dependency upon certain neofeudal machinations, such as Ponzi finance regimes and “socialism for the rich and free enterprise for the poor”.
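For readers unfamiliar with that style of work, here is a toy sketch in the spirit of such simulations (my own reconstruction for illustration, not the original Michigan code): heterogeneous agents with mixed motives, some reciprocating and some purely opportunistic, repeatedly exchange, and aggregate outcomes emerge from interaction rather than from any representative rational maximizer.

```python
# A toy agent-based sketch in the spirit of complexity economics: heterogeneous
# agents with mixed motives play repeated exchanges; aggregate outcomes emerge
# from interaction and reciprocity, not from one representative rational agent.
import random

random.seed(1)

PAYOFFS = {  # prisoner's-dilemma style exchange payoffs (cooperate/defect)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

class Agent:
    def __init__(self, style):
        self.style = style    # "reciprocator", "opportunist", or "random"
        self.wealth = 0
        self.memory = {}      # what each partner did to us last time

    def choose(self, partner_id):
        if self.style == "reciprocator":          # tit-for-tat-like reciprocity
            return self.memory.get(partner_id, "C")
        if self.style == "opportunist":
            return "D"
        return random.choice(["C", "D"])

# A heterogeneous population rather than one utility-maximizing archetype.
agents = [Agent(s) for s in ["reciprocator"] * 12 + ["opportunist"] * 4 + ["random"] * 4]

for _ in range(2000):                             # repeated random pairings
    i, j = random.sample(range(len(agents)), 2)
    a, b = agents[i], agents[j]
    move_a, move_b = a.choose(j), b.choose(i)
    pay_a, pay_b = PAYOFFS[(move_a, move_b)]
    a.wealth += pay_a
    b.wealth += pay_b
    a.memory[j], b.memory[i] = move_b, move_a     # remember the partner's move

# Average wealth by behavioral type emerges from the pattern of interactions.
for style in ("reciprocator", "opportunist", "random"):
    group = [ag.wealth for ag in agents if ag.style == style]
    print(style, round(sum(group) / len(group), 1))
```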

Moving in the direction of these new economic understandings, however, also requires complexity-driven advancement on the political side of our political economy. Despite their deep entrenchment in the halls of power, mainstream economists tend to enforce a fake neutrality when it comes to politics. It was a large blob of the societal dough they pretended to toss away. Furthermore, AI not only aids in the charade, it provides powerful tools for autocratic forces to overwhelm democratic institutions. We have already witnessed how bad actors, armed with mountains of reprocessed data regarding our most intimate hopes and fears, can manipulate the electorate. Dirk Helbing and an august team (including Bruno Frey) recently speculated about the fate of democracy in the era of AI/Big Data in the pages of Scientific American, and they were not optimistic.

Once autocratic regimes gain a foothold, they have a whole new box of tools with which to tighten their grip. Harvard Business School professor Shoshana Zuboff has labeled our current epoch The Age of Surveillance Capitalism, with the subtitle “the fight for a human future at the new frontier of power”. Her lively 700+ page opus documents the vast monetization of what was once called “behavioral exhaust”. More importantly, she explains how the information monopolies justify the theft and distortion of our very essence by resurrecting the discredited ideas of B. F. Skinner. Skinner believed that his methods of control proved that human “freedom and dignity” are dysfunctional myths, like democracy. If the cancerous systems of finance capitalism were not bad enough, surveillance capitalism offers a much more putrid kettle of fish. If it is not already obvious to you, I should point out that capitalism (especially in its latest versions) and democracy are not nearly as compatible as most Americans seem to believe. The problems are too myriad to address here; suffice it to say it was a shotgun marriage in the best of times. And some members of the AI industry seem increasingly willing to entertain divorce. Many among elite circles are clandestinely envious of the combination of capitalism and totalitarianism in modern China. Furthermore, to paraphrase Country Joe and the Fish (of Woodstock fame), there is plenty of good money to be made by supplying the surveillance state with its tools of the trade. While perhaps not yet as blatantly oppressive as the Chinese “social credit system” (where one cannot even get on a local bus without a certain score), the combination of our own massive data-gathering and manipulation capabilities with facial recognition and algorithmically enhanced profiling systems presents a fairly Orwellian specter, to say the least.

--


Greg Daneke, Emeritus Prof.

Top Economics Writer. Gov. service, corp consulting, & faculty posts (e.g., Mich., Stanford, British Columbia). Piles of scholarly pubs & occasional diatribes.