The age we live in, which is of course the Information Age, has presented us with great boons but also many problems. As an old I.T. grunt—I wrote my first program in 1969 using the ALGOL language, now defunct—I habitually think of Information Age issues in a binary way, as pertaining either to data or to code.

In social and political commentary it is data that gets most of the attention. What is the point of balance between privacy and national security in the collection of data? When a private corporation (Google, say) gathers data about me (from my internet searches, perhaps), what may it lawfully do with that data? If a U.S. company stores data on servers abroad, can the company be compelled to repatriate it? (That is the gravamen of Microsoft Corp. v. United States, currently being litigated.)

Mathematician Cathy O’Neil’s new book offers a welcome change of viewpoint, from data to code, that is, to the computer programs—she prefers “algorithms”—that analyze the great floods of data now washing over the world. These algorithms are the “weapons of math destruction” in her book’s title.

Once they’ve analyzed the data, algorithms suggest decisions. The impact of those decisions may be trivial: which ads should appear on my Facebook page, perhaps. It may be middling: whether or not I should be given a loan. It may be life-changing: the sentence a convicted criminal should receive. It may even be historic: how many people in demographic X can be persuaded to vote for candidate A. Far too often, O’Neil tells us, these algorithms deliver unfairness, especially to poor and disadvantaged citizens.

* * *

She tackles her subject as a Social Justice Warrior, a description to which I don’t think she would object. O’Neil holds conventional progressive opinions and is active in the Occupy movement. This book’s dedication is “to all the underdogs.” She refers to illegal aliens as “undocumented migrants.” (But then, in her chapter on work-scheduling algorithms, she frowns that “[t]he trouble, from the employees’ perspective, is an oversupply of low-wage labor.”) She responded to Donald Trump’s election victory with a sort of defiant incomprehension, telling readers of her blog, mathbabe.org, that “we are all activists now.”

To a reader not of that parish, these inclinations give her book a rather peevish quality, a tone of relentless negativity. They also lead her into sins of both omission and commission: key facts left unstated, stock left-liberal fables repeated uncritically. It doesn’t help that, as is quite normal nowadays, the book has been neither edited nor even spell-checked. There is no such place as “the British city of Kent”; there is no such word as “miniscule”; etc. All of which is a pity, because there are important issues to discuss here and O’Neil is very well placed to discuss them. She has years of experience as a quantitative analyst working with algorithms in the private sector, coding them up for financial and commercial firms. In Weapons of Math Destruction she casts her net wide, offering examples from education, law enforcement, employment, insurance, and politics.

* * *

I found chapter two, “Shell Shocked: My Journey of Disillusionment,” particularly engaging. In 2007 O’Neil left academia to work at a big hedge fund, searching out and capitalizing on market inefficiencies. This gave her a grandstand seat at the Great Recession of 2008. Disillusioned, she then moved to commercial work, tracking the habits of online shoppers, designing, for example, “an algorithm that would distinguish window shoppers from buyers.” This broadened her awareness of the scope and power of algorithms in many walks of life.

I wondered what the analogue to the [2008] credit crisis might be in Big Data. Instead of a bust, I saw a growing dystopia, with inequality rising. The algorithms would make sure that those deemed losers would remain that way. A lucky minority would gain ever more control over the data economy, raking in outrageous fortunes…. I could barely keep up with all the ways I was hearing of people being manipulated, controlled, and intimidated by algorithms.

As it happens, I myself spent the years 1985-2001 working for a big Wall Street trading firm, mainly in Credit and Risk Management, with a concentration on mortgage-backed securities and their derivatives—the sparks that started the 2008 prairie fire. My recollections of that environment are at odds with some of O’Neil’s. I don’t, for example, recall the level of cynicism she claims to have seen.

[T]he figures in my models at the hedge fund stood for something. They were people’s retirement funds and mortgages…. For hedge funds, the smuggest of the players on Wall Street, this was “dumb money.”

I never heard that expression. The Wall Streeters I knew uniformly believed that they were doing socially useful work, giving companies and individuals access to finance that would not otherwise be available to them. Nor did I see that, as she writes, “[t]he refusal to acknowledge risk runs deep in finance.” To the contrary, my own directors were obsessed with risk. That’s why they had staff working on Risk Management. It’s true that the culture at hedge funds differs somewhat from that at older firms, but surely not that much.

* * *

Much more seriously, O’Neil fails to acknowledge the political origins of the 2008 crisis. “After the recession that followed the terrorist attacks in 2001…[a]nyone, it seemed, could get a mortgage.”

It would have been more accurate to write: “After President George W. Bush’s October 2002 speech on minority home ownership….” Bush was after the Hispanic vote, and he calculated that homeowners were more likely than renters to vote Republican. Nor was Bush striking out into new political territory there. Efforts to increase minority home ownership went all the way back to the Carter Administration’s Community Reinvestment Act of 1977.

How do you increase minority home ownership? If you are a government armed with regulatory powers, one way is to browbeat mortgage lenders into relaxing credit standards. The federal government had been doing this for 30 years prior to the 2008 crash.

The abandonment of traditional credit standards for political ends was not the only cause of the 2008 crash, but it was a major contributing factor—a much bigger one, I’m sure, than faulty algorithms in the back offices of trading firms. A social justice warrior’s account of these events really should include the fact that behind the crash lay 30 years of moon-booted efforts by the federal government to advance…social justice.

“In the run-up to the housing collapse, mortgage banks were not only offering unsustainable deals but actively prospecting for victims in poor and minority neighborhoods,” writes O’Neil. Well, yes, that’s what the government wanted them to do! …And then, after the crash, the government sued them for having done so. With the feds, you can’t win.

* * *

Programs for social justice, including O’Neil’s, rather frequently display this damned-if-you-do, damned-if-you-don’t aspect. Here was Jesse Jackson complaining in 2012 about under-policing in poor Chicago neighborhoods: “More police have been dispatched to neighborhoods where the murders have spiked, but citizens there still aren’t protected as well as our…uptown businesses are.”

On the other hand, O’Neil grumbles that policing algorithms like New York’s CompStat and Los Angeles’s PredPol send too many cops into poor, crime-prone neighborhoods. She writes:

How about crimes far removed from the boxes on the PredPol maps, the ones carried out by the rich? …We have every reason to believe that more such crimes [i.e., like those that led to the 2008 crash] are occurring in finance right now…. Just imagine if police enforced their zero-tolerance strategy in finance.

How soon they forget! I refer Ms. O’Neil to Daniel Fischel’s 1995 book, Payback: The Conspiracy to Destroy Michael Milken and His Financial Revolution, about the vengefully politicized arrests and prosecutions on Wall Street in the 1980s, when traders were led away from their desks in handcuffs.

The author herself notes a paradox embedded in the case she is making. The many decisions now being made by, or with the aid of, algorithms were, she writes, formerly made by human beings, their minds “occupied with human distortions—desires, prejudice, distrust of outsiders.” Don’t computerized algorithms remove those distortions?

O’Neil doesn’t really manage to square this circle. After many pages of deploring those desires and prejudices and that distrust—in hiring, for example, and police work—she tells us that our algorithms “urgently require the context, common sense, and fairness that only humans can provide.” Uh….

Much of her critique, in fact, amounts to little more than telling us that our algorithms are not very good. Probably they are not; but then, human judgment is often not very good, either, as she keeps reminding us…except when calling for more human judgment in our decision-making!

* * *

There is another paradox in O’Neil’s case that she seems not to notice. She inveighs against the opacity of too many algorithms, the impossibility of knowing how they arrive at their results. Elsewhere, however, she complains that algorithms can be too easily gamed from knowledge of their workings—by colleges seeking to improve their rank on the U.S. News & World Report listings, for example, or by job applicants faced with résumé-reading algorithms. “[I]n a digital universe touted to be fair, scientific, and democratic, the insiders find a way to gain a crucial edge.” Wouldn’t more opacity take care of that?

A book of this sort must end with a prescription. What, in the author’s opinion, is to be done about the unfairness and inequality generated by our algorithms? The core problem, she tells us in her concluding chapter, is that algorithms have the wrong values built into them—old values.

Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. We have to explicitly embed better values into our algorithms, creating Big Data models that follow our ethical lead. Sometimes that will mean putting fairness ahead of profit.

Profit—ugh!

The government, of course, has a powerful regulatory role to play.

Uh-oh. So the prescription is that algorithms should be built around “better values”—that is to say, the values of social justice warriors like Cathy O’Neil—and that government regulators should enforce this. Then unfairness and inequality will be eliminated from the outcomes, just as they were when government forced mortgage lenders to scrap those fusty, discriminatory old credit standards. What could possibly go wrong?

* * *

O’Neil’s naïve progressivism, and her blithe ignorance of what the road to Hell is paved with, mar what might have been a useful book. The Information Age is built of code as well as of data, and the code needs more attention.

Certainly the underdogs of her dedication deserve a hearing. Who, exactly, are the underdogs, though? Criminals, or the noncriminal residents of poor neighborhoods on whom the criminals prey? Welfare single mothers, or the gratification-deferring middle class whose taxes support them? “Undocumented migrants” or the low-skill Americans whose wages they depress? (Mathematicians are not much affected.) Poor-credit minorities seeking to buy a house, or the small investors whose savings were wiped out when rational credit standards were declared unlawful on minorities’ behalf?

Perhaps last November’s election results offer some clues as to what, in the opinions of several tens of millions of American voters, are the answers to questions like these.