Six books Silicon Valley leaders should read before it’s too late

It feels like the start of a tech backlash. Or at least, a mid-course correction. A backlash against design that manipulates our predictable cognitive weaknesses, disrupts our attention spans and creates new forms of psychological suffering. A backlash against the constant gaming of our mental models of technology. A backlash against predictive algorithms that infer our behaviors, nudging us towards a goal set by the technology provider. A backlash against algorithms and data that perpetuate the worst of human bias and discriminate, sometimes without people even realizing. And a backlash against our eroding privacy, dismissed as something Americans no longer care about. This couldn't be further from the truth; we have simply yet to realize how valuable our privacy is, and how much of it we have already given up.

There are now tangible signs that a power shift is underway—proposals for federal privacy regulation, antitrust investigations, absurd “tech” company valuations justified by bankers and charismatic founders being called to account, facial recognition surveillance bans, and simply individuals questioning whether a technology solution is always the right answer.

Many experts, scholars and commentators have been talking about these issues for years. With a sound understanding of principles in humanities, law, economics, design and psychology, a lot of this was predictable, or, at least, unsurprising.

With sentiment shifting, tech leaders need to prepare for broader scrutiny, different expectations and a more nuanced approach to building trust in their company’s services. Here are six books that we think stand out as authoritative and thought provoking in their scholarship yet approachable and highly readable. They all offer tangible solutions and strategies for navigating the emerging intersection between humans and machines in a modern digital economy.

Read these six books to understand why the backlash is happening and what you should do about it.

Biased: Uncovering the Hidden Prejudice That Shapes What We See, Think, and Do

Jennifer L. Eberhardt, PhD, Professor of Psychology at Stanford University. Published 2019.

Why it matters: Unconscious bias can be at work without our even realizing it. At the age of three months, babies already react more strongly to faces of their own race than to faces of people unlike them. As controversy grows over facial recognition technologies and bias in AI facial data sources, understanding how to tackle human bias will be vital to understanding how to tackle AI bias.

The Technology Trap: Capital, Labor, and Power in the Age of Automation

Carl Benedikt Frey, Oxford Martin Citi Fellow, University of Oxford. Published 2019.

Why it matters: Frey and Osborne’s 2013 paper on the future of employment was instrumental in starting the discussion about automation and work. Six years on, the sound bite taken from this work still shapes people’s opinions of a robot apocalypse—47% of US jobs are at risk. What if it isn’t the promise of future prosperity that matters—it’s how the short term is managed? If technology is no longer seen as a positive force, people will drive a very different regulatory and social agenda than tech leaders envisage.

The Efficiency Paradox: What Big Data Can’t Do

Edward Tenner, Smithsonian’s Lemelson Center for the Study of Invention and Innovation. Published 2018.

Why it matters: Big data and AI are making us more efficient at an ever-increasing rate. This is all good if we are heading in the right direction. But what if we're not? Relying solely on AI means we miss the obvious benefits that evolution has given us: intuition and the ability to learn from the random and the unexpected.

Privacy’s Blueprint: The Battle to Control the Design of New Technologies

Woodrow Hartzog, Professor of Law and Computer Science at Northeastern University School of Law and College of Computer and Information Science. Published 2018.

Why it matters: When people use modern technology, what they are really doing is responding to the signals and options its design presents. The technology industry encourages designers to abuse their expertise and inside knowledge by presenting the industry's preferred outcomes as ones demanded by the technology. For years, tech leaders have tried to convince people that technology is neutral, but that claim collapses when AI learns from data collected through these designs and then makes autonomous decisions about us. Design is powerful and political by default.

Sensemaking: The Power of Humanities in the Age of the Algorithm

Christian Madsbjerg, founder of ReD Associates. Published 2017.

Why it matters: Humans are subservient to algorithms and big data, but these data do not accurately reflect our human experiences. Relying solely on AI and big data leaves companies at risk of losing touch with the humanity of their customers. Deep engagement—rather than deep learning—that draws from culture, language and history needs to run in tandem with data-driven decisions or companies risk masking huge deficiencies with AI.

Virtual Competition: The Promise and Perils of the Algorithm-driven Economy

Ariel Ezrachi, Slaughter and May Professor of Competition Law at the University of Oxford, and Maurice E. Stucke, Professor of Law at the University of Tennessee. Published 2016.

Why it matters: AI challenges competition now that algorithms continuously manipulate markets. There are gaps in current consumer law, and society may need new definitions of harm in the digital age. And it's not your imagination: algorithm-driven markets shift power into the hands of a few, concentrating wealth and creating inequality.


 Note: As an Amazon Associate, we earn from qualifying purchases
