AI-readiness is an imperative at The New York Times

As AI diffuses through the global economy, companies are looking to make their workforces “AI ready.” Digital transformation, big data, analytics and cloud all enable new services and faster innovation. For those who deal in data all day, every day—the “data-natives”—this transformation is a way of life. But for others—the “data-curious”—it can be daunting to keep up. Still others—the “data-deniers”—find it a challenge to see where they fit in.

Our answer is that everyone has a role in an AI-first company. The real challenge is to get everyone to a common baseline—where everyone in the company understands the power, reach, promise and perils of modern AI and is able to contribute to innovation and operational practices with an eye to the AI workflow.

When The New York Times approached us to host a session on AI readiness for the ad team, we knew we were going into an organization that already had sophisticated AI at work. The ad innovation team at The Times had spent two years developing audience models that could be offered to advertisers as contextual targeting tools. This included using panel-based data to construct an algorithm that scores every New York Times article against 18 different emotions, such as "curious" or "optimistic." The technology also predicts how likely an article is to motivate a reader to take a particular action, such as making a charitable donation, embarking on a dietary change or spending a significant amount of money.
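The Times hasn't published how these models work under the hood, but the general shape is easy to sketch. The toy example below is purely illustrative: the two-emotion list, the articles, the labels and the scikit-learn pipeline are all our assumptions, standing in for a panel-labeled corpus and whatever modeling stack The Times actually uses. It trains one binary scorer per emotion, so a single article gets an independent score for each.

```python
# Hypothetical sketch: panel-labeled articles in, per-emotion scores out.
# Nothing here reflects The Times's actual data or models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

EMOTIONS = ["curious", "optimistic"]  # the real model covers 18 emotions

# Stand-in panel data: article text plus the emotions panelists reported.
articles = [
    "A breakthrough battery design could cut storage costs in half.",
    "Volunteers rebuild a storm-ravaged town, one house at a time.",
]
labels = [[1, 0], [0, 1]]  # one binary column per emotion

# One-vs-rest logistic regression gives an independent score per emotion.
scorer = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression()),
)
scorer.fit(articles, labels)

# Score a new article against every emotion at once.
probs = scorer.predict_proba(
    ["Scientists map a promising new route to fusion power."]
)
for emotion, p in zip(EMOTIONS, probs[0]):
    print(f"{emotion}: {p:.2f}")
```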

But building an AI-ready workforce involves much more than having strong data science teams, abundant data and an AI-ready technology platform. True AI-readiness means having employees at all levels and in all types of roles who understand how machines learn and can spot opportunities to craft new workflows, products and services that use the best of humans and machines, including being able to intervene when things go wrong.

We started by asking a simple question: what is AI? We love this question because it instantly reveals people's perceptions. The Terminator. Autonomous, weaponized drones. Amazon's recommendation algorithm. Robot pets. Chatbots. Google's search algorithm. Apple Maps. AI is used for both good and ill; it's ubiquitous, incredibly useful and not always right. It's an everyday thing.

Next we asked the team: what worries you about AI? People were well informed about AI risks. Clearly they follow the headlines! Amazon's hiring algorithm abandoned due to bias against women. The COMPAS recidivism algorithm under fire for bias against Black defendants. Facebook's discriminatory housing ads. The list goes on.

But then the big question: what can be done?

In most organizations, fixing machine bias is left to the technologists. That is, if it's done at all. Our approach is different. The best fix for AI bias is holistic: a diverse team operating a robust process that includes both technical and non-technical fixes, tackling design and operational issues such as key aspects of UX design (say, adding prompts that help users understand correlations between so-called "neutral variables" and protected classes), important tradeoffs (such as the tradeoff between fairness and accuracy when different user groups have different base rates) and appropriate remedies and controls for when things go wrong (who is the "human-in-the-loop"?).
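That fairness-accuracy tradeoff can be made concrete with a few lines of code. The sketch below uses entirely synthetic data (not The Times's) and a deliberately simple "group-blind" scoring model. Because the two groups have different base rates, a model whose error rates are identical across groups still ends up with different precision across groups.

```python
# Synthetic illustration of the fairness tradeoff when two groups
# have different base rates. No real data involved.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                # two user groups
base_rate = np.where(group == 0, 0.2, 0.5)   # different base rates
y_true = rng.random(n) < base_rate           # the true outcome

# One score distribution for everyone: the model itself is group-blind.
score = np.where(y_true, 0.4 + 0.6 * rng.random(n), 0.7 * rng.random(n))
y_pred = score > 0.55

for g in (0, 1):
    mask = group == g
    fpr = y_pred[mask & ~y_true].mean()  # roughly equal across groups
    ppv = y_true[mask & y_pred].mean()   # diverges across groups
    print(f"group {g}: false positive rate={fpr:.2f}, precision={ppv:.2f}")
```

Run it and the group with the higher base rate gets markedly higher precision even though the model treats both groups identically. Equalizing one fairness metric unavoidably unequalizes another, and choosing which one matters is a policy decision, which is exactly why bias fixes can't be left to the technologists alone.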

At The Times, we found a thirst for understanding these issues—on behalf of readers, advertisers and staff alike. But we also saw something deeper—a level of individual responsibility to take on the challenge of understanding machine bias. Machine bias, like human bias, can distort truth and interfere with our progress toward a more just society. Those who communicate with society now need a working knowledge of AI bias, as well as the confidence and authority to tackle it.

Nothing could be more true to The New York Times brand.
