Developers who turn to AI suffer from “imposter syndrome”

AI is “cool and interesting” for engineers and developers, but those who try to learn how to build AI can become frustrated and suffer from imposter syndrome, according to new research from Google.

Developers and non-technical people want to learn technical AI skills because machine learning is intellectually fascinating and is “the future.” But machine learning also involves a lot of complex mathematics, and people stall out in their learning journey because common AI tools do not support a conceptual understanding of how key AI principles work.

The research points to a mismatch in expectations versus reality – when it comes to learning how to build AI, people are hindered by a lack of bridging between mathematical theory and code. There are big gaps in the kind of resources required to translate esoteric math jargon into practical concepts. There’s also an expectation gap. With reductions in both installation and programming overhead, it’s easier to get started on simple examples but difficult to go beyond initial steps.

Developers who use traditional tools are used to having access to real-time hints and easily configurable toolsets. In machine learning these interfaces and tools do not exist, often leaving developers confused and unable to troubleshoot. Tasks that may be relatively straightforward – at least conceptually – for an experienced data scientist can stonewall an engineer. There are few tools to scaffold someone who gets stuck diagnosing why a model won’t converge, choosing an algorithm, or finding a good dataset.
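The “why won’t my model converge” problem the researchers mention can hinge on a single setting. As a hedged illustration (not an example from the research itself), here is a minimal pure-Python gradient-descent sketch showing how a learning rate that is too large makes even the simplest optimization diverge, while a modest one converges – exactly the kind of failure a newcomer has no tooling to diagnose:

```python
def gradient_descent(lr, steps=50, x0=10.0):
    """Minimize f(x) = x**2 with plain gradient descent.

    The gradient of f is 2*x, so each update is x -= lr * 2 * x.
    Returns the final value of x (the minimum is at x = 0).
    """
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

# A modest learning rate steadily approaches the minimum at x = 0 ...
print(abs(gradient_descent(lr=0.1)))
# ... while an overly large one overshoots on every step and diverges.
print(abs(gradient_descent(lr=1.1)))
```

Without a tool that surfaces this, both runs look like the same code; only the final numbers reveal that one run converged and the other blew up.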

The rise of pre-made machine learning models means that people expect more “out-of-the-box” models. While it might be relatively easy to fire up a pre-made model, there are few best-practice guidelines that tell people how to adapt them to example problems, let alone to someone’s real-world one. What people need are tools that fill in the gaps, such as:

  • analyzing data sets and suggesting plausible models,
  • diagnostic checks for debugging,
  • suggestions of classes of models for specific problems,
  • automatic diagnostic checks on model performance,
  • real-time hints for faster convergence.
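To make the wish-list concrete, an “automatic diagnostic check on model performance” could start as simply as comparing training and validation scores and flagging a suspicious gap. A minimal sketch – the function name and thresholds are illustrative assumptions, not recommendations from the research:

```python
def diagnose_performance(train_score, val_score,
                         gap_threshold=0.10, floor=0.60):
    """Return human-readable warnings about model performance.

    train_score / val_score are accuracies in [0, 1]; the default
    thresholds are illustrative, not empirically derived.
    """
    warnings = []
    # A large train/validation gap suggests the model memorized the
    # training data rather than learning a generalizable pattern.
    if train_score - val_score > gap_threshold:
        warnings.append("possible overfitting: large train/validation gap")
    # A low training score suggests the model is too weak for the task.
    if train_score < floor:
        warnings.append("possible underfitting: low training score")
    return warnings

print(diagnose_performance(0.98, 0.71))  # flags the train/validation gap
print(diagnose_performance(0.55, 0.54))  # flags the low training score
```

Even a crude check like this gives a stuck developer a next step, which is precisely the scaffolding the research says is missing.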

What’s required, say the researchers, are resources that create an intermediate layer of scaffolding that can synthesize theory into practical concepts. While the researchers don’t explicitly say it, if someone has a mental model of “plug and play,” they are sorely disappointed by what they find in the world of ML.

The research revealed an interesting paradox – while people think that AI will reduce the need for human work, the actual mechanics of building AI are very labor-intensive. Developers report being surprised by how many decisions need to be made beyond what they are used to in regular programming: which model architecture to use, how to pre-process raw data, how to tune hyperparameters.
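A hedged sketch of why this feels so labor-intensive: enumerating just a handful of decision points already yields hundreds of candidate configurations. The specific options below are illustrative assumptions for a small tabular problem, not choices taken from the research:

```python
from itertools import product

# Hypothetical decision points a developer faces before any code "works".
decisions = {
    "model": ["logistic regression", "random forest", "gradient boosting"],
    "scaling": ["none", "standardize", "min-max"],
    "missing_values": ["drop rows", "mean impute", "median impute"],
    "learning_rate": [0.001, 0.01, 0.1],
    "regularization": [0.0, 0.1, 1.0],
}

# The Cartesian product of the options is the full configuration space.
configs = list(product(*decisions.values()))
print(len(configs))  # 3 * 3 * 3 * 3 * 3 = 243
```

Five choices with three options each is 243 combinations – and real projects have far more choices, each interacting with the others, which is why trial and error looms so large.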

Strikingly, the research points to the value of “folk wisdom,” trial and error, intuition, and an experimental mindset. As more traditional developers are attracted to AI for their career and personal development, they (and those they work for) will need to value an experimental approach and find ways to adapt their software development processes accordingly. If this isn’t done, AI projects will either be abandoned too easily or, alternatively, put into production too soon and without appropriate performance testing.

There’s also a double-edged sword in making AI easy for all – while pre-made models and data pre-processing utilities can help everyone get going, convenience may hinder people’s learning of the underlying concepts, the understanding of which is vital for the development of human-centered AI.

Photo by Sebastian Herrmann on Unsplash
