There’s a huge difference between the modest aims of the artificial intelligence (AI) and machine learning in use today, and the grand idea of creating an artificial general intelligence that could match, and then rapidly exceed, the capabilities of a human mind.
As they develop, AI and machine learning will take on ever more complicated tasks, but it could still be half a century or more before AI capable of human-level intelligence is built, and even longer before the sort of super-intelligence emerges that excites some, terrifies others and has provided plot lines for science fiction for decades. One may (eventually) lead to the other, but conflating today’s AI and machine learning with tomorrow’s Skynet is not helpful.
Indeed, that confusion has encouraged many to exaggerate the short-term potential of existing (and often somewhat mundane) AI and machine learning technologies. But it also means we may be under-estimating some of the real risks and potential downsides.
Today’s AI is helping companies to improve customer services or fine-tune their decision-making by spotting trends in data that would otherwise be invisible, to automate mundane tasks, or even to create whole new services. You can read about the role of AI and machine learning in digital transformation projects in our latest in-depth special report. But there are pitfalls and risks too.
Here are a few potential issues with AI to consider.
AI is a fast-growing and intriguing field, but it’s not the answer to every problem. In particular, beware ‘AI washing’: applying those two letters to the branding of a product or service does not necessarily make it better than one that does not mention those talismanic initials (see also: ‘cloud washing’).
In addition, a lack of skilled staff to make the most of the technologies, along with massively inflated expectations, could create a loss of confidence: this happened before, in the so-called ‘AI Winter’, and could occur again if these AI investments don’t pay off.
But perhaps more dangerous is the temptation to treat AI as a magical, mystical source of truth. As the introduction to our special report makes clear, the output of an algorithm is only ever as good as the data put in, or the rules that humans set. The black-box nature of algorithms that can learn and evolve in ways their human developers find hard to follow does not mean their answers should be accepted without question. Rather, ways must be found to make AI-led decision-making as easy to understand, and to challenge, as any other kind. Some researchers have set out ideas for how this could be done, using factors such as responsibility, explainability, accuracy, auditability and fairness, but more work is needed here.
It’s also important to consider the impact of AI in its broadest sense: such technologies have the capacity to significantly alter some jobs, create new ones and destroy others. The developers of this technology and its users need to consider and acknowledge the potential consequences, for good and for ill. AI-powered autonomous vehicles may cut pollution and make travel more efficient and fun, but may also put many drivers out of work. There needs to be broader understanding of, and more debate about, these changes now.
Few of these issues, of course, have much to do with the underlying technology of AI or machine learning; they are mostly to do with how we humans deal with change.
Artificial intelligence and machine learning are not what we need to worry about: rather, it’s failings in human intelligence, and our own ability to learn.