Given the enormous expectations now being created, a backlash against AI is likely over the next three years, when the fragility of standalone systems becomes clear. You should act now with that in mind.

AI climatology

The last “AI winter” came at the end of the 1980s. R&D and business investments in the technology froze once it became clear that AI systems did not possess even the common sense of a child, could only function in very narrow, specialized tasks, and in most cases were too expensive anyway.

One day in 1985, an expert system that had been beating the best cardiologists in 95 percent of diagnoses declared a patient dead upon reading the electrocardiogram: the hardware had run out of ink and presented the optical-reading camera with a completely blank strip of ECG paper, which led the rule-based software to conclude that a prolonged cardiac arrest was underway.

After all, the magnificent Google Translate still fails today to render into other languages the classic trick sentence «I saw the Statue of Liberty flying over New York», revealing that in a robot’s mind, despite decades of attempts with various AI technologies, statues can fly like aircraft and birds.

The resurrection

A new AI spring was ushered in by the DARPA Grand Challenges programme (2004-2007), which showed autonomous ground vehicles in action on TV, and by Watson’s incredible Jeopardy! win in 2011, followed by IBM’s billion-dollar investment in a dedicated business unit. That same year, Nevada authorized driverless vehicles on some roads.

There followed increasingly substantial investments by venture capital firms and corporations; the appearance of AI features in smartphones and gadgets; the offering of Machine Learning as a Service; the founding of OpenAI to counter «humanity’s biggest existential threat» (in Elon Musk’s words); and many other interesting developments, including the phenomenal, exponential growth of the AI fanfare in which we are now immersed.

Revered AI industry executives, eagerly amplified by the trade press, ludicrously declare that «human-level AI will be passed in the mid-2020s».

The McKinsey Global Institute forecasts world GDP growth of 16 percent by 2030 as a result of AI’s impact on the economy. PwC agrees. And the Government of China appears to be acting on forecasts just as large.

Lawyers have held conferences on how to adapt the law to machine learning. The European Commission, recognizing AI as «one of the most transformative forces of our time, […] bound to alter the fabric of society», has issued lengthy guidelines on how to ensure its «ethical purpose».

What can go wrong and how

Clearly, a public fiasco today could be a blow to the AI industry, because inflated expectations usually prompt disproportionate, and often unfair, allegations when anything goes wrong.

Think recurring, massive misdiagnoses in hospitals due to AI. Think multiple deadly accidents involving autonomous cars. Think AI-generated fake news unwittingly published by major media outlets. Think a mass killing of innocents caused by an AI-driven weapon becoming public knowledge.

AI systems are fragile: their accuracy can occasionally decay suddenly as the boundaries of their specialized competence draw near and common sense becomes paramount…

…like knowing that statues don’t fly, that blank paper can mean the printer is out of ink, or that a penis drawn over a STOP sign does not change its fundamental meaning.

The brittleness of unattended AI systems

It has been shown that minor perturbations, like slightly modified eyeglasses or faint pencil lines, can cause neural networks to no longer recognize faces or road signs; and that inserting «non co-occurring categories», like an elephant into a previously recognized room interior, can lead an AI system to mistake the scene for something entirely different.

(Work is underway to mitigate the effects of such adversarial-example attacks on machine learning systems; however, we are still far from safe solutions.)
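To make the brittleness concrete, here is a minimal sketch of one of the simplest adversarial-example attacks, the fast gradient sign method (FGSM). It assumes PyTorch and a pretrained torchvision classifier; the model choice, the epsilon budget and the random input are illustrative placeholders, not a faithful reproduction of the studies cited above.

```python
# Minimal FGSM sketch: nudge an input in the direction that increases the model's
# loss, by at most epsilon per pixel, and see whether the predicted label flips.
import torch
import torch.nn.functional as F
from torchvision import models

# Any differentiable classifier works; a pretrained ResNet-18 is used here only as an example.
model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that the model may misclassify."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the sign of the gradient, bounded by epsilon, then keep pixels valid.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()

# Illustrative usage: a random tensor stands in for a real, normalized photograph.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # labels may differ despite a tiny change
```

Perturbations of this kind are typically imperceptible to a human observer, yet they can flip the predicted label entirely.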

That is: an AI system can still fail miserably, just as in 1985, abruptly and without warning.

In settings like health care, shop floors or journalism, robots (whether hardware or software) are usually flanked by competent humans, so catastrophic outcomes are extremely unlikely.

But an autonomous car or a weapon makes decisions in milliseconds, and in many cases no human can possibly override them in time.

AI fiascos will happen because unattended AI systems are prone to severe, if rare, malfunctions.

An AI system’s efficiency and its reliability/robustness are two different properties, and must be evaluated separately.
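As an illustration of what a separate evaluation might look like, here is a hedged sketch that reports clean accuracy and accuracy under small input perturbations as two distinct numbers. The model, data loader and noise level are placeholders, and random noise is only a crude stand-in for a real robustness test such as an adversarial evaluation.

```python
# Sketch: measure accuracy on clean inputs and on slightly perturbed inputs
# separately, rather than quoting a single "accuracy" figure.
import torch

def evaluate(model, loader, noise=0.0):
    """Fraction of correctly classified samples, optionally with small input noise."""
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            if noise > 0:
                # Crude robustness proxy: add bounded random noise and re-clip pixels.
                x = (x + noise * torch.randn_like(x)).clamp(0, 1)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total

# Hypothetical usage (model and test_loader are assumed to exist):
# clean_acc = evaluate(model, test_loader)              # efficiency: plain accuracy
# noisy_acc = evaluate(model, test_loader, noise=0.05)  # robustness: accuracy under perturbation
```

A large gap between the two numbers is precisely the kind of fragility that never shows up in a single headline accuracy figure.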

It will happen. So what?

By the end of 2021, a small number of resounding failures of standalone, unattended AI systems are likely to occur and provoke a huge media and political backlash against AI.

At that point, anything labelled “AI” will look dangerous or unseemly in the media.

The AI industry will by then be robust and resilient, so it may not hibernate as it did in the past. But the crisis will damage its reputation and have a negative business impact for years.

Soon you will be involved in deploying a new application based on AI or heavily dependent on it. It could be a business application, or one aimed at innovating or inventing a public service or utility.

Describe and promote your project on the grounds of its business or social validity, not the technology underneath.

Unattended AI systems should be limited to what is strictly necessary, and their risks and benefits carefully weighed.