Is AI ready to compete with the human workforce?
Part 3: The Social Hurdle
The social impact of Artificial Intelligence hinges on its potential for both competition and collaboration with the human world. If left to develop unchecked, its impact could be far-reaching and deeply destabilising. Perhaps we should look before we leap, and consider the consequences of letting these new, more powerful and more autonomous machines loose on our world.
AI (and related technologies such as Machine Learning, Robotics and Autonomous Vehicles) is reaching a level where it can replace more and more human jobs. As its intelligence and the range of domains it can operate in increase, it will outcompete human workers on cost and reliability. Since businesses are always looking for ways to reduce costs and improve efficiency, as soon as an AI application is deemed capable of reliably replacing human workers, it will. Optimists will say that this is simply a displacement: new jobs and skills will offset the loss of the old ones, and the net result will be neutral or even positive. Pessimists will say that this time the net result is much more likely to be negative. AI will rapidly take over the last bastions of human skills – our cognitive and creative abilities – leaving fewer and fewer jobs only humans can do.
And it is not just that AI technology will reduce the number of jobs. It will replace many of the tasks people perform and drive changes in the composition of most jobs. We are all task workers of one kind or another, and scrambling the mix of tasks will have a far broader set of impacts on society. Will this contribute to faster generational change in business and shorter business longevity – as already seems apparent?
Whatever the extent of it, there will be a significant displacement and reconfiguration of jobs and skills. We only have to look at past waves of industrial revolution to see how much of a problem that can be. Even when the net result was more jobs for more people, the transition to a new balance of people and skills was never easy.
The introduction of new technologies undermined whole segments of the existing economy. This caused mass unemployment and forced migrations of people desperately seeking work. In the midst of growing wealth and prosperity, unequal access to the benefits of new technologies caused extreme inequality in the distribution of that wealth. Only major wars could level that inequality, it seems (see Piketty¹), at the cost of millions of human lives and the untold suffering of millions more.
Inequality of wealth is again becoming a major issue, driven by the unequal access to disruptive technologies such as AI, ML and Big Data. Those that got in early are so far ahead that they can effectively reap most of the benefits while blocking others from competing on equal grounds. Even if AI is a long way off from competing with humans on all levels of skills and labour, its potential for social disruption and displacement is real and dangerous enough.
But competition with humans is only part of the story. The more capable, universal and autonomous AI becomes, the more its applications will become agents in our social interactions. Until recently, we used machines and applications as tools and channels for human interactions. But we did not see our tools as participants in those interactions.
We used our phones to text and talk to people. Now our phones are learning to talk back to us. Are we ready to have meaningful conversations with them? We would drive our cars through busy streets complaining about all the other drivers. Soon our car will be driving itself and all the other cars will be equally driverless. Will there be any point in moaning about their driving skills?
Talkative phones and self-driving cars are only the beginning. The fabric of our society is based on the idea of human agency and human autonomy. We have built many structures, institutions and mechanisms to help us manage the enormous complexity of society. But behind all those constructs we know there are human minds at work. People make the decisions, keep oversight, and control and steer things. We may be dealing with a bank for our financial needs, but we assume there are people in that bank handling our money and our interactions. We may at times feel that the machinery of government acts like an insensitive juggernaut without any human feeling. But we still assume there are people behind that inhumane facade. We still assume that our government is designed, managed and operated by humans like ourselves.
What will happen to the fabric of our society when we introduce a completely new class of agents into the mix? How will we react to machines and applications making decisions for us? How will we deal with governments and justice systems when we can no longer be sure there are people like us involved in how such institutions affect our lives?
Our society has always adapted to every new wave of technology coming in. Many years ago, people had right of way on thoroughfares. It was the auto industry and lobby that changed the natural order and allowed cars to have right of way on roads. What conceivable concessions to the new AI-powered machines will we see in the next decade or two? What rights will we be asked to concede to the machines so they can function more efficiently?
The issue is not just that AI may soon be smarter than us. We have always had differences in intelligence and education within our societies and found ways of dealing with that. The real issue is the fundamental otherness of AI compared to everything else we have been used to. AI does not ‘think’ like natural organisms do. AI may have conversations with us, but the way it ‘understands’ and ‘relates’ to us has very little in common with the way we listen and talk to each other. AI may appear to use information and evidence to come to ‘conclusions’ and ‘decisions’, but the underlying processes are very different from how our brains do this. And AI, unlike us humans, is not localised and confined to a single body and mind. It may look to us as if we are communicating with a single phone or being driven by a single car. But the AI that powers those single agents is distributed across vast networks, accessing information far beyond anything we – as individuals – have access to.
Never before have we faced the challenge of building a society with such alien intelligence included. We have difficulty enough dealing with people from different cultures and ethnic backgrounds. Can we really assume we are ready to deal with the much deeper otherness of AI?
By bringing up these potentially negative consequences of AI, I am not arguing that we should abandon AI altogether. There are at least as many potential benefits as there are dangers. What I am arguing for is more attention to, and understanding of, the dangers that come with any powerful and disruptive technology. The human race has a history of seeing the benefits first; by the time the consequences become visible, much damage has been done. In many ways technology has improved the wealth and well-being of the human population. But that progress has not been without its costs. Before we make another leap into the unknown, we may want to make certain we know what the costs of yet another technological jump forward actually are.
We can let AI disrupt and unsettle society, amplify existing inequality, further concentrate power and control, and further alienate and isolate people from each other. Or we can take responsibility for our future and find ways to do the opposite. Let’s look before we leap.
1. Piketty, T. (2017). Capital in the Twenty-First Century. Harvard University Press.