Reading time: 5:00

Good News!

AI isn’t as smart as many people think. Recognizing that lets us understand some of the economic dilemmas we see. It should also allow us to get closer to ‘truly autonomous vehicles’ (and other automated processes) more quickly than we otherwise would.

Actions

  • Treat AI as a very powerful but very limited set of new technologies. Don’t treat it as a special power that gives machines human-level intelligence.
  • Don’t wait for machines to be perfect. Inject people into the equation to make machines more capable.
  • Inject machines into the equation to make people more capable.

1. A breakthrough on the economic impacts and non-impacts of automation and AI

Google Scholar informed me this morning of an inspiring new paper, Digital Abundance and Scarce Genius: Implications for Wages, Interest Rates, and Growth, by Benzell and Brynjolfsson. My interpretation of their paper is pretty straightforward: Artificial Intelligence and other automation techniques are just not good enough to deliver the special twist that gifted people (or people in gifted environments) provide.

In their own words, here is the abstract:

Digital labor and capital can be reproduced much more cheaply than their traditional forms. But if labor and capital are becoming more abundant, what is constraining growth? We posit a third factor, ‘genius’, that cannot be duplicated by digital technologies. Our approach resolves several macroeconomic puzzles involving automation and secular stagnation. We show that when capital and labor are sufficiently complementary to genius, augmentation of either can lower their price and income shares in the short and long run. We consider microfoundations for genius as well as consequences for government policy.
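To make that mechanism concrete, here is a minimal numeric sketch of my own (not the paper’s actual model): a CES production function in which a labor-and-capital composite is a strong complement to a fixed stock of ‘genius’. As the composite becomes more abundant, output levels off and both the price and the income share of the composite fall; the scarce factor captures the gains. The parameter values are arbitrary and only for illustration.

    # Minimal sketch (not the paper's model): CES production with a fixed
    # "genius" factor G and a labor-and-capital composite X that is a strong
    # complement to G (substitution parameter rho < 0).
    #
    #   Y     = (a*G**rho + (1 - a)*X**rho) ** (1/rho)
    #   p_X   = dY/dX = (1 - a) * X**(rho - 1) * Y**(1 - rho)
    #   share = p_X * X / Y = (1 - a) * (X / Y)**rho

    a, rho, G = 0.5, -2.0, 1.0  # arbitrary illustrative parameters

    for X in (1.0, 10.0, 100.0):  # the composite becomes ever more abundant
        Y = (a * G**rho + (1 - a) * X**rho) ** (1 / rho)
        p_X = (1 - a) * X**(rho - 1) * Y**(1 - rho)
        share = p_X * X / Y
        print(f"X={X:7.1f}  Y={Y:.3f}  price of X={p_X:.6f}  share of X={share:.4f}")

    # Output shows Y leveling off near the limit set by G while both the price
    # and the income share of the abundant composite collapse; the scarce
    # factor captures the gains.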

For the better part of a decade, I’ve been trying to stop people from labeling these technologies “artificial intelligence” because they’re not AI (that’s the stuff of fiction). Today’s AI is a collection of amazing innovations that lose some of their luster as we realize how truly limited they are. They’re profoundly great compared with where we were before their invention (by people), but they’re also profoundly limited versus the image of ‘machines that have been given intelligence by their human creators’.

2. Human-assisted fleets of autonomous vehicles

A new report out of the University of Michigan gives me hope that we will get autonomous people-transport vehicles, what’s referred to as SAE Level 5 autonomy, sometime in the foreseeable future.

How? We may have human emergency-controllers operating in the background to assist these vehicles when their algorithms ask for help.

  • There will be no drivers in the vehicles.
  • The vehicles will not be remotely driven (the way Predator drones are controlled today).
  • There will be people who can service remote requests from these machines, issuing instructions on what to do.

The use of people to assist machines might disappoint those holding out for AI fantasies, but it should let us move toward driverless vehicles more quickly and at lower risk.

Over time, the frequency of help requests from these vehicles will drop as their programming gets more sophisticated.
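Here is a minimal sketch of how that hand-off might work. The names, thresholds, and message format are my own inventions for illustration, not anything from the Michigan report: the vehicle keeps driving while its planner is confident, and when confidence drops below a threshold it asks a remote operator for a high-level instruction rather than handing over the controls.

    # Hypothetical sketch of a remote-assistance hand-off; names and thresholds
    # are invented for illustration, not taken from the Michigan report.
    from dataclasses import dataclass

    @dataclass
    class Scene:
        description: str   # e.g. "double-parked truck blocking lane"
        confidence: float  # planner's confidence in its own plan, 0..1

    CONFIDENCE_THRESHOLD = 0.8

    def remote_operator(scene: Scene) -> str:
        """Stand-in for a human operator: returns a high-level instruction,
        not low-level steering or braking commands."""
        return f"Proceed slowly around obstacle: {scene.description}"

    def plan_next_maneuver(scene: Scene) -> str:
        if scene.confidence >= CONFIDENCE_THRESHOLD:
            return "Follow on-board plan"  # fully autonomous step
        # Below threshold: the vehicle stays in control of actuation but
        # asks a human for guidance on what to do next.
        return remote_operator(scene)

    print(plan_next_maneuver(Scene("clear highway lane", 0.97)))
    print(plan_next_maneuver(Scene("double-parked truck blocking lane", 0.42)))

Presumably the appeal of this split is that the human supplies a decision rather than real-time control, so latency and bandwidth matter far less than they would for remote driving.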

Do we ever get “truly autonomous” vehicles with no human involvement in the driving process? Not in our lifetimes. How close do we get? Zeno’s Dichotomy Paradox may apply: we will eventually get close enough that no one will see a difference and we’ll accept the claim of total machine autonomy. Again, not in our lifetimes.

Disappointing News — the OpenAI GPT-2 story

OpenAI’s advances in Natural Language Processing (including text generation in answer to questions or other user-provided context) are a strong plus; their GPT-2 research resulted in a state-of-the-art natural language generator. The brouhaha that’s emerged over their decision not to release their full model and dataset is disappointing.

Details of OpenAI research

Training

GPT-2 was trained on 8 million web pages (about 40 gigabytes of internet text), roughly ten times as much data as its OpenAI predecessor. They described the dataset acquisition process as follows:

We created a new dataset which emphasizes diversity of content, by scraping content from the Internet. In order to preserve document quality, we used only pages which have been curated/filtered by humans — specifically, we used outbound links from Reddit which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting (whether educational or funny), leading to higher data quality than other similar datasets, such as CommonCrawl.
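As a rough illustration of that heuristic (my own sketch with invented field names, not OpenAI’s actual pipeline), the filtering step amounts to keeping only outbound links whose Reddit submissions earned at least 3 karma and then scraping the pages behind them:

    # Rough sketch of the karma-based filtering heuristic; the record format
    # and field names are invented, not OpenAI's actual code.
    MIN_KARMA = 3

    submissions = [
        {"url": "https://example.com/good-explainer", "karma": 57},
        {"url": "https://example.com/low-effort-spam", "karma": 0},
        {"url": "https://example.com/funny-but-useful", "karma": 3},
    ]

    def keep(submission) -> bool:
        # "At least 3 karma" serves as a proxy for human curation:
        # other users found the link interesting enough to upvote.
        return submission["karma"] >= MIN_KARMA

    curated_urls = [s["url"] for s in submissions if keep(s)]
    print(curated_urls)  # the pages behind these links would then be scraped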

Results

They report on successes and failures with their new GPT-2 model:

Our model is capable of generating samples from a variety of prompts that feel close to human quality and show coherence over a page or more of text. Nevertheless, we have observed various failure modes, such as repetitive text, world modeling failures (e.g. the model sometimes writes about fires happening under water), and unnatural topic switching.
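For readers who want to see this behavior first-hand, the smaller released model can be sampled with a few lines of code. The sketch below relies on the third-party Hugging Face transformers wrapper around the small public checkpoint (an assumption of convenience on my part; OpenAI’s own release ships its own sampling code), with illustrative sampling settings:

    # Sampling from the publicly released small GPT-2 checkpoint via the
    # third-party Hugging Face "transformers" library
    # (pip install transformers torch).
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "In a shocking finding, scientists discovered"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Top-k sampling; these parameter values are illustrative choices,
    # not OpenAI's published settings.
    output = model.generate(
        **inputs,
        max_length=100,
        do_sample=True,
        top_k=40,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))

A few runs with different prompts are usually enough to see both the fluency and the failure modes described above.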

Positive and Negative Uses

They anticipate significant improvements, based on their research, in AI writing assistants and natural language dialog. They also express reservations about potential malicious uses such as “DeepFake News” and impersonation of others online.

OpenAI’s reservations about possible consequences

These samples have substantial policy implications: large language models are becoming increasingly easy to steer towards scalable, customized, coherent text generation, which in turn could be used in a number of beneficial as well as malicious ways.

Decision

Given the possibility that such models can be used to generate deceptive, biased, or abusive language at scale, they decided:

We are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights …

This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas. …

We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems….

We also think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems. …

We will further publicly discuss this strategy in six months.

We’re not sure where the primary source of disappointment resides.

  • Some see either the capability or the risks as overblown.
  • Others expect the brouhaha to spur more researchers (and others) to reproduce what OpenAI has done. This may raise the likelihood that the technology will wind up “in the wrong hands,” generating more “DeepFake News.”
  • People generate DeepFake News already. We’d like to see more work on better tools to detect it, including research on how to use GPT-2 and similar models for detection (a difficult problem for technology and humans alike); a sketch follows this list.
  • There’s a certain irony in OpenAI deciding not to be open with what it developed.
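One hedged sketch of what such detection research might look like (my own illustration, not an OpenAI tool): score a passage’s average per-token likelihood under a language model. Machine-generated text often looks unusually predictable to a similar model, so an anomalous score can flag a passage for human review, though on its own this is nowhere near a reliable fake-news detector.

    # Hypothetical sketch of a likelihood-based detection heuristic; any
    # threshold would need calibration on known human and machine samples.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def avg_log_likelihood(text: str) -> float:
        """Average per-token log-likelihood of `text` under small GPT-2.
        Higher values mean the text looks more 'predictable' to the model."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)  # out.loss is mean cross-entropy
        return -out.loss.item()

    print(avg_log_likelihood("The quick brown fox jumps over the lazy dog."))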

The biggest risk we see is a constriction of the free flow of information about advanced algorithms such as this. It’s a slippery slope.

Disclosure: This post is my own work. It represents my opinions. I have no vested interest in any entities identified or implied above.