
There’s an Easter Rabbit hiding in plain sight in MIT Technology Review’s AI flow chart

Anthropomorphizing

When you talk about a thing or animal as if it were human, you’re anthropomorphizing. The Easter Bunny is an anthropomorphized rabbit.

Net:

Avoid, shun, and be hypercritical of anthropomorphic thinking in general (and about AI in particular), unless you are a:

  • Philosopher
  • Creative
  • Entertainer
  • Researcher (in areas such as biology, psychology or computer science)

Let’s get real

Real rabbits are not very much like the Easter Bunny.

I live part of the year on Cape Cod in Massachusetts. In my town, there are wild coyotes. Would it make sense for my town to come up with a plan for dealing with wild coyotes by studying cartoons of Wile E. Coyote?

MIT Technology Review (MTR) recently created a “back of the envelope” flow chart to help readers determine whether something they’re seeing is Artificial Intelligence. The only thing wrong with the flow chart is … almost everything! The flow chart is chock-full of anthropomorphic thinking.

[I am a faithful reader of MTR, in both digital and paper form. I subscribe to it and enjoy reading it, but that doesn’t make it perfect.]

MTR says

AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can.

Would that this were true! It’s not.

Reality

AI doesn’t do things for itself. People do. Let’s look at the roles people play in AI (a short code sketch after the list maps these roles onto a typical workflow). People:

  • Know a lot about the specific project
  • Provide the right algorithms (instructions that tell the technology what steps to take)
  • Build models (containing algorithms, places for data and adjustable parameters that dictate the behavior of the model)
  • Gather the right data sets and ensure they fit the needs of the model and the project (and don’t contain unintended biases)
  • Tag the data elements in the data sets (to identify what the algorithms should pay attention to)
  • Force-feed the data into the models to train them (or write algorithms telling the models where and how to access the training data)
  • Test the trained models and repeat this process over and over until it “works right”
  • Sometimes use more automated processes that themselves consist of algorithms and data

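To ground those roles, here is a minimal sketch of a typical supervised-learning workflow (a hypothetical example using scikit-learn; the data set and model choice are my own assumptions, not anything from the MTR piece). Every step is a human decision, written down as code:

```python
# A minimal supervised-learning sketch: every step is a human choice.
# Assumes scikit-learn and numpy are installed; the data is made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# People gather the data and tag each element with a label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # the "data set"
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # the human-supplied tags

# People decide how to split the data for training vs. testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# People choose the algorithm and its adjustable parameters.
model = LogisticRegression()

# People feed the data in ("training" = calculations that set parameters).
model.fit(X_train, y_train)

# People test the result and repeat until it "works right".
print("accuracy:", model.score(X_test, y_test))
```

Swap in a different algorithm or data set and the division of labor stays the same: the humans decide everything, and the technology runs the calculations.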
“Training” itself is a misleading term. The models contain algorithms that perform calculations on the input data; those calculations adjust the model’s parameters so that it can discriminate between similar inputs in the future. Once trained on a data set, AI technologies are unable to generalize broadly beyond it.
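Here is roughly what that “training” amounts to under the hood: a toy gradient-descent loop (one parameter, made-up numbers of my own choosing) in which the parameter is adjusted by repeated arithmetic, not by learning in the human sense:

```python
# "Training" a one-parameter model y = w * x by gradient descent.
# Made-up data; the point is that training is repeated arithmetic.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # human-gathered (x, y) pairs
w = 0.0                                        # the adjustable parameter
learning_rate = 0.05                           # a human-chosen setting

for step in range(100):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                  # all the "learning" there is

print(f"learned w = {w:.3f}")                  # converges toward 2.0
```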

Natural Intelligence, provided and demonstrated by humans

In AI, we are seeing human (natural) intelligence at work, building tools that can “outperform” people under some conditions.

Just as telescopes are tools that improve the distance vision of people and hydraulic jacks are tools that increase the physical strength of people, so too AI technologies are tools that help people detect patterns they could not otherwise detect.

Ineffective Decision Tree

Let’s examine one branch of the MTR flow chart, “Can it reason?” Here’s the logic it suggests:

If the reader says NO (it can’t reason), go back to START.
Else (YES): “Is it looking for patterns in massive amounts of data?”
If NO, then “…it doesn’t sound like reasoning to me”; go back to START.
Else (YES): “Is it using those patterns to make decisions?”
If NO, then “…sounds like math”; go back to START.
Else (YES): “Neat, that’s machine learning” and “Yep, it’s using AI.”
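To see how little the chart actually decides, here is the same logic transcribed as code (a hypothetical sketch of my own; the predicate names are inventions, not anything MTR published). It cannot be run to a useful answer, because the chart never says how to evaluate any of its questions:

```python
# The MTR flow chart, transcribed literally as code. The predicate
# names below are my own inventions; they mark the chart's three
# branch points, for which the chart gives no evaluation procedure.

def can_reason(it) -> bool:
    raise NotImplementedError("The chart never says how to decide this.")

def looks_for_patterns_in_massive_data(it) -> bool:
    raise NotImplementedError("The chart never says how to decide this.")

def uses_patterns_to_decide(it) -> bool:
    raise NotImplementedError("The chart never says how to decide this.")

def mtr_flow_chart(it) -> str:
    if not can_reason(it):
        return "Go back to START: it can't reason."
    if not looks_for_patterns_in_massive_data(it):
        return "Go back to START: doesn't sound like reasoning to me."
    if not uses_patterns_to_decide(it):
        return "Go back to START: sounds like math."
    return "Neat, that's machine learning. Yep, it's using AI."

# mtr_flow_chart("any technology")  # fails at the very first branch
```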

How does one answer the first question, “Can it reason?” Any reasoning on display comes from the natural intelligence of the program’s designer, a human.

How do you know the technology is “looking for patterns in massive amounts of data?” How do you know it’s the technology that’s somehow doing that as opposed to the technology blindly following the programmer’s rules?

How do you determine whether the technology is using the patterns to make decisions?

The flow chart is ineffective because, if you examine the specific decision points, there is no guidance on how to determine Yes or No at any of the branches. Without that guidance, the chart cannot deliver useful results.

So What Good Is Artificial Intelligence?

Artificial Intelligence can:

  • Provoke philosophical inquiry
  • Stimulate creative imaginations
  • Create great entertainment and fiction
  • Inspire researchers (such as biologists, psychologists and computer scientists) to come up with ever-improving technologies that appear to be smart or intelligent

Philosophers, creatives, entertainers and researchers should continue pursuing the quest of creating an Artificial Intelligence. That does not mean that anyone should believe we have already created a “true” Artificial Intelligence (whatever that would be).

Modern AI research has its roots in a 1955 proposal by McCarthy, Minsky, Rochester and Shannon for a Dartmouth Summer Research Project on Artificial Intelligence. They said:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Note the proposal was based on conjecture! That means “suppose the following as if it were true.” That doesn’t mean it is true. It’s like a middle school science teacher asking her students to “suppose gravity stopped working — how would that affect people?” 

Also note that 63 years after the AI conjecture was published:

  1. We have made great progress! Pushing researchers to create ever-improving technical capabilities (whether labelled AI or not) has produced technologies that can solve some problems previously reserved for humans, and other problems that not even humans could handle before.
  2. We do not understand the elements of human intelligence well enough to simulate it.

For a better understanding of the limitations of AI as it stands today, look at:

  1. Gary Marcus’s controversial paper on the limitations of deep learning
  2. An earlier blog post I wrote on AI technical maturity issues
  3. Yann LeCun’s lecture on the Power and Limits of Deep Learning