
Interest in the Ethics of AI, as reflected in Google Web Searches, has grown dramatically over the past couple of years.

In the past week, many publications, including the New York Times and the Wall Street Journal, have written about the Ethics of AI and about applying ethical standards to real-life situations.

Here are seven rules to consider for dealing with technology ethics.

Rule 1: Apply these rules and recommendations to all systems

Scope: Consider the potential ethical problems in all classes of systems, regardless of the sophistication of their algorithms, the presence or absence of AI technology, whether they are procedural or data-driven in focus, technology-rich or technology-free in implementation, or purely human in nature versus a mixture of human and technology.

Action: Inspect both paper-based procedure manuals and computer algorithms for potential ethical issues. Human systems and behaviors often reflect unconscious biases that we should consider unethical; discriminating between people based on gender, race, age, religion, country of origin or physical disability is one example.

Go beyond AI. Here’s a quick take on AI:

AI is an endeavor to simulate (or surpass) the intelligence of people without really understanding the essence of human intelligence. (Which is OK as a premise but let’s not fool ourselves into thinking we have any idea of how to really do this.)

Source: Enterprise AI Assumption 1 (The Analyst Syndicate)

AI is a vogue term for advanced algorithms that amaze us by doing things we thought technology couldn’t do. People have been fantasizing about AI for millennia. There are very valid concerns about autonomous weapons, which I wrote about recently, but they’re not new and they don’t require AI:

  • Autonomous weapons, for example, existed before we had any technologies known as “artificial intelligence.” They date back to land mines of the 1600s and naval mines of the 1700s. More recent examples include the autonomous machine guns installed along the East German border fortifications.
  • Some of the ethical concerns specific to AI are the product of fiction writers’ fertile imaginations, not the realities of AI technology.

Don’t limit ethical consideration to algorithms either. For example, include ethical issues related to:

  • Data: classification and tagging, inclusion and exclusion, production, retention, context and openness to inspection by others (a sketch of tracking such metadata follows this list)
  • Usage guidelines: interpretation, judgment and freshness
    • Interpretation: which conclusions can be drawn from a result and which conclusions would be inappropriate
    • Judgment: latitude, other factors to consider, alternative ways to come to a sensible conclusion
    • Freshness: what was ethical in the past may be unethical in the future, and the converse is also true
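
To make the data-related items above concrete, here is a minimal sketch of what tracking such governance metadata might look like. The class and field names are hypothetical, illustrating one possible shape rather than any standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetEthicsRecord:
    """Hypothetical governance metadata kept alongside a dataset so the
    ethical questions above (classification, retention, interpretation
    limits, freshness) stay open to inspection."""
    name: str
    classification: str                      # e.g. "public", "confidential"
    tags: list[str] = field(default_factory=list)
    provenance: str = ""                     # how the data was produced
    retention_until: date | None = None      # when the data must be deleted
    approved_uses: list[str] = field(default_factory=list)  # interpretation limits
    last_ethics_review: date | None = None   # "freshness" of the ethical judgment

    def review_is_stale(self, today: date, max_age_days: int = 365) -> bool:
        """Flag datasets whose ethics review is missing or outdated,
        since what was ethical in the past may not be now."""
        if self.last_ethics_review is None:
            return True
        return (today - self.last_ethics_review).days > max_age_days
```

A periodic job could sweep all such records and surface stale reviews, turning the “freshness” principle into a routine check rather than a one-time judgment.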

Rule 2: Force practical ethics issues into public visibility. Sunlight is the greatest disinfectant.

Louis Brandeis, former Associate Justice of the Supreme Court of the United States, said:

Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.

The Washington Post says “Democracy Dies in Darkness.”

It’s even deeper than that: society ultimately dies in darkness. Shine the light.

Suppressing visibility into certain AI research because some of its uses appear, at least to some, to be unethical may increase the chance that the unethical uses will be developed, exploited and hidden. This is one of my concerns about OpenAI’s decision to withhold some of its GPT-2 technology and data, described in more detail under the section header “Disappointing News — GPT-2 OpenAI story” in my Daily Beats #3 post.

Rule 3: Use the full range of approaches to deal with bias

Many forms of bias exist. Some are blatant, like putting a thumb on the scale to change a major decision. Some are perceptual or cognitive; at least 192 different cognitive biases have been identified.

Detecting biases can be a difficult task.

Determining which are ethical and which are not can be even more difficult.

Not all biases are universally unethical. Consider, for example, a society that creates financial incentives for citizens to save for retirement. Is it behaving ethically? Is it ethical for all, most, some or none of its citizens?

Society’s perceptions of ethics and bias evolve over time. They can also vary depending on a particular context. A core assumption behind situational ethics is that there are few if any absolute moral standards. Ethics and morality are context dependent.

Recommended Actions

  • Respect data privacy regulations and best practices.
  • Crowdsource analysis of data and plans. Disclose intended uses for the data. While respecting privacy concerns, make sanitized versions of the data publicly visible so anyone can examine them and identify ethical concerns.
  • Hire philosophers, ethicists and social scientists to more broadly inform your analysis and decision making.
  • Empathy and isolation matter. Establish an office of the ombudsperson to collect ethical concerns from inside and outside the organization. The office should protect the privacy of those who raise concerns.
  • Engage outside auditors to uncover data and algorithmic bias in your enterprise’s processes (technical and human).
  • Explore the bias-detecting technologies in the market today (e.g., from Google, IBM and Accenture); a minimal example of one common bias metric follows this list.
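
As one concrete illustration of what such tools measure, here is a minimal sketch of a single common bias check, the disparate impact ratio, applied to hypothetical hiring data. The records, group labels and the 0.8 threshold (the informal “four-fifths rule”) are illustrative only; production tools such as IBM’s open-source AIF360 toolkit compute this and many other metrics:

```python
# Toy records: (group, favorable_outcome). Entirely hypothetical hiring data.
records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group: str) -> float:
    """Fraction of a group that received the favorable outcome."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate("A")  # assumed privileged group, for illustration
rate_b = selection_rate("B")  # assumed unprivileged group

ratio = rate_b / rate_a  # disparate impact ratio
print(f"Selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")

# The informal "four-fifths rule" flags ratios below 0.8 for human review.
if ratio < 0.8:
    print("Potential adverse impact: investigate before trusting this process.")
```

A ratio well below 0.8, as in this toy data, does not by itself prove unethical bias; it is a signal to apply the human judgment and auditing steps above.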

Rule 4: Employees at all levels have rights and responsibilities.

Each employee has a responsibility to surface ethical concerns, in good faith and with no retribution allowed. Ultimately, each individual has to vote their conscience and, where they believe it is justified, seek more suitable employment.

Rule 5: Treat multi-use technologies fairly.

Multi-use technologies are not themselves unethical, but certain uses may be. For example, facial recognition technology can improve safety while reducing security barriers. The same facial recognition technology could also be used for unethical purposes. See Rule 4.

Rule 6: Government cooperation

A business can choose to do business with various governments and government activities, or choose not to. For example, Microsoft employees recently protested the company’s contract to sell HoloLens 2 devices to the US military.

Microsoft CEO Satya Nadella defended the decision, saying:

“We’re not going to withhold technology from institutions that we have elected in democracies to protect the freedoms we enjoy”

Governments have done many things that are ethical and others that are unethical; you can make up your own list. See my January post about autonomous weapons. They’re not going away.

In most nations where civilians rule the military and the government follows the rule of law, the military is not the enemy.

If you’re concerned, work to improve the quality of the civilian government and its institutions, and get more involved in the political process.

If the situation is sufficiently difficult from an ethical point of view, find a firm that explicitly avoids government business and go there.

Rule 7: Culture counts

Having values and ethics is great, but they only mean something if they are adhered to even when doing so seems to go against the direct self-interest of the organization.

That means fostering a culture not just of openness but one that encourages and celebrates tough decisions, such as walking away from a juicy sale because closing it would violate your values or ethics. It means always welcoming (at least internally) complaints, critical questions and even whistleblowers (although whistleblowers should not be necessary if there is true openness), and never punishing people for erring on the side of being ethical.

Actions under Rule 3 should apply to employees at all levels of the organization.

Disclosure: This post is my own opinion. I wrote it myself, but Bard Papegaaij deserves credit for Rule 7. I have not been compensated for it by anyone.