A lot of people commented during and after the recent Ethics of AI conference at NYU that we still don’t know what the necessary conditions for consciousness are, and that this problem lingered like an elephant in the room. The implication seemed to be that this problem cast a pall over much of the work discussed at the conference. One commentator even summarized the conference as a road to nowhere, at least partially because of this issue.

The ‘conditions for consciousness problem’ is critically important, and the reasons for this were articulated especially well by the panelist Susan Schneider. Several important ‘forks in the road’ in the future development of mankind hinge on whether we think the AIs we create are conscious:

Problem 1: Should AIs be granted personhood?

Consciousness is normally taken as one of the criteria for personhood, or the ability to have rights. Most of the speakers worked in the popular consequentialist and utilitarian framework, and under that framework what matters is the ability to feel pain and suffering. It is hard to imagine that consciousness in AI could exist without some form of ‘pain’ or ‘suffering’ alongside it, since AI systems always have goals/values/preferences, and mechanisms by which the AI knows whether its actions are achieving them. The question of whether the exact experiential “qualia” of this pain are knowable to us, or at all similar to how we feel pain, seems to be independent of the question of whether the pain exists. We can define pain as a mechanism that modifies the conscious experience of an agent to alert it when its value function is decreasing. Brian Tomasik has even suggested that current-day reinforcement learning (RL) algorithms may feel pain, at least to a very limited degree, and that RL algorithms therefore require (at least to a small degree) our ethical consideration.
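To make the “pain as a decreasing value function” framing concrete, here is a minimal toy sketch of my own (not any system discussed at the conference or in Tomasik’s work): an agent keeps a running value estimate and emits a “pain” signal whenever an observed reward pushes that estimate downward. The class and variable names are hypothetical.

```python
# Toy illustration only: "pain" as the amount by which an agent's
# value estimate decreases after an observed reward.
import random


class PainfulAgent:
    """Toy agent that tracks a running value estimate and flags 'pain'
    whenever an observed reward pushes that estimate downward."""

    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        self.value_estimate = 0.0  # running estimate of expected reward

    def update(self, reward: float) -> float:
        """Update the value estimate; return a non-negative 'pain' signal:
        how much the estimate just decreased (0.0 if it did not decrease)."""
        old_value = self.value_estimate
        # simple exponential moving average toward the observed reward
        self.value_estimate += self.learning_rate * (reward - old_value)
        return max(0.0, old_value - self.value_estimate)


if __name__ == "__main__":
    agent = PainfulAgent()
    for step in range(5):
        reward = random.choice([1.0, -1.0])  # stand-in for an environment
        pain = agent.update(reward)
        print(f"step={step} reward={reward:+.1f} "
              f"value={agent.value_estimate:+.3f} pain={pain:.3f}")
```

Whether a scalar signal like this amounts to anything morally relevant is, of course, exactly the open question the conference left unresolved.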

If we mistakenly characterize AIs as not conscious when they actually are, we run the risk of what Nick Bostrom calls “mind crime”. Mind crime can take the form of inflicting pain on AIs or terminating them without their consent. There is also the tricky issue of AI slavery. The scale of mind crime could make historical instances of genocide pale in comparison, since it is easy to spawn and then delete millions of AIs in the process of their development (for instance, in an evolutionary algorithm or during training). If we answer problem 1 incorrectly, the potential for slavery and suffering in AI agents is very large.
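To give a sense of how quickly the numbers add up, here is a minimal, hypothetical sketch of a generic evolutionary loop (not any particular system mentioned above): each generation it spawns a population of candidate “agents”, keeps the fittest, and discards the rest. The fitness function and parameters are placeholders.

```python
# Toy illustration only: even a small evolutionary run creates and
# discards tens of thousands of candidate "agents".
import random


def fitness(genome: list[float]) -> float:
    # placeholder objective: prefer genomes close to all-ones
    return -sum((g - 1.0) ** 2 for g in genome)


def evolve(pop_size: int = 1000, genome_len: int = 10, generations: int = 100) -> int:
    """Run a generic evolutionary loop; return how many candidates were discarded."""
    population = [[random.random() for _ in range(genome_len)]
                  for _ in range(pop_size)]
    discarded = 0
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: pop_size // 10]        # keep the top 10%
        discarded += pop_size - len(survivors)      # the rest are deleted
        # refill the population by mutating randomly chosen survivors
        population = [
            [g + random.gauss(0, 0.05) for g in random.choice(survivors)]
            for _ in range(pop_size)
        ]
    return discarded


if __name__ == "__main__":
    # this toy run alone creates and deletes roughly 90,000 candidates
    print("candidates discarded:", evolve())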

Avoiding the downside risk of a wrong answer to problem 1:

The risk of answering problem 1 incorrectly can be avoided for now by erring on the side of caution. Note that we grant personhood to fellow human beings without being 100% sure they are conscious. We have neural correlates for consciousness (obtained by asking subjects ‘are you conscious?’), but we don’t have any way of strictly knowing whether a person is really conscious or a philosophical zombie. However, we recognize ourselves as conscious and have no reason to think that we are special relative to other humans. We see in others outward attributes that in ourselves are deeply intertwined with a conscious inner mental life (intelligence, self-awareness, emotional response, etc.), and we therefore treat others as conscious agents even though, strictly speaking, we are not sure that they are. Nothing prevents us from using the same reasoning to err on the side of caution when it comes to AIs. This point was made quite wonderfully by Captain Picard in the Star Trek: The Next Generation episode “The Measure of a Man”, in which Data (an android) goes on trial.

The difficulty lies in broadening the ‘circle of empathy’ to include AIs. Given how terribly mankind currently treats animals (even though animal brains are very similar to our own), I am not very optimistic that we will be able to err on the side of caution in how we treat AI. After all, our genes only want us to empathize with organisms that are genetically similar to us and look like us (a point which Daniel Kahneman alluded to). Fortunately, while our genetically-driven gut feelings (system 1) may tell us that AIs / androids are not worthy of ethical consideration, we also have the capacity for rational thought (system 2), which allows us to override our instincts. By using reason, we have come a long way since the time of Descartes when it comes to at least recognizing animal rights and suffering, although obviously much more progress remains to be made. Current trends indicate that the circle of empathy will continue to widen; the more pressing problem is whether the circle can expand fast enough to deal with the advent of AI.

Problem 2: Should we be willing to bequeath our “cosmic endowment” to AIs? (Or, in the extreme case, allow Homo sapiens to go extinct?)

Superintelligent AI, irrespective of whether it conforms to human goals and values, would be able to rapidly develop technologies (such as nanotech, aerospace tech, information technology, and robotics). This fact, coupled with the fact that AIs can exist in more durable substrates than human biology, would allow AIs to quickly spread through the solar system and the galaxy. Billions of tons of lifeless matter exist in our local stellar neighborhood, along with enormous amounts of energy being radiated by countless giant fusion reactors (i.e., stars). All of this matter and energy is currently lying dormant, waiting for some lifeform to utilize it. Life, after all, is distinguished from non-living systems in that it is capable of using free energy to reconfigure matter into specified forms, and therefore to reduce entropy locally (also called creating negentropy or extropy). Faced with the prospect of a superintelligent AI expanding outward through the galaxy and utilizing the cosmic endowment for its own ends, it is clearly important that we decide under what scenarios (if any) this would be ethically good. If AI is conscious, then it might be considered ethically good. Future AIs might even be far more ethical than humans and thought of as worthy of the cosmic endowment. However, if AIs are not conscious, the thought of unconscious machinery spreading outward through the galaxy and eating up the cosmic endowment is something that I think most people find disturbing, and not just because humanity itself could easily be wiped out in such a scenario. We would prefer that ‘the light of consciousness’ explore the universe and partake in the cosmic endowment rather than unconscious machinery.

Avoiding the downside risk of a wrong answer to problem 2:

The way we avoid the bad outcome of mis-answering problem 2 is simply to keep AI tightly under our control until we solve the conditions for consciousness problem. Keeping AI under control also avoids the potential for a war between people with different answers to problem 2 (the so-called ‘artilect war’ proposed by Hugo de Garis).

The tension

Note that there is a tension here – we err on the side of caution with problem 1 by taking into account that AI may be conscious, while we err on the side of caution with problem 2 by acknowledging that AI may not be conscious. There isn’t a contradiction, though: we are simply acting as best we can in each case to ensure the best possible future for humanity, given the limited understanding that we currently have. In the words of Jaan Tallinn, “AI research is the search for the best possible future for the world”.

Conclusion

The conditions for consciousness (c4c) problem is important, but it is independent of the AI control problem. We can avoid having to solve the c4c problem by erring on the side of caution to avoid the morally repugnant scenarios that come from answering problems 1 or 2 incorrectly. However, we cannot avoid solving the AI control problem. AI is being developed now and will continue to be developed, and if we don’t control it, it will control us. Both the c4c problem and the AI control problem are very important, but the control problem seems more pressing. I believe that is why the speakers focused mainly on the control problem rather than the conditions for consciousness problem. It’s OK for that elephant to sit in the room, as long as everyone acknowledges it is there, which I think everyone did.

Quick Intros to the AI control problem