A lot of people commented during and after the recent Ethics of AI conference at NYU that we still don’t know what the necessary conditions for consciousness are, and that this problem lingered like an elephant in the room. The implication seemed to be that this problem cast a pall over much of the work that was discussed at the conference. One commentator even summarized the conference as a ‘road to nowhere’, at least partially because of this issue.
The ‘conditions for consciousness’ problem is critically important, and the reasons why were articulated especially well by the panelist Susan Schneider. Several important forks in the road in the future development of mankind hinge on whether we think the AIs we create are conscious: