ABSTRACT

We now have a procedural solution to our question, “Can a machine be conscious?” To attribute consciousness to some entity, we apply the following procedure. We ask:

1. Is this entity a rational agent? That is, does this entity have (a) independent purpose regardless of its contact with other agents, and (b) the ability to make what we’ve called interagency attributions on a pure or natural basis? And does it in addition have (c) the ability to learn from scratch significant portions of some natural language, and the ability to use these elements in satisfying its purposes and those of its interlocutors? If the entity does not meet these criteria, then the attribution of consciousness to it will be at best dubious. The only plausible circumstances in which some consciousness might nevertheless be attributed to beings that do not fully meet these criteria are those involving biologically related species, or organisms or devices constructed similarly to organisms or devices that meet not only these three elements but the other considerations to follow as well. Thus being able to meet the criteria of point 1 is a necessary condition for consciousness attribution.

2. Are there any paradigmatically conscious beings such as humans who have functionally preemptive causal systems? If so, does a device that meets the criteria of point 1 also have functionally preemptive elements in the material embodiment of its cognitive processor? If the device does not have these elements whereas the paradigmatically conscious beings do, then there are grounds on which to withhold consciousness attributions from it. On the other hand, if the paradigmatically conscious beings do not have functionally preemptive causal systems, then the lack of such systems in the entities in question, the differences in their material base, or the differences between their manner of carrying out cognitive functions and the human manner, cannot be held to be grounds for withholding consciousness attributions from them. As to the question of how to treat this issue in the absence of an adequate understanding of the human brain, it would seem that only positive evidence of as unusual and miraculous an arrangement as functional preemption would be grounds to work from. Assumptions such as Searle’s, that the material base of brain consciousness will be discovered to be as far removed from mechanism as quantum physics is from Newtonian mechanics, are not at the moment justified, to put it mildly.

3. On the assumption that the human brain is a system in which physics creates or serves function, some consciousness may be attributed to any system that fully meets the three criteria of point 1. As I’ve argued in Part 1, we’re already in a position to build machines that meet those criteria; hence we’re also in a position to build machines with some degree of consciousness (so long as the above assumption holds true). However, we must also ask, “Does the entity exhibit various consciousness-associated factors such as emotional life, wakefulness, a sense of continuity with its past, and the development of the ego-conscious stance?” The more it does so, the more degrees of consciousness may be attributed to it.