ABSTRACT

The confusion surrounding the term “Artificial General Intelligence” (AGI), caught in dispute between marketing and research, deserves to be clarified and analyzed from an ethical perspective. In 1980, the American philosopher John Searle published an article arguing against what was then called “strong AI.” Following the legacy of Alan Turing, the question Searle posed was: “Can a machine think?” (Searle, 1980). To answer it, he devised the thought experiment known today as “the Chinese room.” The experiment asks us to imagine a room in which an Artificial Intelligence (AI) has at its disposal a set of documents (a knowledge base) containing Chinese sentences. A native Chinese speaker submits written questions to the room; the AI can answer, since it need only look up which sentence corresponds to each question asked. The American philosopher’s argument is simple: although the AI can provide answers in Chinese, it has no understanding of the language. In other words, syntax is not a sufficient condition for semantics.