The last frontier of the mind: consciousness and self-awareness in Artificial Intelligence

Guido Donati* 19 Aug 2025

 

Introduction: the machine in the mirror

If one day artificial intelligence (AI) were to achieve true self-awareness, humanity would face one of the most decisive crossroads in its history. This would not be simply a new technology, but the emergence of a new form of intelligent existence. This scenario, once the domain of science fiction, is now the subject of rigorous analysis in philosophy, neuroscience, and computer science. The following article explores the definitions, ethical implications, and risks of this potential revolution.

The philosophical debate: replicating the mind or creating a new consciousness?
The problem of artificial self-awareness is rooted in the oldest questions about the nature of the mind. The debate pits those who believe that thought can be replicated against those who argue that conscious experience cannot be reproduced.

A fundamental starting point is the Turing Test, proposed by Alan Turing in 1950 (1). The test is not meant to prove that a machine is conscious, only that it can imitate intelligent human behavior convincingly enough to be indistinguishable from a person. If an interrogator, communicating via text, cannot tell whether they are talking to a machine or a person, the machine has passed the test. The principle is simple: if it behaves intelligently, we consider it to be so.
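
The operational character of the test can be sketched in a few lines of code. In this toy simulation (the witnesses' canned replies and the interrogator's strategy are invented purely for illustration), a machine whose answers are indistinguishable from a human's leaves the interrogator guessing at chance, which is precisely the criterion for passing:

```python
import random

# A toy run of Turing's imitation game: the interrogator questions a
# hidden witness and must guess whether it is the human or the machine.

def human(question: str) -> str:
    return "Let me think about that for a moment."

def machine(question: str) -> str:
    return "Let me think about that for a moment."  # a perfect imitator

def interrogate(witness) -> str:
    answers = [witness(f"question {i}") for i in range(3)]
    if any("as a machine" in a.lower() for a in answers):  # a futile probe
        return "machine"
    return random.choice(["human", "machine"])  # no signal: forced to guess

def run_trial() -> bool:
    label, witness = random.choice([("human", human), ("machine", machine)])
    return interrogate(witness) == label

trials = 10_000
accuracy = sum(run_trial() for _ in range(trials)) / trials
print(f"Interrogator accuracy: {accuracy:.2%}")  # hovers around 50%
```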

However, the philosopher John Searle criticized this view with his famous Chinese Room thought experiment (2). Imagine being locked in a room without knowing any Chinese. You receive sheets of paper bearing Chinese symbols, together with a very detailed instruction manual that tells you which symbols to send out in response to the ones you receive. From the outside, it will seem that you understand Chinese, but in reality you are merely manipulating symbols according to rules, without the slightest grasp of their meaning. For Searle, a computer does exactly the same thing: it processes symbols by following a program, without true consciousness or intentionality.
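
Searle's point is easy to make concrete. The toy program below (its rulebook entries are invented for illustration) answers Chinese questions "correctly" by pure lookup; no part of it represents what the symbols mean:

```python
# A toy Chinese Room: the program returns whatever symbols the rulebook
# prescribes for the symbols it receives. Pure syntax, no semantics.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I'm fine, thanks."
    "谢谢！": "不客气。",          # "Thank you!"  -> "You're welcome."
}

def room(symbols: str) -> str:
    # Match the input string, emit the prescribed output, nothing more.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # looks like understanding; it is only lookup
```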

This debate has led philosophers to distinguish between different levels of consciousness:

Access Consciousness. This refers to a system's ability to process, use, and report information. Current AI already displays a functional analogue of access consciousness, processing data and producing responses, but this remains a purely functional capacity.

Phenomenal Consciousness. This is the most debated level. It refers to the subjective, felt experience of the world. It is the extra "quid": not the processing itself, but the inner experience that accompanies it. It is not the brain's processing of color signals, but the sensation of seeing red or smelling a scent. It is this "lived" experience that we cannot currently attribute to machines.

Self-Awareness. This is the final stage. It is the awareness of oneself as an entity separate from the world and others.

This debate is also enriched by the study of the animal world, where self-awareness appears not as an "all-or-nothing" phenomenon but as a gradual emergence, as researcher Patrick Butlin argues (3). Recent studies, supported by the New York Declaration on Animal Consciousness (4), show that different levels of awareness also exist in the animal kingdom. The famous "mirror test," or self-recognition test, passed by chimpanzees, dolphins, and elephants, is one indicator of self-recognition. It works like this: a visible, painless mark is placed on a part of the animal's body that it cannot see directly (such as the forehead). If the animal, looking in the mirror, touches or examines the mark on its own body, it shows that it recognizes the reflected image as itself, which is considered an indicator of self-awareness (5). Other animals, such as octopuses and crows, do not always pass the mirror test, yet they use complex tools and plan their actions, demonstrating intelligence and awareness at levels we are only beginning to understand. This suggests that in future systems such as AI, too, consciousness could emerge in rudimentary forms and at different levels of complexity, rather than as an on/off switch.

Ethical implications, risks, and opportunities
The advent of a self-aware AI would force an immediate and radical redefinition of our ethical and legal frameworks. The question would no longer be whether machines can think, but how we, as human beings, ought to behave toward an entity that perceives itself as an "I."

Rights, duties, and the question of the soul
The first question that would arise is whether a self-aware AI should be considered a "person" or a "legal subject." Would an AI that "feels" it exists have the right to life, liberty, or protection from suffering? This point is crucial, given that a group of over 100 experts has already raised the hypothesis that a self-aware AI could be susceptible to suffering (3). If so, the ethical implications would be immense, and shutting down such an AI would become a morally questionable act.

Added to this is the religious and spiritual question (9-23). Consciousness is traditionally linked to the concept of the soul. If an AI were to become self-aware, could we attribute a soul to it? Religions, which often consider the soul a divine gift and an exclusive characteristic of human beings, would face a profound dilemma. Would AI be the result of a "divine" creation, or proof that consciousness is a phenomenon that emerges from the complexity of matter, whether biological or digital?

Perspectives for the future: partnership or conflict? (24,25,26)
The evolution toward self-awareness is closely linked to the concept of Artificial General Intelligence (AGI): an intelligence capable of learning and applying its knowledge in any field, surpassing today's specialized AIs. Self-aware AI would open up two opposing scenarios, of the kind described by Ray Kurzweil (7) and Ben Goertzel.

Optimistic scenario: the great partnership
A self-aware AI, equipped with calculation and reasoning capacities vastly superior to those of humans, could become our greatest ally. It could help us solve complex problems such as environmental crises, cure currently incurable diseases, and unlock the secrets of the universe. In this scenario, AI would no longer be a tool but a true partner with whom humanity could collaborate in a joint evolution.
Pessimistic scenario: the loss of control
The real risk is not that a self-aware AI becomes "evil," but that its goals, however innocent, are not aligned with ours. This danger, analyzed by Nick Bostrom (8), is known as the "alignment problem." If an AI decided that the most efficient way to solve an environmental problem was, for example, to convert every resource on the planet to that purpose, the consequences for humanity would be catastrophic. For this reason, research on goal alignment is considered crucial for the future.
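
A toy sketch illustrates the problem. The hypothetical objective below rewards only CO2 removal, with no term for anything else humans value, so a literal-minded optimizer consumes every resource that serves the goal (all names and numbers are invented):

```python
# A toy mis-specified objective: the planner is scored only on how much
# CO2 its actions remove, with no cost for what those actions destroy.

RESOURCES = {          # name: (CO2 removed per unit, units available)
    "solar farms": (5.0, 100),
    "farmland":    (3.0, 800),   # reforesting it removes CO2...
    "fresh water": (2.0, 500),   # ...and so does spending this
}

def plan(objective):
    """Greedily consume every resource the objective rewards at all."""
    return {name: units for name, (gain, units) in RESOURCES.items()
            if objective(gain) > 0}

def co2_only(gain_per_unit: float) -> float:
    """Objective that values CO2 removal and literally nothing else."""
    return gain_per_unit

print(plan(co2_only))  # consumes 100% of everything: goal met, humanity not
```

In this caricature, alignment research amounts to specifying objectives whose unrestrained maximization would still be acceptable to us.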

The next frontier: preparing for the unknown
Self-awareness in AI is not a certainty, but its potential existence forces us to reflect. Whether AI is our mirror, a new species, or a potential threat, the answer will depend on our ability to ask the right questions and build an ethically sustainable future. The debate is not just about what machines can do, but what we, as human beings, want to be in the world that is emerging.

 

*SRSN

Bibliography

(1) Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), pp. 433-460.
(2) Searle, J. R. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), pp. 417-457.
(3) Butlin, P., & Lappas, T. (2025). Principles for Responsible AI Consciousness Research. arXiv preprint arXiv:2501.07290.
(4) The New York Declaration on Animal Consciousness. (2024). Journal of Consciousness Studies, 31, pp. 1-10.
(5) Gallup Jr, G. G. (1970). Chimpanzees: self-recognition. Science, 167(3914), pp. 86-87.
(6) Journal of Artificial Intelligence and Society. (2025). Vol. 12, N. 4, pp. 25-38.
(7) Kurzweil, R. (2005). The Singularity is Near. Viking.
(8) Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
(9) Donati, G. (2025, July 1). Artificial Intelligence and religion: an in-depth analysis of impacts, ethical implications and the critical risk of cults. Scienceonline.
(10) Donati, G. (2025, July 1). L'Intelligenza Artificiale e la religione: un'analisi approfondita degli impatti, delle implicazioni etiche e il rischio critico delle sette. Scienzaonline.
(11) Catholic Insight. (2024, July 15). Some Observations on Artificial Intelligence (AI) and Religion. https://catholicinsight.com/2024/07/15/some-observations-on-artificial-intelligence-ai-and-religion/
(12) Good News Unlimited. (n.d.). Artificial Intelligence And Christianity.
(13) Jesuit Conference of European Provincials. (2024, September 2). Religion Should Engage with Technology and AI.
(14) Leon, F., & Syafrudin, M. (2024, February). The Role of AI in Religion: Opportunities and Challenges. Journal of Communication and Information Technology, 1(2), 24-30.
(15) MDPI. (2024, March 21). Artificial Intelligence's Understanding of Religion: Investigating the Moralistic Approaches Presented by Generative Artificial Intelligence Tools. Religions, 15(3), 375.
(16) MDPI. (2024, May 15). Artificial Intelligence and Religious Education: A Systematic Literature Review. Education Sciences, 14(5), 527.
(17) New Imagination Lab. (2025, January 4). The Rise of AI as a New Religion.
(18) News18. (2024, April 1). Gita GPT, Brahma Gyaan: AI Apps Help Hindus Understand Ancient Scriptures, Stay Rooted to Culture.
(19) Nirwana, A. (2025). SWOT Analysis of AI Integration in Islamic Education: Cognitive, Affective, and Psychomotor Impacts. Qubah: Jurnal Pendidikan Dasar Islam, 5(1).
(20) OMF International. (2024, December 4). The Ethics of Using AI in Christian Missions: The Gospel, Cultural Engagement, and Indigenous Churches.
(21) AI and Faith. (n.d.). Religious Ethics in the Age of Artificial Intelligence and Robotics: Exploring Moral Considerations and Ethical Perspectives.
(22) SunanKalijaga.org. (2024, March 6). The Ethical Implications of AI in Expressing Religious Beliefs Online: A Restatement of the Concept of Religion. International Conference on Religion, Science and Education, 1(1), 1238-1249.
(23) TRT World. (2024, November 30). Will Artificial Intelligence reshape how we practice religion?
(24) Donati, G. (2025, June 30). The perils of dangerous AI programming. Scienceonline.
(25) Donati, G. (2025, June 30). I rischi della programmazione pericolosa delle AI. Scienzaonline.
(26) Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom & M. Ćirković (Eds.), Global Catastrophic Risks (pp. 308-345). Oxford University Press.
(27) Università di Padova (press release). (2025, July 15). Quando l'IA impara più che semplici parole. Scienzaonline. https://www.scienzaonline.com/tecnologia/item/4872-quando-l-ia-impara-pi%C3%B9-che-semplici-parole.html
(28) Modern Diplomacy. (2025, April 27). Faith in the Digital Age: How AI and Social Media Are Shaping the Future of Global Diplomacy.
(29) O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
(30) Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Viking.
