Can the Glasgow Coma Scale be applied to AI?
Article originally published on LinkedIn in January 2024 (in French). First chapter of a series on consciousness and AI.
The emergence of consciousness in artificial intelligence sparks intense debates. Consciousness — a subjective state of perception and inner experience — is shared by certain animals (great apes, dolphins, elephants, octopuses, corvids), with no direct correlation to brain size. This observation considerably broadens the question: if consciousness is neither a human monopoly nor a function of brain mass, what are the necessary and sufficient conditions for its emergence?
The debate pits those who hold consciousness to be intrinsically biological (with experience of the external world and of alterity as prerequisites) against those who consider it an emergent property that could manifest in sufficiently complex artificial systems.
The Glasgow Coma Scale (GCS) is a neurological assessment system used in emergency medicine to evaluate the level of consciousness following trauma. It rests on three criteria: eye opening (1-4), verbal response (1-5) and motor response (1-6). A score of 15 indicates full consciousness; a score of 3 corresponds to deep coma.
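The scoring scheme above can be sketched in a few lines. This is an illustrative helper, not clinical software: the component ranges (eye 1-4, verbal 1-5, motor 1-6) come from the scale itself, and the severity bands follow the conventional classification of traumatic brain injury (13-15 mild, 9-12 moderate, 8 or below severe).

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS components into a total score between 3 and 15."""
    if not (1 <= eye <= 4):
        raise ValueError("eye opening must be 1-4")
    if not (1 <= verbal <= 5):
        raise ValueError("verbal response must be 1-5")
    if not (1 <= motor <= 6):
        raise ValueError("motor response must be 1-6")
    return eye + verbal + motor

def gcs_severity(total: int) -> str:
    """Conventional severity bands for traumatic brain injury."""
    if total >= 13:
        return "mild"
    if total >= 9:
        return "moderate"
    return "severe"
```

For example, `gcs_total(4, 5, 6)` returns 15 (full consciousness, "mild"), while `gcs_total(1, 1, 1)` returns 3 (deep coma, "severe").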
This approach evaluates the complete signal processing chain: stimulus detection, afferent (centripetal) information transport, integration and processing, then an adapted response. Consciousness is operationally defined as the ability to produce a response at a predetermined level of integration to a given stimulus, from reflex functions (nociception) to the most elaborate cognitive functions (oriented language).
Locked-in syndrome — a major brainstem lesion leaving the patient totally paralyzed but cognitively intact — demonstrates that consciousness and sensorimotor capability are dissociable. Jean-Dominique Bauby dictated The Diving Bell and the Butterfly letter by letter, by blinking his left eyelid.
The thought experiment of a locked-in syndrome ab initio (from birth) leads by analogy to the conclusion that an AI algorithm lacking sensory appendages (IoT) could never access any form of consciousness, for want of a feedback loop with an external environment. But even an AI equipped with such sensors after the fact would lack the progressive neuro-cerebral development that sensory feedback drives in a growing organism.
The Glasgow Scale is inoperative for current AI: isolated algorithms have no eyes, no motor response, no degraded levels of consciousness. The abstraction layers separating neural architectures from hardware resources (GPU, CPU, RAM) leave no room for degrees of consciousness. An LLM running on a supercomputer or on a PC will be slower on the latter, but not "less capable" — unlike a patient whose Glasgow score drops from 15 to 12.
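The hardware-independence point can be made concrete with a toy sketch (an illustration, not a real inference engine): the same deterministic computation run at two simulated "device speeds" produces identical outputs and differs only in latency, whereas a falling Glasgow score reflects a genuine loss of capability.

```python
import time

def toy_inference(tokens: list[str], device_speed: float) -> list[str]:
    """Deterministic toy 'model': output depends only on the input.

    device_speed only changes how long the computation takes,
    simulating the same weights running on faster or slower hardware.
    """
    time.sleep(len(tokens) / device_speed)  # simulated hardware latency
    return [t.upper() for t in tokens]      # identical result on any device

# Same input, different hardware: slower, but not "less capable".
fast = toy_inference(["hello", "world"], device_speed=1e6)
slow = toy_inference(["hello", "world"], device_speed=1e2)
assert fast == slow
```

The design choice here mirrors the article's argument: capability lives in the algorithm, latency lives in the substrate, and the two are fully decoupled — the opposite of the clinical situation the GCS was built for.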
The concept could only apply in an integrated cybernetic vision: an organism equipped with IoT sensors, a central intelligence and mechanical effectors. But even such a system (smart thermostat, digital factory, Boston Dynamics robot) would have no need to be “conscious” in order to function — adaptive reactivity does not imply consciousness.
The question of artificial consciousness will shape future legislation: liability in autonomous vehicle accidents, delegation of lethal decisions to military drones, abolition of discernment (Article 122-1 of the French Penal Code). Could abolition of discernment be invoked for an AI? Under the effect of “psychotropic” code or through remote hacking? These questions demand a rigorous definition of what it means to “be conscious” before legislating.
The clinical definition of consciousness rests on evaluating an integrative response to a stimulus — it presupposes an architecture combining a processing unit, sensory receptors and motor effectors. An isolated algorithm does not satisfy this minimal condition. Degrees of consciousness could only arise in distributed and massively parallel architectures, combining thousands of specialized processing structures. It is therefore necessary to confront other definitions — neurobiological, philosophical — to evaluate the alleged consciousness of AI systems (Chapter 2 to follow).