Jane K. Student




The Luminous Room: A Criticism of Searle’s Chinese Room

Prof. Steven Alford

Month 12, 20xx






I certify that I am the author of this paper and that any assistance received in its preparation is fully acknowledged and disclosed in the paper.  I have also cited any sources from which I used data, ideas, or words, either quoted directly or paraphrased.  I also certify that I prepared this paper specifically for this section of this course.





Jane K. Student


Jane K. Student

Prof. Steven Alford

HONR 2000f

Month 12, 20xx

The Luminous Room: A Criticism of Searle’s Chinese Room

According to Paul and Patricia Churchland, the goal of the classical artificial intelligence effort has been to construct a computer program governed by a set of rules that can effectively and recursively mimic patterns of human thought (32).  It follows that a machine would be able to think to the extent that the program it executes simulates the human thought process.  If a program allowed the machine to succeed at such mimicry, the computer might be said to have achieved conscious intelligence – thinking.  John Searle’s Chinese room thought experiment (1980) was constructed to demonstrate that even if a symbol-manipulating (SM) machine had the right program to pass the Turing test (in Chinese), the machine would still fail to exhibit semantic understanding of the Chinese language: the SM machine could not achieve conscious intelligence (34).  

Searle’s Chinese room thought experiment drew a number of challenges.  Perhaps the most notable criticism came from two San Diego professors of philosophy, Paul and Patricia Churchland.  The Churchlands took issue with the third axiom Searle proposed in his Chinese room argument (see Appendix): “Syntax by itself is neither constitutive of nor sufficient for semantics” (qtd. in Churchland and Churchland 34).  This axiom is the crux of Searle’s argument, and the Churchlands proposed their own example to counter it: the Luminous room thought experiment. 

The Churchlands framed the Luminous room argument with three Searle-like axioms and a conclusion (see Appendix).  The thought experiment asks one to consider the following argument as an early objection to James Clerk Maxwell’s 1864 theory that the oscillation of electric and magnetic forces produced light:

Consider a dark room containing a man holding a bar magnet or charged object.  If the man pumps the magnet up and down, then, according to Maxwell’s theory of artificial luminance (AL), it will initiate a spreading circle of electromagnetic waves and will thus be luminous.  But as all of us who have toyed with magnets or charged balls well know, their forces (or any other forces for that matter), even when set in motion, produce no luminance at all.  It is inconceivable that you might constitute real luminance just by moving forces around. (Churchland and Churchland 35) 


The Churchlands offer several responses Maxwell might pose to counter this outwardly damaging challenge to his theory.  First, although it is “intuitively plausible” (Churchland and Churchland 35) that forces alone cannot be luminous, it is not certain (as the third axiom implies).  Second, the example establishes nothing remarkable about the nature of light.  Finally, the Churchlands suggest that what is truly needed is additional research to pin down the interrelated properties of electromagnetic waves and light.  Clearly, the lesson the Churchlands intended one to draw from the Luminous room experiment is that inconceivability is not impossibility: “Plainly, what people can or cannot imagine often has nothing to do with what is or is not the case, even where the people involved are highly intelligent” (35). 

The Churchlands conclude that Searle erred in his Chinese room argument by equating the inconceivable with the impossible.  Certainly, the Chinese room is “semantically dark” (Churchland and Churchland 35), but the perceived strength of that darkness is not an appropriate measure for determining whether a symbol-manipulating machine could also be a thinking machine. 

Nevertheless, Searle and the Churchlands are similarly skeptical that a computer could ever think – at least as far as classical AI is concerned.  The Churchlands contend that by pursuing different architectures (specifically, those modeled on the brain’s more parallel organization) a machine may one day achieve conscious intelligence: “Artificial intelligence, in a nonbiological but massively parallel machine, remains a compelling and discernible prospect” (37). 







Appendix

Searle’s Chinese room argument:

Axiom 1. Computer programs are formal (syntactic).

Axiom 2. Human minds have mental contents (semantics).

Axiom 3. Syntax by itself is neither constitutive of nor sufficient for semantics.

Conclusion 1. Programs are neither constitutive of nor sufficient for minds.

The Churchlands’ Luminous room argument:

Axiom 1. Electricity and magnetism are forces.

Axiom 2. The essential property of light is luminance.

Axiom 3. Forces by themselves are neither constitutive of nor sufficient for luminance.

Conclusion 1. Electricity and magnetism are neither constitutive of nor sufficient for light.


Figure 1.  The Churchlands constructed the Luminous room so that the argument would emphasize the problematic nature of Searle’s third axiom.  The figure above (taken from Churchland and Churchland 33) juxtaposes the two arguments. 


Works Cited

Churchland, Paul M., and Patricia Smith Churchland. “Could a Machine Think?” Scientific American Jan. 1990: 32-37.