About 35 years ago, an American philosopher named John Searle proposed what has come to be known as the Chinese Room Argument. His paper, "Minds, Brains, and Programs," is about artificial intelligence generally, and about the famous Turing Test specifically. (The Turing Test says that if a machine can reliably pass as a human in an online chat, then the machine counts as intelligent.)
Searle thinks the Turing Test doesn’t work, and the Chinese Room Argument is his thought experiment to show why.
The quickie version: Imagine that you’re locked in a room containing lots and lots of books filled with a bunch of markings. A mail slot in one of the walls is your only access to the outside world. On the table in front of you is yet another book filled with some instructions in English. (We’re also going to assume that you speak only English. Americans and Brits do have at least some stereotypes in common.)
Every once in a while, a sheet of paper comes through the mail slot with some markings on it. You use the instruction book to look up those markings. The book directs you to find a different set of markings in one of the books on the shelves, copy them onto a new sheet of paper, and push the result back through the mail slot.
As you’ve probably deduced, the markings are actually Chinese writing, and the sheets coming in through the mail slot are questions in Chinese. The markings you’re looking up and sending back out are perfectly coherent answers to those questions. To anyone outside the room, you would pass the Turing Test for speaking Chinese.
But you don’t understand a word of Chinese.
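If you want to see the room in programming terms, here’s a minimal sketch of the rulebook as a pure lookup table. The specific rules and phrases below are hypothetical stand-ins (Searle’s paper contains no code); the point is only that the program maps input strings to output strings without representing what either side means:

```python
# A toy "Chinese Room": the rulebook is just a table mapping input
# symbol strings to output symbol strings. Nothing in the program
# represents the *meaning* of any symbol.
# (These entries are invented for illustration; a rulebook that could
# actually pass the Turing Test would be astronomically larger.)

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's lovely today."
}

def room(message: str) -> str:
    """Look up the incoming markings and copy out the prescribed reply."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(room("你好吗？"))  # a fluent reply, produced with zero understanding
```

However large you make the table, nothing changes in kind: the program is still matching shapes and copying out answers, exactly as you were doing in the room.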
Searle concludes that the ability to manipulate symbols based purely on their shapes (or syntax) is not sufficient for genuine understanding (or semantics).
Or, more briefly, syntax is not semantics.