Noam Chomsky on ChatGPT: “Basically High-Tech Plagiarism.”
It may have some value for some things, but it is not obvious what. – Noam Chomsky
In an online conversation, educator and EduKitchen.nl editor Thijmen Sprakel and Professor of Linguistics Noam Chomsky explore the impact of AI agents such as ChatGPT on the education system.
Chomsky maintains a critical stance toward its use within science. For starters, ChatGPT fails to produce critical interpretations of complex phenomena; it won't advance education and science, but rather repeat what humans already know; and it fits unnervingly well into an American education system that Chomsky considers inadequate, one that relies on what he calls neoliberal teaching-to-the-test practices.
The net benefit of ChatGPT is at best naught, and at worst a threat to critical thinking within education – which is the very foundation of free, democratic society.
Chomsky, in response to an inquiry by Thijmen Sprakel about the impact of ChatGPT on education:
Well, I don’t think ChatGPT has anything to do with education – except undermining it.
ChatGPT is basically high-tech plagiarism. It's a system that accesses an astronomical amount of data, finds regularities and then strings them together. The result then looks more or less like something somebody could have written on this topic. Basically plagiarism – it just happens to be high-tech.
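The "finds regularities and strings them together" description can be illustrated with a deliberately minimal sketch: a toy bigram model that counts which word follows which in a tiny corpus and then generates text by sampling from those observed patterns. This is of course a vast simplification (ChatGPT uses neural networks trained on enormous corpora, not bigram counts), but the underlying principle of reproducing statistical regularities rather than explaining anything is the same:

```python
import random
from collections import defaultdict

# Tiny corpus; the "astronomical amount of data" shrunk to one line.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Find regularities: record every word that was observed to follow
# each word in the corpus.
follows = defaultdict(list)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1].append(w2)

def generate(start, length=8, seed=0):
    """String observed patterns together by repeatedly sampling a
    word that followed the previous one somewhere in the corpus."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

Every word the sketch emits was copied from its training data; it can produce plausible-looking sequences ("the cat sat on the rug") without any notion of what a cat or a rug is, which is the sense in which Chomsky calls the output plagiarism rather than understanding.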
Well, plagiarism can be a nuisance. In university departments where essays are required, for years there have been programs which help professors detect plagiarised essays. Now, it will be more difficult – because it’s easier to plagiarise. But that’s about the only contribution to education that I can think of – that it makes it harder to detect plagiarism.
These systems have absolutely no value with regard to understanding anything about language or cognition.
They tell you nothing about that, any more than plagiarism does. In fact, the more the systems improve, the greater their flaws. And that for a simple reason: these are systems which perform just as well for actual languages as they do for impossible languages.
So it’s as if somebody produced a new version of the periodic table of the elements: This periodic table of the elements then includes all the elements – all the possible elements, and all the impossible elements, and makes no distinction between them. This is of absolutely no value to science or understanding. And that’s basically what these systems are.
They may have some value for some things, but it is not obvious what.
Professor of Linguistics Noam Chomsky and Editor of EduKitchen.nl Thijmen Sprakel in conversation
Too stubborn to talk to…
In a 2023 opinion piece in the New York Times, Chomsky gives a concrete example of what he sees as a lack of criticality in AI systems such as ChatGPT. It fundamentally boils down to the grounding of human knowledge in explanatory frameworks, such as English grammar, and their connection to the way humans operate in the world.
Chomsky points out that the human way to approach data relies heavily on creating new explanations for phenomena, and not just extrapolating from hundreds of terabytes of already available data like ChatGPT does.
Chomsky argues that "the human mind seeks not to infer brute correlations among data points but to create explanations", and finally points out the obvious fundamental difference between machine and man: ChatGPT is reliant on large data collections from which it can extrapolate – a human, on the contrary, can theoretically deduce meaningful conclusions from even small amounts of data, by employing the senses and the mental faculties together in pursuit of meaning.
[…] the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with).
Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.”
The correct explanations of language are complicated and cannot be learned just by marinating in big data. (Chomsky, Noam, nytimes.com)
❧
Sources
Sprakel, Thijmen and Chomsky, Noam. Talk: “Chomsky on ChatGPT, Education, Russia and the unvaccinated”. 2023. https://www.youtube.com/watch?v=IgxzcOugvEI&t=2s
Chomsky, Noam; Roberts, Ian; Watumull, Jeffrey. The False Promise of ChatGPT. Opinion in The New York Times. 2023. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
Images
Laptop, ChatGPT, licensed via Unsplash
Still, youtube.com