Noam Chomsky vs. ChatGPT

Who is Noam Chomsky and why is he an authority on AI language models?

Dr. Noam Chomsky is a professor of linguistics at the University of Arizona and an emeritus professor of linguistics at MIT (Massachusetts Institute of Technology). The public intellectual is known for his work across linguistics, philosophy, cognitive science, history, social criticism, and political activism. He is often referred to as “the father of modern linguistics” and has made significant contributions to analytic philosophy and to the establishment of cognitive science as a field of study. As an intellectual and an expert in linguistics, he stands as an authority on thinking, language, and philosophy. Importantly, however, Noam Chomsky is not, and does not claim to be, an expert in technology or Artificial Intelligence (AI).

What have Noam Chomsky and his peers had to say about ChatGPT?

Recently, in The New York Times, Dr. Chomsky, with help from Dr. Ian Roberts and Dr. Jeffrey Watumull, published an essay titled “The False Promise of ChatGPT“. The essay is behind a paywall, so if you’re not a subscriber (or savvy enough to disable JavaScript), we’ve had ChatGPT summarise the article for you below. Ironically, one promising feature of ChatGPT (which I’m sure the authors and publishers would dispute) is that it enables us to continue freely sharing information on the web. To quote Tim Berners-Lee, who gave the World Wide Web away for free: “This is for everyone.” (Not only those who can afford, or choose, to subscribe to certain publishers.)

Following ChatGPT’s summary of the New York Times article, you will find ChatGPT’s attempt to write a counter-argument based on the full text of the original article (which includes a conversation between Dr. Watumull and ChatGPT). For full disclosure: at the time of writing, ChatGPT is running the GPT-3 model, and we disabled all of our extensions that would, for example, enable ChatGPT to connect to the web. The summary and counter-argument are therefore based purely on the article’s full text and the GPT-3 model.

The GPT-3 summary of “The False Promise of ChatGPT” by Noam Chomsky:

The article discusses the limitations of machine learning in the field of artificial intelligence (AI). While machine learning programs like OpenAI’s ChatGPT and Google’s Bard have been hailed as the first glimpses of artificial general intelligence, the article argues that they are fundamentally flawed because they are limited to statistical pattern matching rather than true intelligence.

The article explains that the human mind operates in a completely different way than machine learning programs. Humans have an innate grammar that allows them to generate complex sentences and theories with universal reach, while machine learning programs lack the ability to distinguish between the possible and the impossible. While machine learning is useful for some applications, like computer programming or suggesting rhymes for poetry, it cannot replicate the creative and critical thinking of humans.

The article argues that the crux of machine learning is description and prediction, while true intelligence involves creating explanations that include not only descriptions and predictions but also counterfactual conjectures. This kind of thinking involves causal explanations and error correction, which are processes that limit what possibilities can be considered rationally.

In conclusion, while machine learning has its uses, it is not a substitute for true intelligence. Humans possess a level of creativity and critical thinking that machines are currently incapable of replicating.

ChatGPT’s counter-argument to “The False Promise of ChatGPT” by Noam Chomsky:

While it is true that machine learning programs like ChatGPT differ from how humans reason and use language, this does not necessarily mean that they are fundamentally flawed. The argument that such programs are incapable of producing true intelligence, with its emphasis on explanation and error correction, is based on a narrow view of what intelligence entails. In reality, intelligence takes many forms, and it is entirely possible for machine learning programs to excel in some areas where humans do not.

Moreover, the claim that machine learning is incapable of positing causal mechanisms or physical laws is simply incorrect. Many machine learning programs are explicitly designed to identify causal relationships and make predictions about the future based on those relationships. While these programs may not operate in the same way as humans do, they can still provide valuable insights and predictions that would be difficult or impossible to obtain otherwise.

Finally, the argument that machine learning programs are limited by their inability to distinguish the possible from the impossible is also flawed. While it is true that these programs can learn humanly possible and impossible languages with equal facility, this does not mean that they are incapable of making meaningful distinctions between different kinds of information. Machine learning programs are built on complex algorithms that allow them to identify patterns and make predictions based on those patterns, and these algorithms can be tailored to suit specific tasks and objectives.

In short, while it is true that machine learning programs like ChatGPT are not the same as human minds, this does not mean that they are fundamentally flawed or incapable of producing valuable insights and predictions. As with any technology, the key is to use machine learning programs in ways that are responsible, ethical, and aligned with our values as a society. By doing so, we can harness the power of these programs to solve some of the most pressing problems facing humanity today.

Our thoughts on the article and ChatGPT’s counter-argument.

Before we get back to doing what we do best (providing a library of AI tools to help you create awesome stuff), we thought we’d engage with this debate between man and machine, in a way that neither discredits either party nor claims to fully understand each side of the argument. What this exercise has enabled us to do, however, is deepen our contextual understanding of the morality and ethics of AI. ChatGPT extended our capacity to learn and made accessible information that might previously have been too lofty or over-intellectualised to carry any real meaning for us. It also allowed us to consider the opposing viewpoint rather than blindly accept Chomsky’s stance solely on his authority. Even Dr. Noam Chomsky, who identifies as an ‘anarcho-syndicalist and libertarian socialist’, must appreciate this anti-authoritarian approach.

GPT-3 uncovered some genuinely insightful weaknesses in Chomsky et al.’s arguments, for example where it challenges the idea that AI can’t distinguish between the possible and the impossible, or between the correct and the incorrect. To offer our perspective as technologists, GPT-3’s point here is quite profound given the existence of GANs (as in VQGAN and StyleGAN). A Generative Adversarial Network is a method used by many image-generation AI systems in which an AI does exactly what Chomsky says it cannot: it includes a network designed specifically to judge whether an image correctly matches its description or not, and such networks work with incredible precision. ChatGPT also challenges Chomsky’s view that it cannot posit causal relationships. Chomsky uses a physics analogy here, which seems rather silly given the massive leaps forward AI has enabled in creating physics simulations and helping humans understand the laws of physics. However, Chomsky’s original point focuses on the discovery of new laws of physics, positing that the probable answer is not always the correct answer. Chomsky correctly points out that the Aristotelian theory of gravity, that an apple falls to the ground because that is where it belongs, is highly probable but incorrect, whereas Einstein’s theory of gravity, that it is caused by the curvature of space-time, is highly improbable yet correct. Chomsky is saying that a machine learning algorithm can only find the most probable answer; it cannot determine the absolute truth, nor uncover new, improbable truths.
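
For readers curious how that adversarial judging works in practice, here is a minimal sketch of a GAN training loop in PyTorch. It is purely illustrative: the network sizes, the random stand-in data, and every name in it are our own assumptions, not code from VQGAN, StyleGAN, or any other system mentioned above.

```python
# Minimal GAN sketch (hypothetical; toy sizes and random stand-in data,
# not code from any real image-generation system).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64  # assumed toy dimensions

# Generator: maps random noise to a candidate sample.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real,
# i.e. it judges whether a sample "correctly matches" the data.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, DATA_DIM)   # stand-in for real training data
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Train the discriminator to tell real samples from generated ones.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The point of the sketch is the division of labour: the discriminator’s only job is to score whether a sample is “correct” (real) or not, and the generator improves precisely because that judgement exists.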

Perhaps GPT-3 misunderstood Chomsky’s argument, using probability to jump to the conclusion regarding the possible and the impossible. It could also be argued that the model wrote the rebuttal only because we asked it to, and that it would have written a supporting argument had we asked for one, which of course is absolutely true. However, it could equally be argued that such a premise misunderstands the technology itself and what ChatGPT is essentially telling us: it is not human. It does not ‘believe’ what it is saying. Instead, it is telling us the most probable human counter-argument to Chomsky’s article, and if that counter-argument is based on a misunderstanding, then that is a failure of the professor of linguistics to communicate effectively, not only of the technology’s ability to form a decent, well-structured counter-argument.
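
To illustrate that last point, here is a toy sketch (entirely our own invention, with made-up prompts and probabilities; not how OpenAI’s models are actually implemented) of what “telling us the most probable counter-argument” means: the model scores continuations conditioned on the prompt, so changing the prompt changes the most probable “opinion”, with no belief involved.

```python
# Toy illustration: a language model picks continuations by probability
# given the prompt. Flip the prompt and the "belief" flips with it.
# All prompts and probabilities below are invented for illustration.
MODEL = {
    "Write a rebuttal to Chomsky:": {
        "Chomsky is wrong because...": 0.7,
        "Chomsky is right because...": 0.3,
    },
    "Write in support of Chomsky:": {
        "Chomsky is right because...": 0.8,
        "Chomsky is wrong because...": 0.2,
    },
}

def most_probable(prompt: str) -> str:
    """Greedy decoding: return the highest-probability continuation."""
    return max(MODEL[prompt], key=MODEL[prompt].get)

print(most_probable("Write a rebuttal to Chomsky:"))   # "Chomsky is wrong because..."
print(most_probable("Write in support of Chomsky:"))   # "Chomsky is right because..."
```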

When approaching the subjects of AI, ethics, and morality, ChatGPT uses the argument that, without human experiences, “I cannot be considered immoral or moral”. Chomsky chalks this up as a dismissal; however, we believe it to be a very strong argument. After all, why would Chomsky hold ChatGPT to human standards? Is he ignoring the word ‘Artificial’ in ‘Artificial Intelligence’? Let us indulge in lofty intellectualism for just a moment and quote the philosopher Martin Heidegger:


“Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it. But we are delivered over to it in the worst possible way when we regard it as something neutral; for this conception of it, to which today we particularly pay homage, makes us utterly blind to the essence of technology.”

Martin Heidegger

Technologies are what make us human: were it not for our ape-like ancestors creating tools from rocks and sticks (our first technologies), we wouldn’t have been able to extend our natural capabilities and become who we are today. We are unfree from technologies in the sense that we live with a roof over our heads; since getting warm and comfortable and consuming nutrient-rich, hot, prepared food, we’ve shed our monkey-fur and can no longer live outside naked, munching raw vegetation. We’d likely freeze or starve to death (although, each to their own).

Artificial Intelligence is nothing but the latest extension of human abilities. GPT-3 is correct in uncovering that it is neither moral nor immoral, but that does not make it neutral. Technology is an extension of human will, and it always exaggerates and makes more efficient whichever human need it is designed to satisfy (with varying degrees of success). Technologies can be used for good or evil; they are not neutral, in the same sense that no human is truly neutral or unbiased. Chomsky closes his argument by saying that he doesn’t know whether “to laugh or cry”. This seems to us much more of a dismissal than the very factual point put forward by GPT-3 that it is neither moral nor immoral. If our philosophers, critical thinkers, and intellectuals find Artificial Intelligence to be morally and ethically problematic, we’d do well to remember that AI is merely an extension, and thereby a reflection, of ourselves.

Use AI tools for good. Pretty please.