Linguistics and LLMs: Understanding Language

With the onslaught of LLMs and advances in AI comes the question: do they actually understand?

11th April 2024

4 min read

Kai Cockrell

Kai Cockrell is a student currently studying in London

Caption: Noam Chomsky, the father of modern linguistics




When one thinks of large language models (LLMs), one of the most talked-about advancements in the field of machine learning, it is easy to assume that linguistics and linguistics research must have played a crucial role in their creation and development. Indeed, there is an entire field of linguistics called computational linguistics that has boomed in popularity in recent times and is at the forefront of language model research. However, the techniques and engineering used in computational linguistics often do not align with other, more theoretical and traditional, areas of the linguistic landscape.

The Chomsky Debate

The 'grandfather of modern linguistics', Noam Chomsky, has openly stated that LLMs are 'not a contribution to science', that they 'have achieved nothing in the domain of linguistics', and that they are instead just a useful tool. One of Chomsky's seminal works in the field of linguistics is the theory of Universal Grammar (UG), which argues that aspects of grammatical structures and rules are not only shared between all languages but are innate to the human species. What is being referred to here is not the words or their arrangement, but rather 'deep structure', or universal principles, which allow humans, especially children, to learn and understand the complexities of language easily and quickly.

However, the existence of LLMs seems to challenge this idea, as they are able to construct meaningful sentences without the 'innate acquisitional mechanism' of the human brain, relying instead on neural networks. Chomsky himself rejects the idea that LLMs exhibit a form of language learning, questioning whether they understand meaning and whether simply guessing the next statistically probable word can count as language learning at all. Other scholars, such as Steven Piantadosi, instead argue that LLMs do demonstrate language learning in its entirety and reject Chomsky's ideas of biological constraints and human essentialism.
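To make the phrase 'guessing the next statistically probable word' concrete, here is a minimal sketch of next-word prediction using simple bigram counts in Python. This is a deliberately toy illustration of the statistical principle; real LLMs use neural networks with billions of parameters rather than raw frequency tables, but the underlying task of predicting a probable continuation is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs are trained on corpora spanning much of the internet.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def next_word_probabilities(word):
    """Return candidate next words ranked by estimated probability."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return [(candidate, count / total) for candidate, count in counts.most_common()]

print(next_word_probabilities("the"))
# [('cat', 0.5), ('mat', 0.25), ('sofa', 0.25)]
```

Whether producing the most probable continuation in this statistical sense amounts to 'learning language' is precisely what the Chomsky–Piantadosi disagreement turns on.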

"Understanding" language and meaning

What does it truly mean to 'understand' language? This is a topic that is hotly debated in the artificial intelligence (AI) research community and the field of linguistics, and it is not as simple as you might think. It is only recently that a large rift has arisen in the field of AI research over whether AI systems can comprehend and reason about what they are doing; before then, it was broadly agreed that while they appear to be intelligent and perform tasks well, they do not understand what they are actually doing. For example, facial recognition software does not actually know what a face is, nor can it comprehend what a face is used for or how humans can interpret meaning from acts such as expression.

However, LLMs are not facial recognition software. The neural networks they use have billions to trillions of trained parameters and are often trained on language corpora as large as most of the English-language internet. Even so, they have shown evidence that they do not perceive meaning or demonstrate "understanding", at least in the way a human would. There have been heaps of prompts that an LLM will either interpret 'wrong' or respond to with something nonsensical, and there have also been problems with hallucination (an LLM essentially 'lying', such as inventing a reference that does not exist), along with a whole host of other issues. But these are slowly being improved with time and scale.

There is also the question of whether, if an LLM does not perceive meaning like a human, it can be said to understand and learn at all. Some researchers, such as Terrence Sejnowski, ask: "Some aspects of their behaviour appear to be intelligent, but if not human intelligence, what is the nature of their intelligence?" Human understanding is based on concepts and causality, whereas LLMs "understand" through statistical models, predictions and generalizations. You could argue that we instead need to redefine what we call "understanding" and "learning".

However, while this new intelligence appears to produce output with fantastic formal linguistic competence, it lacks much of the functional competence of traditional human language. That is to say, while on all accounts it appears as sophisticated as a person speaking, it lacks the ability to apply that language in 'real life' scenarios and lacks adaptability. On the other hand, there are some tasks, and probably more in the future, where the LLM form of understanding and intelligence may be better suited, such as tasks that require drawing on extremely large amounts of pre-existing knowledge. The failure of LLMs to show "human understanding" does not mean there is no intelligence in their process.