Artificial Intelligence shows how language is processed by the brain

In recent years, artificial intelligence models of language have become remarkably successful at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting applications predict the next word you are going to type.
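
To make the idea concrete, here is a minimal sketch of next-word prediction using the small, openly available GPT-2 checkpoint and the Hugging Face transformers library. GPT-2 is used only as a familiar stand-in; the specific models and prompts in the study may differ.

```python
# Minimal sketch of next-word prediction with a pretrained causal language model.
# GPT-2 here is an illustrative stand-in, not one of the study's specific models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits              # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]                # scores for the word after the prompt
top5 = torch.topk(next_token_logits, k=5).indices
print([tokenizer.decode([int(t)]) for t in top5])  # the model's five most likely next words
```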

The most recent generation of predictive language models appears to learn something about the fundamental meaning of language. These models can not only predict the next word, but also perform tasks that seem to require some degree of genuine understanding, such as answering questions, summarizing documents, and writing stories.

Such models were designed to maximize performance on the specific task of text prediction, not to emulate how the human brain performs this task or understands language. However, new research from MIT neuroscientists reveals that the underlying function of these models resembles that of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, suggesting that the human brain may use next-word prediction to drive language processing.

The latest high-performing next-word prediction models belong to a class of models called deep neural networks. These networks consist of computational nodes connected with varying strength, organized into layers that pass information to one another in prescribed ways.
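
The following toy sketch illustrates those building blocks (nodes, weighted connections, and layers passing information forward). It is a generic two-layer network with made-up sizes, not any of the models examined in the study.

```python
# Toy two-layer network: nodes, weighted connections, and layers that pass
# information forward in a fixed order. Sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer: weighted sum over incoming connections, then a nonlinearity (ReLU)."""
    return np.maximum(0.0, x @ weights + bias)

x = rng.normal(size=(1, 8))                       # input features
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # connection strengths, layer 1
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)    # connection strengths, layer 2

hidden = layer(x, w1, b1)                         # activity of the hidden-layer nodes
output = layer(hidden, w2, b2)                    # activity of the output-layer nodes
print(hidden.shape, output.shape)                 # (1, 16) (1, 4)
```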

Over the past decade, scientists have used deep neural networks to build vision models that can recognize objects as well as the monkey brain does. MIT research has also shown that the underlying function of visual object recognition models matches the organization of the monkey visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT researchers used a similar strategy to compare language-processing centers in the human brain with language-processing models. They examined 43 different language models, several of which were optimized for next-word prediction. These include GPT-3 (Generative Pre-trained Transformer 3), a model that, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.
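
The "fill in the blank" objective contrasts with the next-word prediction shown earlier. As a hedged illustration, the sketch below uses BERT, a well-known open masked language model; the fill-in-the-blank models included in the study may be different.

```python
# Sketch of a fill-in-the-blank (masked language modeling) objective.
# BERT is used only as a familiar open example of this model family.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("The researchers recorded brain [MASK] while people read sentences."):
    print(candidate["token_str"], round(candidate["score"], 3))
```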

As each model was presented with a string of words, the researchers measured the activity of the nodes in its network. They then compared these patterns to human brain activity recorded while people performed three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance imaging (fMRI) data and intracranial electrocorticographic measurements taken from people undergoing brain surgery for epilepsy.
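
One common way to make this kind of model-to-brain comparison is to fit a linear mapping from a model's internal unit activity to the recorded brain responses and score how well it generalizes to held-out stimuli. The sketch below shows that logic with random placeholder arrays standing in for the real model activations and fMRI/ECoG recordings; it is an assumption-laden illustration of the general approach, not the study's exact analysis pipeline.

```python
# Sketch: map model unit activity onto recorded brain responses with ridge
# regression, then score the fit on held-out stimuli. All arrays are random
# placeholders, not data from the study.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sentences, n_units, n_sites = 200, 768, 50

model_activity = rng.normal(size=(n_sentences, n_units))   # one row per stimulus
brain_activity = rng.normal(size=(n_sentences, n_sites))   # recorded responses per site

X_tr, X_te, y_tr, y_te = train_test_split(
    model_activity, brain_activity, test_size=0.2, random_state=0)

mapping = Ridge(alpha=1.0).fit(X_tr, y_tr)
predicted = mapping.predict(X_te)

# Correlate predicted and observed responses per recording site as a similarity score.
scores = [np.corrcoef(predicted[:, s], y_te[:, s])[0, 1] for s in range(n_sites)]
print("mean predictivity:", float(np.mean(scores)))
```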

They found that the best-performing next-word prediction models showed activity patterns that most closely resembled those seen in the human brain. Activity in those same models was also strongly correlated with measures of human behavior, such as how quickly people were able to read the text.
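
The behavioral comparison can be pictured as follows: a word that a model finds surprising (assigns low probability, i.e. high surprisal) should take readers longer. The reading times and surprisal values below are simulated placeholders, not data from the study; the sketch only shows how such a correlation could be computed.

```python
# Sketch: correlate per-word model surprisal with reading times.
# Both arrays are simulated placeholders, not data from the study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
surprisal = rng.gamma(shape=2.0, scale=1.5, size=500)        # -log p(word | context)
reading_ms = 200 + 30 * surprisal + rng.normal(0, 25, 500)   # simulated reading times (ms)

rho, p = spearmanr(surprisal, reading_ms)
print(f"rank correlation: {rho:.2f} (p = {p:.1e})")
```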

The researchers found that the models that best predict brain responses also tend to best predict human behavioral responses, in the form of reading times, and that both are in turn explained by a model's performance on next-word prediction. This triangle is what ties everything together.

The researchers also plan to connect these high-performing language models with computer models previously developed by Tenenbaum's team that can handle other types of tasks, such as constructing perceptual representations of the physical world.
