WHAT ARE THE PARALLELS BETWEEN CHATGPT AND HUMAN INTELLIGENCE?

Summary

- Evolution of AI Technology: ChatGPT and Its Capabilities

- ChatGPT: An Advanced and Accessible Predictive Model

- Parallels between Human Brain Functions and Artificial Neural Networks

- Risks and Bias in Computer Machine Learning

- Future Implications of the Rapid Development of Artificial Intelligence


Computers were long considered efficient but unintelligent work machines; now we may have to rethink that.


Predictive learning systems are not a recent invention; they have been studied for decades and applied in many kinds of electronic devices.

But the way ChatGPT is trained, and the sheer amount of data it can draw on and combine, bring it closer to the human brain in how it learns information and uses it to perform certain tasks.

Certainly ChatGPT has no soul or character; it does not dream and it does not love. But it simulates human behavior based on what it has learned from the network, for better or for worse.


What is ChatGPT and how does it compare to the human brain?

As Corrie Pikul tells us, ChatGPT is a new technology developed by OpenAI, so extraordinarily adept at imitating human communication that it will soon conquer the world and take all the jobs in it. Or at least that's what the headlines would lead the world to believe.

In a conversation organized by the Carney Institute for Brain Science at Brown University, two Brown scholars from different fields of study discussed the parallels between artificial intelligence and human intelligence.

The discussion of the neuroscience behind ChatGPT gave attendees a peek under the hood of today's machine learning models.

Ellie Pavlick is an assistant professor of computer science and a researcher at Google AI who studies how language works and how to make computers understand language the way humans do.

Thomas Serre is a professor of cognitive science, linguistics, psychology and computer science who studies the neural computations supporting visual perception, focusing on the intersection of biological vision and computer vision.

Joining them as moderators were Carney Institute Director Diane Lipscombe and Associate Director Christopher Moore.

Pavlick and Serre offered complementary explanations of how ChatGPT works in relation to the human brain, and of what that reveals about what the technology can and cannot do. Despite all the chatter about the new technology, the model is not that complicated and it is not even new, Pavlick said.

At its most basic level, she explained, ChatGPT is a machine learning model designed to predict the next word in a sentence, then the next word after that, and so on.
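To make the idea concrete, here is a minimal, hypothetical sketch of next-word prediction in Python. It uses a simple bigram frequency table rather than anything resembling ChatGPT's actual architecture; the toy corpus and function names are invented for illustration.

```python
# A toy next-word predictor: count which word most often follows each word.
# This only illustrates the *idea* of predicting the next word; ChatGPT
# itself uses a very large neural network, not a frequency table.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))   # -> 'on'
print(predict_next("the"))   # -> whichever follower was seen most often
```

A real model replaces the frequency table with billions of learned parameters, but the training objective is the same: guess the next word.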

This type of predictive learning model has been around for decades, said Pavlick, who specializes in natural language processing.

Computer scientists have long sought to build models that exhibit this behavior and can speak to humans in natural language. To do this, a model needs access to a database and to traditional computing components that allow it to "reason" over complex ideas.

The novelty is in how ChatGPT is trained, or developed. It has access to unfathomably large amounts of data, as Pavlick put it, "all the sentences on the Internet."

"ChatGPT, per se, is not the game changer," Pavlick said. “The tipping point was that over the last five years there's been this uptick in building models that are basically the same, but they've gotten bigger. And what's happening is that as they get bigger and bigger, they perform better."

Also new is the way ChatGPT and its competitors are available for free public use.

To interact with a system like ChatGPT even a year ago, Pavlick said, a person would have needed access to a system like Brown's Compute Grid, a specialized tool available only to students, faculty, and staff with certain permissions, and would also have needed a fair amount of technical experience.

But now anyone, with any technological ability, can play with ChatGPT's sleek and streamlined interface.


Does ChatGPT really think like a human?

Pavlick said that a computer system trained on such a large data set gives the impression of being able to generate articles, stories, poems, dialogues, plays and more in a very realistic way. It can generate fake news reports, fake scientific breakthroughs, and produce all sorts of surprisingly effective results.

The effectiveness of these results has led many people to believe that machine learning models have the ability to think like humans. But do they?

ChatGPT is a type of artificial neural network, explained Serre, whose background is in neuroscience, computer science and engineering. This means that the hardware and programming are based on an interconnected group of nodes inspired by a simplification of neurons in a brain.
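As a rough illustration of what "an interconnected group of nodes" means in code, here is a minimal sketch of a few artificial neurons wired together. The weights and inputs are arbitrary numbers chosen only for the example and have nothing to do with ChatGPT's real parameters.

```python
# One artificial "node": a weighted sum of its inputs passed through a
# non-linearity. Connecting several such nodes gives a tiny network.
import math

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

inputs = [0.5, -1.0]                              # arbitrary input signal
hidden = [neuron(inputs, [0.8, 0.2], 0.1),        # first hidden node
          neuron(inputs, [-0.4, 0.9], 0.0)]       # second hidden node
output = neuron(hidden, [1.0, -1.0], 0.2)         # output node reads the hidden nodes
print(output)
```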

Serre said that there are indeed a number of fascinating similarities in how the computer brain and the human brain learn new information and use it to perform tasks.

"There is work beginning to suggest that, at least superficially, there may be some connections between the kinds of word and phrase representations that algorithms like ChatGPT use and exploit to process linguistic information, versus what the brain appears to be doing," Serre said.

For example, he said, the backbone of ChatGPT is a type of cutting-edge artificial neural network called a transformer network.

These networks, born from the study of natural language processing, have recently come to dominate the entire field of artificial intelligence. Transformer networks have a particular mechanism that computer scientists call "self-attention," which is related to the attentional mechanisms known to take place in the human brain.
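For readers who want to see what "self-attention" amounts to numerically, below is a minimal sketch of scaled dot-product attention in Python with NumPy. It is a simplified, hypothetical illustration: real transformer networks use learned projection matrices, multiple attention heads and much larger dimensions.

```python
# Minimal sketch of scaled dot-product self-attention.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x has shape (sequence_length, embedding_dim): one row per token."""
    d = x.shape[-1]
    # In a real transformer, queries, keys and values come from learned
    # linear projections of x; here we use x directly to keep it short.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)                    # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights
    return weights @ v                               # each token becomes a weighted mix

tokens = np.random.randn(5, 8)          # 5 made-up tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)     # (5, 8): same shape, now contextualized
```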

Another similarity to the human brain is a key aspect of what has allowed the technology to become so advanced, Serre said.

In the past, he explained, training artificial neural networks to learn and use language, or to perform image recognition, required scientists to carry out tedious and time-consuming manual tasks, such as building databases and labeling object categories.

Modern large language models, such as those used in ChatGPT, are trained without the need for this explicit human oversight. And this, Serre said, appears to be related to an influential theory of the brain known as predictive coding.

This is the assumption that when a human hears someone speak, the brain is constantly making predictions and developing expectations about what will be said next.
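On the machine side, this kind of prediction-based training needs no hand-made labels because the raw text itself supplies them. The snippet below is a hypothetical illustration of how training pairs can be derived from a sentence with no human labeling; the variable names are made up for the example.

```python
# The text itself provides the "labels": each word is the prediction target
# for the words that came before it, so no manual annotation is needed.
text = "the brain is constantly making predictions".split()

training_pairs = []
for i in range(1, len(text)):
    context = text[:i]     # what the model has seen (or "heard") so far
    target = text[i]       # the next word it must learn to predict
    training_pairs.append((context, target))

for context, target in training_pairs:
    print(" ".join(context), "->", target)
```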

Although the theory was proposed decades ago, Serre said, it has not been fully tested in neuroscience; however, it is currently driving a great deal of experimental work.

"I would say that the level of attention mechanisms to the central engine of these networks that are constantly making predictions about what will be said, which appears to be, at a very crass, consistent with ideas related to neuroscience,” Serre said at the event.

There has been recent research relating the strategies used by large language models to actual brain processes, he noted: "There's still a lot we need to figure out, but there's a growing body of research in neuroscience suggesting that what these models do in computers isn't entirely disconnected from the kinds of things our brain does when we process natural language."

There may also be dangers: just as the human learning process is susceptible to bias or corruption, so are AI models. These systems learn by statistical association, Serre said, and whatever is dominant in the data set will take over and push other information out.
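A crude way to picture "learning by statistical association" is the sketch below, in which a trivial model simply adopts whatever dominates its training data. The data and labels are invented purely for illustration and do not come from any real system.

```python
# If one group makes up most of the training data, the dominant association
# also dominates the model's behaviour, crowding out the rest.
from collections import Counter

training_labels = ["group_A"] * 90 + ["group_B"] * 10   # a skewed, made-up data set

label_counts = Counter(training_labels)
default_prediction = label_counts.most_common(1)[0][0]

print(label_counts)          # Counter({'group_A': 90, 'group_B': 10})
print(default_prediction)    # 'group_A' -- the dominant pattern wins by default
```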

"This is an area of great concern for artificial intelligence", said Serre. He cited in support of this contention, how the depiction of Caucasian men on the Internet has biased some facial recognition systems to the point where they have failed to recognize faces that do not appear to be white or male.

The latest iteration of ChatGPT, Pavlick said, includes layers of reinforcement learning that act as a safeguard and help prevent the production of harmful or hateful content. But these are still a work in progress.

"Part of the challenge is that… you can't give a rule to the model, you can't just say 'never generate this and that'," he said Pavlick. “He learns by example, and then you give him lots of examples of things and you say, 'Don't do stuff like that. Do things like this.' And therefore it will always be possible to find some little trick to make him do the bad thing.


Automatic translation. We apologize for any inaccuracies. Original article in Italian.


Source: Brown University

