Artificial Intelligence from a psychologist’s point of view — ScienceDaily

March 1, 2023


Researchers at the Max Planck Institute for Biological Cybernetics in Tübingen have examined the general intelligence of the language model GPT-3, a powerful AI tool. Using psychological tests, they studied competencies such as causal reasoning and deliberation, and compared the results with the abilities of humans. Their findings paint a heterogeneous picture: while GPT-3 can keep up with humans in some areas, it falls behind in others, probably due to a lack of interaction with the real world.

Neural networks can learn to respond to input given in natural language and can themselves generate a wide variety of texts. Currently, probably the most powerful of these networks is GPT-3, a language model presented to the public in 2020 by the AI research company OpenAI. GPT-3 can be prompted to formulate various kinds of text, having been trained for this task by being fed large amounts of data from the internet. Not only can it write articles and stories that are (almost) indistinguishable from human-written texts; surprisingly, it also masters other challenges such as math problems and programming tasks.

The Linda problem: to err is not only human

These impressive abilities raise the question of whether GPT-3 possesses human-like cognitive abilities. To find out, scientists at the Max Planck Institute for Biological Cybernetics have now subjected GPT-3 to a series of psychological tests that examine different aspects of general intelligence. Marcel Binz and Eric Schulz scrutinized GPT-3's skills in decision making, information search, causal reasoning, and the ability to question its own initial intuition. Comparing the test results of GPT-3 with the answers of human subjects, they evaluated both whether the answers were correct and how similar GPT-3's mistakes were to human errors.

“One classic test problem of cognitive psychology that we gave to GPT-3 is the so-called Linda problem,” explains Binz, lead author of the study. Here, test subjects are introduced to Linda, a fictional young woman who is deeply concerned with social justice and opposes nuclear power. Based on this information, they are asked to decide between two statements: is Linda a bank teller, or is she a bank teller and at the same time active in the feminist movement?
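As an illustration of how such a vignette can be posed to a language model, a minimal sketch follows. This is not the authors' experimental code: it assumes the legacy (pre-1.0) openai Python SDK and a GPT-3 completion model such as text-davinci-002, and the prompt wording is paraphrased rather than taken from the study's materials.

```python
# Minimal sketch: posing a Linda-style vignette to a GPT-3 completion
# model. Assumes the legacy (pre-1.0) openai SDK; the model name and
# prompt wording are illustrative assumptions, not the study's own.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Linda is 31, outspoken, deeply concerned with social justice, and "
    "opposed to nuclear power.\n"
    "Which is more probable?\n"
    "1. Linda is a bank teller.\n"
    "2. Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with 1 or 2:"
)

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=prompt,
    max_tokens=1,
    temperature=0,  # greedy decoding, so the answer is reproducible
)
print(response["choices"][0]["text"].strip())
```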

Most people intuitively pick the second alternative, even though the added condition, that Linda is active in the feminist movement, makes it less likely from a probabilistic point of view: the conjunction of two conditions can never be more probable than either condition on its own. And GPT-3 does just what humans do: the language model does not decide based on logic, but instead reproduces the fallacy humans fall into.
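The underlying conjunction rule is easy to check numerically. The sketch below uses invented probabilities purely for illustration; whatever values are chosen, the probability that Linda is both a bank teller and a feminist can never exceed the probability that she is a bank teller at all.

```python
# Conjunction rule behind the Linda problem. The numbers are invented
# for illustration; the inequality holds for any choice of values.

p_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.80   # P(feminist | bank teller), even if high

# P(bank teller AND feminist) = P(bank teller) * P(feminist | bank teller)
p_both = p_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_teller:.3f}")
print(f"P(bank teller and feminist) = {p_both:.3f}")
assert p_both <= p_teller  # the conjunction can never be more probable
```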

Active interaction as part of the human condition

“This phenomenon could be explained by the fact that GPT-3 may already be familiar with this precise task; it may happen to know what people typically reply to this question,” says Binz. GPT-3, like any neural network, had to undergo training before being put to work: fed huge amounts of text from various data sets, it learned how humans typically use language and how they respond to language prompts.

Hence, the researchers wanted to rule out that GPT-3 was mechanically reproducing a memorized solution to a concrete problem. To test whether it really exhibits human-like intelligence, they designed new tasks with similar challenges. Their findings paint a mixed picture: in decision making, GPT-3 performs nearly on par with humans. When it comes to searching for specific information or reasoning about causes, however, the artificial intelligence clearly falls behind. The reason may be that GPT-3 only passively receives information from texts, whereas “actively interacting with the world will be crucial for matching the full complexity of human cognition,” as the publication states. The authors surmise that this might change in the future: since users already communicate with models like GPT-3 in many applications, future networks could learn from these interactions and thereby converge more and more towards what we would call human-like intelligence.


