An ‘introspective’ AI finds diversity improves performance

September 1, 2023, in AI


An artificial intelligence with the ability to look inward and fine-tune its own neural network performs better when it chooses diversity over homogeneity, a new study finds. The resulting diverse neural networks were particularly effective at solving complex tasks.

“We created a test system with a non-human intelligence, an artificial intelligence (AI), to see if the AI would choose diversity over the lack of diversity and if its choice would improve the performance of the AI,” says William Ditto, professor of physics at North Carolina State University, director of NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) and co-corresponding author of the work. “The key was giving the AI the ability to look inward and learn how it learns.”

Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks create similarly strong connections by adjusting numerical weights and biases during training sessions. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether the photo is of a dog, seeing how far off it is and then adjusting its weights and biases until they are closer to reality.
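
That training loop is easy to see in miniature. The sketch below is an illustrative toy, not the study’s code: it builds a single artificial neuron in NumPy and trains it on made-up two-dimensional data by guessing, measuring how far off each guess is, and nudging its weight and bias values accordingly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "photos": 200 points in 2D, labeled 1 when the second
# coordinate is larger than the first.
X = rng.normal(size=(200, 2))
y = (X[:, 1] > X[:, 0]).astype(float)

# One artificial neuron: a weight per input plus a bias.
w = rng.normal(size=2)
b = 0.0
lr = 0.5  # learning rate: how big each adjustment is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    p = sigmoid(X @ w + b)           # forward pass: guess for every example
    err = p - y                      # how far off each guess is
    w -= lr * (X.T @ err) / len(y)   # nudge the weights toward better guesses
    b -= lr * err.mean()             # and the bias

print("training accuracy:", float(((p > 0.5) == y).mean()))
```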

Conventional AI uses neural networks to solve problems, but these networks are typically composed of large numbers of identical artificial neurons. The number and strength of the connections between those identical neurons may change as the network learns, but once the network is optimized, those static neurons are the network.

Ditto’s team, on the other hand, gave its AI the ability to choose the number and shape of its neurons and the strength of the connections between them, so that it creates sub-networks of different neuron types and connection strengths within the network as it learns.

“Our real brains have more than one type of neuron,” Ditto says. “So we gave our AI the ability to look inward and decide whether it needed to modify the composition of its neural network. Essentially, we gave it the control knob for its own brain. So it can solve the problem, look at the result, and change the type and mixture of artificial neurons until it finds the most advantageous one. It’s meta-learning for AI.

“Our AI could also decide between diverse or homogeneous neurons,” Ditto says. “And we found that in every instance the AI chose diversity as a way to strengthen its performance.”
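
As a loose, hypothetical illustration of that “control knob” (this is not the published algorithm), the sketch below tries several compositions for a small hidden layer (all-tanh, all-ReLU, all-sine, and a few random mixtures of the three), fits a simple readout for each, and keeps whichever mixture of neuron types produces the lowest error on a toy curve-fitting task. The neuron types, the task, and the selection loop are all assumptions made for the example.

```python
# Loose sketch of a "choose your own neurons" loop (illustrative only).
# Hidden weights are fixed at random and only a linear readout is fit,
# so each candidate composition can be scored quickly.
import numpy as np

rng = np.random.default_rng(1)

# Toy task: fit y = sin(3x) + 0.5x on [-2, 2].
x = np.linspace(-2, 2, 400).reshape(-1, 1)
y = np.sin(3 * x) + 0.5 * x

H = 32                          # hidden neurons
W = rng.normal(size=(1, H))     # fixed random input weights
b = rng.normal(size=H)          # fixed random biases

# A palette of neuron "types" (different activation functions) to mix.
neuron_types = {
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(z, 0.0),
    "sine": np.sin,
}

def fit_and_score(composition):
    """Assign a type to each hidden neuron, fit a readout, return the error."""
    Z = x @ W + b
    hidden = np.column_stack(
        [neuron_types[t](Z[:, i]) for i, t in enumerate(composition)]
    )
    design = np.column_stack([hidden, np.ones(len(x))])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return float(np.mean((design @ beta - y) ** 2))

# Candidates: three homogeneous layers plus a few random mixtures.
candidates = [[t] * H for t in neuron_types]
candidates += [list(rng.choice(list(neuron_types), size=H)) for _ in range(5)]

best_error, best_label = float("inf"), None
for composition in candidates:
    error = fit_and_score(composition)
    label = composition[0] if len(set(composition)) == 1 else "mixed"
    print(f"{label:>5} neurons -> error {error:.4f}")
    if error < best_error:
        best_error, best_label = error, label

print("composition the loop keeps:", best_label)
```

The try-and-compare loop above is only meant to convey the idea of a network inspecting its own results and adjusting its mix of neurons; the study’s actual meta-learning procedure is more sophisticated.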

The team tested the AI’s accuracy by asking it to perform a standard number-classification exercise, and saw that its accuracy increased as the number of neurons and the diversity of neuron types increased. A standard, homogeneous AI could identify the numbers with 57% accuracy, while the meta-learning, diverse AI reached 70% accuracy.

According to Ditto, the diversity-based AI is up to 10 times more accurate than conventional AI in solving more complicated problems, such as predicting a pendulum’s swing or the motion of galaxies.

“We have shown that if you give an AI the ability to look inward and learn how it learns it will change its internal structure — the structure of its artificial neurons — to embrace diversity and improve its ability to learn and solve problems efficiently and more accurately,” Ditto says. “Indeed, we also observed that as the problems become more complex and chaotic the performance improves even more dramatically over an AI that does not embrace diversity.”

The research appears in Scientific Reports, and was supported by the Office of Naval Research (under grant N00014-16-1-3066) and by United Therapeutics. John Lindner, emeritus professor of physics at the College of Wooster and visiting professor at NAIL, is co-corresponding author. Former NC State graduate student Anshul Choudhary is first author. NC State graduate student Anil Radhakrishnan and Sudeshna Sinha, professor of physics at the Indian Institute of Science Education and Research Mohali, also contributed to the work.



Source link

Tags: Engineering; Telecommunications; Robotics Research; Sports Science; Artificial Intelligence; Neural Interfaces; Math Puzzles; Computers and Internet