New cyber software can verify how much knowledge AI really knows — ScienceDaily

April 7, 2023
in AI


With growing interest in generative artificial intelligence (AI) systems worldwide, researchers at the University of Surrey have created software that can verify how much information an AI has harvested from an organisation’s digital database.

Surrey’s verification software can be used as part of a company’s online security protocol, helping an organisation understand whether an AI has learned too much or even accessed sensitive data.

The software can also identify whether an AI has found, and is capable of exploiting, flaws in software code. In an online gaming context, for example, it could detect whether an AI has learned to always win at online poker by exploiting a coding fault.
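
Formal verification of knowledge typically models what an agent knows as the set of possible worlds its observations cannot rule out. As a loose illustration of that idea only, and not of the Surrey tool itself, the short Python sketch below (all names hypothetical) flags an interaction as privacy-breaking once an observer’s information narrows the candidate secrets down to exactly one.

    from typing import Callable

    # Hypothetical candidate values for a sensitive database field.
    SENSITIVE_VALUES = ["A", "B", "C"]

    def worlds_consistent_with(observation: Callable[[str], bool]) -> list:
        # The possible worlds (candidate values) that the AI's
        # observations have not ruled out.
        return [v for v in SENSITIVE_VALUES if observation(v)]

    def knows_secret(observation: Callable[[str], bool]) -> bool:
        # In possible-worlds terms, the observer "knows" the secret
        # once exactly one world remains consistent with what it saw.
        return len(worlds_consistent_with(observation)) == 1

    # Example: query responses leak that the value is neither "A" nor "B".
    leaked = lambda v: v not in ("A", "B")
    print(knows_secret(leaked))  # True -> this interaction revealed too much

The same yes/no question, asked of a formal model of the AI and its interactions rather than a three-element toy, is the kind of check a verification tool can automate.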

Dr Solofomampionona Fortunat Rajaona, a Research Fellow in formal verification of privacy at the University of Surrey and lead author of the paper, said:

“In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem which has taken us years to find a working solution for.

“Our verification software can deduce how much an AI can learn from its interactions, whether it has enough knowledge to cooperate successfully, and whether it knows too much, to the point of breaking privacy. Through the ability to verify what an AI has learned, we can give organisations the confidence to safely unleash the power of AI in secure settings.”

The study about Surrey’s software won the best paper award at the 25th International Symposium on Formal Methods.

Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said:

“Over the past few months there has been a huge surge of public and industry interest in generative AI models, fuelled by advances in large language models such as ChatGPT. Creating tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of the datasets used in training.”

Further information: https://openresearch.surrey.ac.uk/esploro/outputs/99723165702346




Tags: Telecommunications, Engineering, Robotics Research, Software, Computers and Internet, Internet, Privacy Issues, Scientific Conduct, Surveillance