CTO News Hubb
Researchers use functional near-infrared spectroscopy to monitor participant responses — ScienceDaily

March 19, 2023


The future of work is here.

As industries begin to see humans working closely with robots, there’s a need to ensure that the relationship is effective, smooth and beneficial to humans. Robot trustworthiness and humans’ willingness to trust robot behavior are vital to this working relationship. However, capturing human trust levels can be difficult due to subjectivity, a challenge researchers in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering at Texas A&M University aim to solve.

Dr. Ranjana Mehta, associate professor and director of the NeuroErgonomics Lab, said her lab’s human-autonomy trust research stemmed from a series of projects on human-robot interactions in safety-critical work domains funded by the National Science Foundation (NSF).

“While our focus so far was to understand how operator states of fatigue and stress impact how humans interact with robots, trust became an important construct to study,” Mehta said. “We found that as humans get tired, they let their guard down and become more trusting of automation than they should. However, why that is the case becomes an important question to address.”

Mehta’s latest NSF-funded work, recently published in Human Factors: The Journal of the Human Factors and Ergonomics Society, focuses on understanding the brain-behavior relationships of why and how an operator’s trusting behaviors are influenced by both human and robot factors.

Mehta also has another publication in the journal Applied Ergonomics that investigates these human and robot factors.

Using functional near-infrared spectroscopy, Mehta’s lab captured functional brain activity as operators collaborated with robots on a manufacturing task. They found that faulty robot actions decreased operators’ trust in the robots. That distrust was associated with increased activation of regions in the frontal, motor and visual cortices, indicating increased workload and heightened situational awareness. Interestingly, the same distrusting behavior was associated with a decoupling of these brain regions, which otherwise worked well together when the robot behaved reliably. Mehta said this decoupling was greater at higher robot autonomy levels, indicating that the neural signatures of trust are influenced by the dynamics of human-autonomy teaming.
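The “decoupling” described here is typically quantified as functional connectivity, often measured as the correlation between the hemodynamic time series of two brain regions. The sketch below illustrates that idea on synthetic data; the signals, 10 Hz sampling rate, and condition labels are illustrative assumptions, not the study’s actual fNIRS pipeline.

```python
import numpy as np

def connectivity(x, y):
    """Pearson correlation between two channel time series,
    a simple proxy for functional coupling between regions."""
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)          # 60 s at 10 Hz (typical fNIRS rate)
shared = np.sin(0.5 * t)             # a common slow hemodynamic drive

# "Reliable robot" condition: both regions track the shared drive (coupled)
frontal_ok = shared + 0.2 * rng.standard_normal(t.size)
motor_ok = shared + 0.2 * rng.standard_normal(t.size)

# "Faulty robot" condition: the motor channel drifts out of phase (decoupled)
frontal_bad = shared + 0.2 * rng.standard_normal(t.size)
motor_bad = np.sin(0.5 * t + np.pi / 2) + 0.2 * rng.standard_normal(t.size)

r_ok = connectivity(frontal_ok, motor_ok)
r_bad = connectivity(frontal_bad, motor_bad)
print(f"coupling (reliable robot): {r_ok:.2f}")
print(f"coupling (faulty robot):   {r_bad:.2f}")
```

In practice, fNIRS pipelines add filtering, motion-artifact correction, and conversion to oxy/deoxyhemoglobin concentrations before any connectivity estimate, but the core comparison, coupling under reliable versus faulty robot behavior, reduces to something like the contrast above.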

“What we found most interesting was that the neural signatures differed when we compared brain activation data across reliability conditions (manipulated using normal and faulty robot behavior) versus operator’s trust levels (collected via surveys) in the robot,” Mehta said. “This emphasized the importance of understanding and measuring brain-behavior relationships of trust in human-robot collaborations, since perceptions of trust alone are not indicative of how operators’ trusting behaviors shape up.”

Dr. Sarah Hopko ’19, lead author on both papers and a recent industrial engineering doctoral student, said neural responses and perceptions of trust are both symptoms of trusting and distrusting behaviors and relay distinct information on how trust builds, breaches and repairs with different robot behaviors. She emphasized that multimodal trust metrics (neural activity, eye tracking, behavioral analysis, etc.) can reveal perspectives that subjective responses alone cannot offer.

The next step is to expand the research into a different work context, such as emergency response, and to understand how trust in multi-human robot teams impacts teamwork and taskwork in safety-critical environments. Mehta said the long-term goal is not to replace humans with autonomous robots but to support them by developing trust-aware autonomy agents.

“This work is critical, and we are motivated to ensure that humans-in-the-loop robotics design, evaluation and integration into the workplace are supportive and empowering of human capabilities,” Mehta said.




Tags: Brain-Computer Interfaces; Social Psychology; Psychology; Robotics Research; Engineering; Vehicles; Robotics; Neural Interfaces; Artificial Intelligence
© 2022 CTO News Hubb All rights reserved.
