A recent study has revealed that AI, much like humans, can demonstrate biases relating to race or gender.
The study, published in Frontiers in Digital Health and led by University of Colorado scientist Theodora Chaspari, suggests that certain artificial intelligence (AI) tools used in healthcare may fail to account for how people of different genders and races speak.
The paper, "Deconstructing demographic bias in speech-based machine learning models for digital health," notes that individuals may speak differently depending on their gender or race: women, for example, generally speak at a higher pitch than men, and speech patterns vary across racial groups.
The researchers, Michael Yang, Abd-Allah El-Attar, and Theodora Chaspari, found that these natural variations in speech can confuse algorithms used to screen individuals for mental health conditions such as anxiety or depression.
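The kind of disparity the researchers describe can be made concrete with a simple audit. The sketch below is a hypothetical illustration, not the study's code: the predictions, labels, and group names are invented for demonstration. It compares a screening model's false-negative rate, the fraction of genuinely affected individuals the model misses, across two demographic groups.

```python
# Hypothetical illustration (not the authors' code): audit a screening
# model by comparing its false-negative rate across demographic groups.
# All data and group labels below are invented for demonstration.

def false_negative_rate(y_true, y_pred):
    """Fraction of true positive cases (label 1) that the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    misses = sum(1 for _, p in positives if p == 0)
    return misses / len(positives)

# Toy predictions from an imaginary anxiety screener, split by group.
results = {
    "group_a": {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 1, 0, 0, 0]},
    "group_b": {"y_true": [1, 1, 1, 0, 0], "y_pred": [0, 0, 1, 0, 0]},
}

for group, r in results.items():
    fnr = false_negative_rate(r["y_true"], r["y_pred"])
    print(f"{group}: false-negative rate = {fnr:.2f}")
```

In this toy data the model misses one of three positive cases in group_a but two of three in group_b, the sort of gap that would mean one group's anxiety or depression goes undetected more often than another's.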
The finding adds to a growing body of research showing that AI systems can exhibit biases based on race or gender. The study, published on July 24, emphasizes the importance of training AI systems on diverse, representative data to avoid propagating these biases.