As the artificial intelligence revolution ramps up, one trend is clear: Bias in the training of AI systems is producing real-world discriminatory outcomes. AI recruitment tools have been shown to discriminate against women. ChatGPT has demonstrated racist and discriminatory biases. In every reported case of police misidentifying a suspect because of facial recognition technology, that person has been Black.

And now, new research suggests that the pedestrian detection software in self-driving cars may also be less effective at detecting people of color, as well as children generally, as a result of AI bias, putting them at greater safety risk as more carmakers adopt the technology. A team of researchers in the UK and China tested how well eight popular pedestrian detectors performed depending on a person's race, gender, and age.