Deep-learning algorithms suffer from a fundamental problem: They can adopt unwanted biases from the data on which they're trained. In healthcare, this can lead to bad diagnoses and care ...
Care for some of the sickest Americans is decided in part by algorithm. New research shows that software guiding care for tens of millions of people systematically privileges white patients over black ...
Stanford found itself in hot water last week after deploying a faulty Covid-19 vaccine distribution algorithm. But the fiasco offers a cautionary tale that extends far beyond Stanford’s own doors — ...
As organizations increasingly replace human decision-making with algorithms, they may assume these computer programs lack our biases. But algorithms still reflect the real world, which means they can ...
When OpenAI released its huge natural-language model GPT-3 last summer, jaws dropped. Coders and developers with special access to an early API rapidly discovered new (and unexpected) things GPT-3 ...