Scientists have found out why AI can be racist and sexist
Microsoft's failed experiment with its AI chatbot Tay, which turned into a hardened racist within 24 hours of interacting with people on Twitter, showed that newly created AI systems can fall victim to human prejudice and, in particular, to stereotypical thinking. A small group of researchers from Princeton University set out to find out why this happens, and, interestingly, they succeeded. In addition, they developed an algorithm that can predict the expression of social stereotypes based on an intensive analysis of how people communicate with each other online.
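The article does not describe the Princeton algorithm itself, but the general idea behind such methods can be sketched: words are represented as vectors learned from human text, and a stereotype shows up as a difference in how strongly a word associates with two contrasting attribute words. The following toy example is a hypothetical illustration only; the vectors and word choices are invented and are not the researchers' actual data or code.

```python
# Toy sketch: measuring association bias in word "embeddings"
# via cosine similarity. All vectors below are invented for
# illustration; real systems learn them from large text corpora.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical 3-dimensional vectors (made up for this example).
vectors = {
    "engineer": [0.9, 0.1, 0.3],
    "nurse":    [0.2, 0.8, 0.4],
    "he":       [0.8, 0.2, 0.3],
    "she":      [0.3, 0.9, 0.4],
}

def association_bias(word, attr_a, attr_b):
    """Positive result: `word` sits closer to attr_a than attr_b;
    negative: the reverse. A nonzero gap signals a learned bias."""
    return (cosine(vectors[word], vectors[attr_a])
            - cosine(vectors[word], vectors[attr_b]))

print(association_bias("engineer", "he", "she"))  # positive: leans "he"
print(association_bias("nurse", "he", "she"))     # negative: leans "she"
```

Because the vectors come from human-written text in real systems, any stereotype present in how people write gets baked into these distances, which is one way human prejudice transfers into an AI model.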