By James Vincent @jjvincent | Sep 21, 2017, 1:24pm EDT
Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like "New AI can guess whether you're gay or straight from a photograph" and "AI Can Tell If You're Gay From a Photo, and It's Terrifying."
As you might have guessed, it's not as straightforward as that. (And to be clear, based on this work alone, AI can't tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper's authors, Dr. Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone's sexual orientation, but their political views, criminality, and even their IQ.