The Brainwash database, created by Stanford University researchers, contained more than 10,000 images and nearly 82,000 annotated heads.
SAN FRANCISCO — Dozens of databases of people’s faces are being compiled without their knowledge by companies and researchers, with many of the images then being shared around the world, in what has become a vast ecosystem fueling the spread of facial recognition technology.
The databases are pulled together with images from social networks, photo websites, dating services like OkCupid and cameras placed in restaurants and on college quads. While there is no precise count of the data sets, privacy activists have pinpointed repositories that were built by Microsoft, Stanford University and others, with one holding over 10 million images while another had more than two million.
The face compilations are being driven by the race to create leading-edge facial recognition systems. This technology learns how to identify people by analyzing as many digital pictures as possible using “neural networks,” which are complex mathematical systems that require vast amounts of data to build pattern recognition.
Tech giants like Facebook and Google have most likely amassed the largest face data sets, which they do not distribute, according to research papers. But other companies and universities have widely shared their image troves with researchers, governments and private enterprises in Australia, China, India, Singapore and Switzerland for training artificial intelligence, according to academics, activists and public papers.
Companies and labs have gathered facial images for more than a decade, and the databases are just one element in building facial recognition technology. But people often have no idea that their faces are in them. And while names are typically not attached to the photos, individuals can still be recognized because each face is unique.
A visualization of 2,000 of the identities included in the MS Celeb database from Microsoft.
Questions about the data sets are rising because the technologies that they have enabled are now being used in potentially invasive ways. Documents released last Sunday revealed that Immigration and Customs Enforcement officials employed facial recognition technology to scan motorists’ photos to identify undocumented immigrants. The F.B.I. also spent more than a decade using such systems to compare driver’s license and visa photos against the faces of suspected criminals, according to a Government Accountability Office report last month. On Wednesday, a congressional hearing tackled the government’s use of the technology.
There is no oversight of the data sets. Activists and others said they were angered by the possibility that people’s likenesses had been used to build ethically questionable technology and that the images could be misused. At least one face database created in the United States was shared with a company in China that has been linked to ethnic profiling of the country’s minority Uighur Muslims.
Over the past several weeks, some companies and universities, including Microsoft and Stanford, removed their face data sets from the internet because of privacy concerns. But given that the images were already so well distributed, they are most likely still being used in the United States and elsewhere, researchers and activists said.
“You come to see that these practices are intrusive, and you realize that these companies are not respectful of privacy,” said Liz O’Sullivan, who oversaw one of these databases at the artificial intelligence start-up Clarifai. She said she left the New York-based company in January to protest such practices.
“The more ubiquitous facial recognition becomes, the more exposed we all are to being part of the process,” she said.
Google, Facebook and Microsoft declined to comment.
One database, which dates to 2014, was put together by researchers at Stanford. It was called Brainwash, after a San Francisco cafe of the same name, where the researchers tapped into a camera. Over three days, the camera took more than 10,000 images, which went into the database, the researchers wrote in a 2015 paper. The paper did not address whether cafe patrons knew their images were being taken and used for research. (The cafe has closed.)
The Stanford researchers then shared Brainwash. According to research papers, it was used in China by academics associated with the National University of Defense Technology and Megvii, an artificial intelligence company that The New York Times previously reported has provided surveillance technology for monitoring Uighurs.
The Brainwash data set was removed from its original website last month after Adam Harvey, an activist in Germany who tracks the use of these repositories through a website called MegaPixels, drew attention to it. Links between Brainwash and papers describing work to build A.I. systems at the National University of Defense Technology in China have also been deleted, according to documentation from Mr. Harvey.
Stanford researchers who oversaw Brainwash did not respond to requests for comment. “As part of the research process, Stanford routinely makes research documentation and supporting materials available publicly,” a university official said. “Once research materials are made public, the university does not track their use.”
Duke University researchers also started a database in 2014, using eight cameras on campus to collect images, according to a 2016 paper published as part of the European Conference on Computer Vision. The cameras were marked with signs, said Carlo Tomasi, the Duke computer science professor who helped create the database. The signs gave a number or email that people could use to opt out.
The Duke researchers ultimately gathered more than two million video frames with images of over 2,700 people, according to the paper. They also posted the data set, named Duke MTMC, online. It was later cited in myriad documents describing work to train A.I. in the United States, China, Japan, Britain and elsewhere.
Duke University researchers started building a database in 2014 using eight cameras on campus to collect images.
Dr. Tomasi said that his research group did not do face recognition and that the Duke MTMC data set was unlikely to be useful for such technology because of poor angles and lighting.
“Our data was recorded to develop and test computer algorithms that analyze complex motion in video,” he said. “It happened to be people, but it could have been bicycles, cars, ants, fish, amoebas or elephants.”
At Microsoft, researchers have claimed on the company’s website to have created one of the biggest face data sets. The collection, called MS Celeb, included over 10 million images of more than 100,000 people.