Three years after saying it would no longer develop facial recognition technology out of concern for racial profiling, mass surveillance, and other human rights violations, IBM has re-entered the market for such software.
IBM CEO Arvind Krishna informed lawmakers in a letter that the company would no longer offer “general purpose” facial recognition technology, in the wake of the Black Lives Matter protests that swept the United States in June 2020 following the murder of George Floyd. The fight against racism, he wrote, is as urgent as ever: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.” Toward the end of that year, the company reaffirmed its position, advocating for US export controls in response to concerns that facial recognition technology could be used abroad “to suppress dissent, to infringe on the rights of minorities, or to erase basic expectations of privacy.”
Despite these commitments, documents reviewed by The Verge and Liberty Investigates, a UK investigative journalism unit, show that IBM signed a $69.8 million (£54.7 million) contract with the British government last month to develop a national biometrics platform that will offer a facial recognition function to immigration and law enforcement officials.
The Home Office Biometrics Matcher Platform contract notice explains that the first phase of the project will focus on fingerprint matching, and that later phases will introduce facial recognition for immigration purposes, which is referred to as “an enabler for strategic facial matching for law enforcement.” This project will conclude with the delivery of a “facial matching for law enforcement use-case.”
The system will function as a “one-to-many” matching system, comparing photos of individuals with those already in a database. IBM predicted in September 2020 that “one-to-many” matching systems like these would be “the type of facial recognition technology most likely to be used for mass surveillance, racial profiling, or other violations of human rights.”
IBM spokesperson Imtiaz Mufti denied that the company’s work on the contract conflicted with its 2020 commitments. He said IBM no longer offers general-purpose facial recognition software and remains committed, consistent with those 2020 pledges, to not condoning its use for mass surveillance, racial profiling, or other human rights violations.
The Home Office Biometrics Matcher Platform and associated services contract is not used for mass surveillance, he said. It helps law enforcement and immigration authorities verify personal information against a fingerprint and photo database, and it lacks the video ingest capability needed for face-in-a-crowd biometrics.
Human rights campaigners, however, say IBM’s involvement in the project breaks the company’s 2020 promises. “IBM has shown it is willing to step over the body and memory of George Floyd to chase a Home Office contract. We won’t soon forget this,” said Kojo Kyerewaa of Black Lives Matter UK.
Matt Mahmoudi, a tech researcher at Amnesty International and PhD candidate, said: “The research across the globe is clear; there is no application of one-to-many facial recognition that is compatible with human rights law, and companies — including IBM — must therefore cease its sale, and honor their earlier statements to sunset these tools, even and especially in the context of law and immigration enforcement where the rights implications are compounding.”
Police use of facial recognition technology has been challenged in US and UK courts amid concerns that it can lead to wrongful arrests. In 2019, an independent report on the London Metropolitan Police Service’s use of live facial recognition found that the force may have breached human rights law because there was no “explicit legal basis” for its deployment of the technology. In August of the following year, the UK’s Court of Appeal ruled that South Wales Police’s use of facial recognition technology violated privacy rights and equality law, in part because the force had failed to verify that the software did not exhibit racial or gender bias. The force paused its use of facial recognition after the ruling but has since resumed deploying the technology.
Other tech companies have gone further, barring law enforcement from using their facial recognition services outright. Amazon and Microsoft both halted sales of their facial recognition services to US police departments in the days after IBM announced it was exiting the business.
Amazon said it would ban police agencies from using its Rekognition software for a year beginning in June 2020 and would extend that ban “indefinitely” the following year. The ban on “use of Amazon Rekognition’s face comparison feature by police departments in connection with criminal investigations” is still in effect, as confirmed by a company spokeswoman.
Microsoft announced in June 2020 that it would not sell facial recognition software to US police departments until federal legislation regulating the technology was passed. A Microsoft spokesperson directed The Verge and Liberty Investigates to the company’s website, which states that use of the Azure AI Face service “by or for state or local police in the US is prohibited by Microsoft policy.”
The United Kingdom’s Home Office did not provide a statement when contacted.