AI Weekly: EU facial recognition ban highlights need for U.S. legislation




This week, the European Parliament, the body responsible for adopting European Union (EU) legislation, passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places. The resolution, which also proposes a moratorium on the deployment of predictive policing software, would restrict the use of remote biometric identification to cases involving “serious” crime, such as kidnapping and terrorism.

The approach stands in contrast to that of U.S. agencies, which continue to embrace facial recognition despite studies showing the technology’s potential for ethnic, racial, and gender bias. A recent report from the U.S. Government Accountability Office found that 10 federal agencies, including the Departments of Agriculture, Commerce, Defense, and Homeland Security, plan to expand their use of facial recognition between 2020 and 2023 as they implement as many as 17 different facial recognition systems.

Commercial face-analyzing systems have been critiqued by scholars and activists alike throughout the past decade, if not longer. Imaging technologies and techniques, from sepia-tinged film to low-contrast digital cameras, have historically favored lighter skin, and algorithms trained on the resulting images can encode that racial bias. Indeed, independent benchmarks of vendors’ systems by the Gender Shades project and others have revealed that facial recognition technologies are susceptible to a range of prejudices, which are exacerbated by misuse in the field. For example, a report from Georgetown Law’s Center on Privacy and Technology details how police feed facial recognition software flawed data, including composite sketches and pictures of celebrities who share physical features with suspects.

At least three people in the U.S., all of them Black men, have been wrongfully arrested based on poor facial recognition matches. In Detroit, which began piloting facial recognition software in 2017, police in 2020 used the technology to conduct upwards of 100 searches of suspects and made more than 80 arrests in cases where a possible match was identified, according to the Detroit Police Department’s public records. Overseas, the facial recognition technology used by the U.K.’s Metropolitan Police in 2019 was 81% inaccurate, mistakenly flagging four out of five innocent people as wanted suspects, according to a University of Essex whitepaper commissioned by Scotland Yard.

Still, the global facial recognition market is expected to be worth $4.45 billion in 2021, and many governments are clamoring for the technology. Vendors like AnyVision and Gorilla Technologies are alleged suppliers for Taiwanese prisons and Israeli army checkpoints in the West Bank. Huawei has tested software that could reportedly recognize members of the Uighur minority group. And Clearview AI, which has scraped 10 billion photos from the web to develop its facial recognition systems, claims to have 3,100 law enforcement and government customers, including the FBI and U.S. Customs and Border Protection.

In 2019, only about half of U.S. adults said they trusted law enforcement to use facial recognition responsibly, according to a Pew Research Center poll. Many government employees view AI technologies like facial recognition with suspicion as well. According to a recent Gartner poll of public agencies in Asia Pacific, Europe, Latin America, and North America, only 53% of employees who have worked with AI technologies believe those technologies provide insights that help them do their jobs better. Among employees who haven’t used AI, the share drops to 34%, reflecting broader concern about the technology’s impact.

In lieu of U.S. federal regulation, some states, cities, and even companies have taken matters into their own hands. Oakland and San Francisco in California and Somerville, Massachusetts are among the cities where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including facial images. New York recently passed a moratorium on the use of biometric identification in schools until 2022, and lawmakers in Massachusetts and Maine have advanced suspensions of government use of biometric surveillance systems. More recently, Portland, Maine approved a ballot initiative banning the use of facial recognition by police and city agencies. And Amazon, IBM, and Microsoft have self-imposed moratoriums on the sale of facial recognition systems.

But as evidenced by the EU’s overtures, it’s becoming clear that more comprehensive guidelines will be needed to regulate the use of facial recognition technologies in the public sector. U.S. Senators Bernie Sanders (I-Vt.), Elizabeth Warren (D-Mass.), and Ron Wyden (D-Ore.), among others, have proposed legislative remedies, but given the current gridlock on Capitol Hill, those proposals are likely to remain stalled for the foreseeable future.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat


Source: https://venturebeat.com/2021/10/08/ai-weekly-eu-facial-recognition-ban-highlights-need-for-u-s-legislation/
