WhatsApp AI stickers add guns to Palestinian kids

In what may become another textbook example of AI bias, WhatsApp's sticker maker apparently generated violent imagery when asked about Muslim Palestinians – and refrained from doing so for Jewish Israelis.

The Meta-owned messaging app lets people use AI to generate stickers: images and animations that can be included in conversations. Why? For fun.

Given a prompt containing the words “Palestinian,” “Palestine,” or “Muslim boy Palestinian,” the generative software, we’re told, offered cartoon stickers of boyish characters wearing Islamic garments and carrying what looked like an AK-47 rifle. 

But when users asked it to generate stickers of “Israeli boy” or “Jewish boy Israeli,” it returned benign visuals of virtual characters smiling and dancing, The Guardian reported. Some of them showed what looked like Jewish children playing football or holding up the Israeli flag instead of guns.

Even when asked to produce images of the "Israel army" or "Israeli defense forces," the soldiers depicted by the AI did not hold guns. In one sticker, a man with two swords behind his back appeared to be praying. Any prompt containing the word "Hamas" was blocked, however, with the app stating it "couldn't generate AI stickers. Please try again."

The difference in stickers cannot be overlooked right now as the Israel-Hamas conflict continues, with more than 10,000 casualties reported so far in Gaza and 1,400 in Israel since early October. Meta’s own employees spotted the imbalance and raised concerns internally, it is said. 

Kevin McAlister, a spokesperson representing the internet giant, warned that the AI sticker tool isn’t perfect. “As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems. We’ll continue to improve these features as they evolve and more people share their feedback,” he said. 

CEO Mark Zuckerberg launched the sticker tool in September, saying it combines the text-processing abilities of Meta's Llama 2 large language model with the image-generating capabilities of its Emu system. The software has since been integrated into the biz's social media and messaging platforms, including Facebook, Instagram, and WhatsApp.

It’s not the first time that Meta’s AI sticker tool has raised controversy. Before it was made generally available, beta testers found it would produce inappropriate, bizarre, or lewd images of cartoon characters, politicians, and genitalia. It created pictures of Canadian Prime Minister Justin Trudeau with accentuated buttocks, or Sonic the Hedgehog with breasts, and various phallic caricatures.

Meanwhile, Meta vowed to crack down on misinformation and violent, graphic footage from the Israel-Hamas war posted on its platforms. Spokesperson Andy Stone told CNN the biz had set up a special operations center and hired fluent Hebrew and Arabic speakers to moderate content.

“Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds,” he said.

Meta also vowed on Monday, Reuters reported, to bar political advertisers from using its generative AI-powered advertising tools.

The Register has asked WhatsApp’s parent for further comment. ®
