Meta's Oversight Board investigates explicit AI-generated images posted on Instagram and Facebook

The Oversight Board, Meta's semi-independent policy council, is turning its attention to how the company's social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases, one involving Instagram in India and one involving Facebook in the US, in which explicit AI-generated images of public figures remained online after Meta's systems failed to detect and respond to them.

In both cases, the platforms have since removed the media. According to an email sent to TechCrunch, the board is not naming the individuals targeted by the AI images “to avoid gender-based harassment.”

The Board considers cases related to Meta's moderation decisions. Users must first appeal a moderation decision to Meta before approaching the Oversight Board. The Board is due to publish its full findings and conclusions at a later date.

The cases

Describing the first case, the board said a user reported an AI-generated nude image of an Indian public figure on Instagram as pornography. The image was posted by an account that exclusively shares AI-generated images of Indian women, and the majority of users who react to these images are based in India.

Meta failed to remove the image after the first report, and the ticket was closed automatically after 48 hours when the company did not review the report further. When the original complainant appealed the decision, the report was again closed automatically without any review from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user then appealed to the board. Only at that point did Meta act, taking down the offending content for violating its community standards on bullying and harassment.

The second case involves Facebook, where a user posted an explicit, AI-generated image resembling an American public figure in a group focused on AI creations. In this instance, the social network removed the image because another user had posted it earlier, and Meta had added it to a Media Matching Service Bank under the category “derogatory sexualized photoshop or drawings.”

When TechCrunch asked why the board selected a case in which the company successfully removed an explicit AI-generated image, the board said it selects cases “that are emblematic of broader issues across Meta's platforms.” It said these cases help the advisory board look at the global effectiveness of Meta's policies and processes on different topics.

“We know that Meta is faster and more effective at moderating content in some markets and languages than others. With one case from the US and one from India, we want to see whether Meta is fairly protecting all women globally,” Oversight Board co-chair Helle Thorning-Schmidt said in a statement.

“The Board believes it is important to explore whether Meta's policies and enforcement practices are effective in addressing this problem.”

The deepfake porn and online gender-based violence problem

Some – not all – generative AI tools have expanded in recent years to allow users to generate porn. As TechCrunch previously reported, groups like Unstable Diffusion are trying to monetize AI porn with unclear ethical standards and bias in the data.

Deepfakes have also become a concern in regions like India. Last year, a BBC report said that the number of deepfake videos of Indian actresses has increased in recent times. Data shows that women are most commonly the subjects of deepfake videos.

Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies' approach to countering deepfakes.

“If any platform thinks they can get away without removing deepfake videos, or simply maintain a casual approach, then we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said in a press conference at the time.

Although India has considered bringing specific deepfake-related regulations into law, nothing has been decided yet.

While the country has provisions under the law to report online gender-based violence, experts say the process can be difficult, and there is often little support. In a study published last year, Indian advocacy group IT for Change said courts in India need stronger processes to address online gender-based violence and should not trivialize these cases.

Aparajita Bharti, co-founder of India-based public policy consulting firm The Quantum Hub, said there should be limits on AI models to prevent them from creating explicit content that could cause harm.

“The main risk of generative AI is that the amount of such content will increase because such content is easier to generate and with a higher level of sophistication. So, if the intention to harm someone is already clear, then we first need to stop the creation of such content by training an AI model to limit the output. We should also introduce default labeling for easy detection,” Bharti told TechCrunch over an email.

There are currently only a few laws globally that address the production and distribution of porn generated using AI tools. A handful of US states have laws against deepfakes. This week, Britain introduced a law to criminalize the creation of sexually explicit AI-generated imagery.

Meta's response and next steps

In response to the Oversight Board's cases, Meta said it had removed both pieces of content. However, the social media company did not address the fact that it failed to remove content on Instagram following initial reports from users or how long the content was on the platform.

Meta said it uses a mix of artificial intelligence and human review to detect sexually suggestive content. The social media giant said it doesn't recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has sought public comments on the matter, with a deadline of April 30, including on the harms caused by deepfake porn, contextual information about the spread of such content in regions like the US and India, and possible pitfalls of Meta's approach to detecting AI-generated explicit imagery.

The Board will examine the cases and public comments and post its decision on the site in a few weeks.

These cases indicate that large platforms are still struggling with outdated moderation processes while AI-powered tools have enabled users to create and distribute different types of content quickly and easily. Companies like Meta are experimenting with tools that use AI for content creation, with some efforts to detect such imagery. In April, the company announced it would apply a “Made with AI” badge to deepfakes if it could detect the content using “industry standard AI image indicators” or user disclosure.
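To make the “industry standard AI image indicators” concrete, here is a minimal Python sketch that flags images whose embedded metadata carries the IPTC “trainedAlgorithmicMedia” digital source type, one of the markers that standards-compliant AI generators write into their output. This is an illustrative assumption, not Meta's actual pipeline; the helper name and the naive byte scan are hypothetical, and a production system would parse XMP/C2PA metadata properly.

```python
# Illustrative sketch: detect images that self-declare as AI-generated via
# the IPTC "digital source type" marker embedded in their metadata.
# NOT Meta's implementation; a naive byte scan standing in for a real
# XMP/C2PA metadata parser.

import sys

# IPTC NewsCodes URI that standards-compliant AI generators embed in output.
AI_SOURCE_TYPE = b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def declares_ai_generated(path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC
    'trainedAlgorithmicMedia' marker (hypothetical helper)."""
    with open(path, "rb") as f:
        return AI_SOURCE_TYPE in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        tag = "carries an AI marker" if declares_ai_generated(image_path) else "no AI marker found"
        print(f"{image_path}: {tag}")
```

The obvious limit of this kind of check is that stripping the metadata defeats it entirely, which is one reason indicator-based labeling only catches content that discloses itself.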

However, bad actors are constantly finding ways to evade these detection systems and post problematic content on social platforms.