India, grappling with election misinformation, is considering labels and its own AI security coalition


India has long been a bellwether for how technology is used to sway the public, and it has become a global hotspot for debate over the use, and misuse, of AI in the democratic process. Now the tech companies that built those tools in the first place are traveling to the country to pitch solutions.

Earlier this year, Andy Parsons, a senior director at Adobe who oversees its involvement in the cross-industry Content Authenticity Initiative (CAI), stepped into the whirlpool when he traveled to India to meet media and tech organizations and promote tools that can be integrated into content workflows to identify and flag AI-generated content.

"Instead of trying to detect what is fake or manipulated, we as a society, and this is an international concern, should start declaring authenticity, meaning that consumers should know when something has been generated by AI," he said in an interview.

Parsons said some Indian companies, which are not currently part of the Munich AI election safety accord signed by OpenAI, Adobe, Google and Amazon in February, intend to form a similar alliance within the country.

"Legislating is a very complicated thing. It is difficult to assume that any government will legislate correctly and quickly in any jurisdiction. It is better for governments to take a very steady approach and take their time," he said.

Detection tools are famously inconsistent, but they are a start toward fixing some of the problems, or so the logic goes.

"The concept is already well understood," he said during his visit to Delhi. "What I am helping to do is raise awareness that the tools are also ready. This is not just an idea. It is something that is already deployed."

Andy Parsons, senior director at Adobe. Image Credits: Adobe

The CAI, which promotes royalty-free, open standards for identifying whether digital content was generated by a machine or a human, predates the current hype around generative AI: it was founded in 2019 and now counts around 2,500 members, including Microsoft, Meta, Google, The New York Times, The Wall Street Journal and the BBC.

Just as there is an industry growing around the business of using AI to create media, a smaller one is emerging to try to correct some of its more nefarious applications.

So in February 2021, Adobe went a step further toward creating one of those standards and co-founded the Coalition for Content Provenance and Authenticity (C2PA) with Arm, the BBC, Intel, Microsoft and Truepic. The coalition's goal is an open standard that surfaces the provenance of images, video, text and other media by tapping their metadata, telling people where a file came from, the place and time of its creation, and whether it was altered before it reached the user. The CAI works with the C2PA to promote the standard and make it available to the public.
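
To make the metadata approach concrete: for JPEG files, the C2PA standard embeds its manifest store in JUMBF boxes carried inside APP11 segments of the file. Below is a minimal Python sketch, under that assumption, of a heuristic that checks whether such a segment is present at all; it does not parse or cryptographically verify the manifest, which is what real C2PA SDKs and tools do.

```python
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristic: does this JPEG carry a C2PA (JUMBF) manifest segment?

    C2PA stores its manifest in JUMBF boxes inside JPEG APP11 (0xFFEB)
    segments. This only detects such a segment; it does not validate
    the signed manifest itself.
    """
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":  # SOI marker: not a JPEG at all
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost sync with the segment markers
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # EOI, or start of entropy-coded scan
            break
        (length,) = struct.unpack(">H", data[i + 2 : i + 4])
        if length < 2:  # corrupt length field
            break
        if marker == 0xEB and b"c2pa" in data[i + 4 : i + 2 + length]:
            return True  # APP11 segment mentioning a C2PA box
        i += 2 + length  # skip to the next segment

    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

A production implementation would instead hand the file to a C2PA SDK, which also validates the manifest's cryptographic signature chain.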

The CAI is now actively engaging with governments like India's on adopting that standard to surface the origin of AI content, and on working with authorities to develop guidelines for AI's advancement.

Adobe has nothing to lose by playing an active role in this game. It has not yet acquired or built large language models of its own, but as the home of apps like Photoshop and Lightroom, it is the market leader in tools for the creative community. So it is not only building new products like Firefly with AI generation at their core, but also infusing legacy products with AI. If the market evolves as some believe it will, AI will have to be in the mix if Adobe wants to stay on top. And if regulators (or common sense) have their way, Adobe's future may depend on how successful it is at making sure what it sells does not contribute to the mess.

The bigger picture in India, in any case, is a genuinely messy one.

Google has focused on India to test how it will curb the use of its generative AI tool Gemini when it comes to election content; parties are weaponizing AI to create memes featuring the likenesses of opponents; Meta has set up a deepfake "helpline" for WhatsApp, such is the messaging platform's popularity for spreading AI-powered missives; and at a time when countries are growing more concerned about AI safety and what they must do to ensure it, we will have to see what effect the Indian government's move in March to relax rules on how new AI models are built, tested and deployed will have. It is certainly meant to spur more AI activity, in any event.

Using its open standard, the C2PA has developed a digital nutrition label for content called Content Credentials. CAI members are working to deploy the digital watermark on their content so users can know its origin and whether it is AI-generated. Adobe has Content Credentials across its creative tools, including Photoshop and Lightroom, and they are automatically attached to AI content generated by Adobe's AI model Firefly. Last year, Leica launched a camera with Content Credentials built in, and Microsoft added Content Credentials to all AI-generated images created using Bing Image Creator.

Content Credentials on AI-generated images

Image Credits: Content Credentials
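
For a sense of what sits behind such a label, here is a rough Python sketch of how a viewer might render the information a verified manifest carries. Everything here is illustrative: `ContentCredentials` and `read_manifest` are hypothetical stand-ins for a real C2PA SDK, and the field and action names only approximate the general shape of actual manifests.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentCredentials:
    """Simplified view of a manifest; real Content Credentials are
    signed JUMBF/CBOR structures with far more detail."""
    claim_generator: str   # tool that produced the claim, e.g. "Adobe Firefly"
    signed_by: str         # issuer of the signing certificate
    ai_generated: bool     # whether an AI-generation action is asserted
    actions: list[str]     # recorded edits, e.g. ["c2pa.created"]

def describe(cc: Optional[ContentCredentials]) -> str:
    """Turn a (verified) manifest into the text a viewer might show
    behind a provenance label."""
    if cc is None:
        return "No Content Credentials found."
    origin = "AI-generated" if cc.ai_generated else "captured or created"
    return (
        f"{origin} with {cc.claim_generator}, signed by {cc.signed_by}; "
        f"recorded actions: {', '.join(cc.actions)}"
    )

# Hypothetical usage, assuming some SDK has parsed and verified the file:
# cc = read_manifest("photo.jpg")  # read_manifest is not a real API
# print(describe(cc))
```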

Parsons told TechCrunch that the CAI is talking with governments worldwide on two fronts: promoting the standard as an international standard, and getting it adopted.

"In an election year, it is especially critical for candidates, parties, incumbent offices and administrations, who release material to the media and the public constantly, to make sure that if something purports to come from PM [Narendra] Modi's office, it actually comes from PM Modi's office. There have been many incidents where that is not the case. So, understanding that something is truly authentic is crucial for consumers, fact-checkers, platforms and intermediaries," he said.

He said India's large population and its vast linguistic and demographic diversity make curbing misinformation especially challenging, and he argued in favor of simple labels to cut through it.

"It's a little 'CR'… it's two Western letters, as most Adobe tools are, but it indicates there is more context to be shown," he said.

Controversy continues over what the real point might be of tech companies supporting any kind of AI safety measure: is it genuinely about existential concern, or simply about having a seat at the table, giving the appearance of existential concern while ensuring their interests are protected in the rule-making process?

"It is generally not controversial among the companies that are involved," he said in defense of the work. "All the companies that recently signed the Munich accord, including Adobe, came together and set aside competitive pressures, because these ideas are something we all need to do."