Google breaks up a key AI ethics watchdog


When Google CEO Sundar Pichai emailed employees this month about the company's priorities for 2024, developing AI responsibly was at the top of the list. Some employees now wonder whether Google can meet that goal. The small team that served as the company's primary internal AI ethics watchdog has lost its leader and is being restructured, according to four people familiar with the changes. A Google spokesperson says the work will continue in a strengthened form, but declined to provide details.

Google's Responsible Innovation team, known as RESIN, sat inside the Office of Compliance and Integrity in the company's Global Affairs division. It reviewed internal projects for compatibility with Google's AI Principles, which define rules for the development and use of the technology, a key role as the company races to compete in generative AI. According to an annual report on AI Principles work published by Google this month, RESIN conducted more than 500 reviews last year, including one of the Bard chatbot.

RESIN's role appears uncertain because its leader and founder, Jen Gennai, director of responsible innovation, abruptly left that role this month, said the sources, who spoke on condition of anonymity to discuss personnel changes. Gennai's LinkedIn profile lists her as an AI ethics and compliance advisor at Google as of this month, a title change that, based on the pattern of past departures from the company, sources say suggests she will soon leave.

According to the sources, Google divided Gennai's team of about 30 people in two. Company spokesperson Brian Gabriel says 10 percent of RESIN's staff will remain in place, while 90 percent of the team has been transferred to Trust and Safety, which fights abuse of Google services and also resides in the Global Affairs division. No one appears to have been fired, the sources say. The rationale for the changes, and how responsibilities will be divided, could not be learned. Some sources say they have not been told how AI Principles reviews will be handled going forward.

Gabriel declined to say how RESIN's work reviewing AI projects will be handled in the future, but described the change as a sign of Google's commitment to responsible AI development. “This move puts this specialized responsible AI team at the heart of our well-established trust and safety efforts, which are embedded in our product reviews and planning,” he says. “It will also help strengthen and amplify our responsible innovation work across the company.”


Got any tips?

Are you a current or former Google employee? We'd like to hear from you. Using a nonwork phone or computer, contact Paresh Dave at Paresh_dave@wired.com or on Signal/WhatsApp/Telegram at +1-415-565-1302.

Google is known for frequently shuffling its ranks, but RESIN had remained largely untouched since the group's inception. Although other teams, and hundreds of additional people, work on AI oversight at Google, RESIN was the most prominent, covering all of Google's core services.

In addition to losing its leader, Gennai, RESIN this month also lost one of its most influential members, Sara Tangdall, a top AI Principles ethics specialist. According to her LinkedIn profile, she is now a responsible AI product director at Salesforce. Tangdall declined to comment, and Gennai did not respond to requests for comment.

AI rebellion

Google created its Responsible Innovation team in 2018, shortly after the company's AI experts and others publicly rose up in opposition to a Pentagon contract called Project Maven, which used Google algorithms to analyze drone surveillance imagery. RESIN became the chief steward of a set of AI Principles introduced after the protests, which say Google will use AI to benefit people, not as a weapon or to undermine human rights. Gennai helped author the principles.

Teams across Google can submit projects for review by RESIN, which provides feedback and has sometimes blocked ideas seen as violating the AI Principles. The group has blocked the release of AI image generators and voice synthesis algorithms that could be used to create deepfakes.

Unlike privacy reviews, which every project must go through, seeking guidance on the AI Principles is not mandatory for most teams. But Gennai has said that early review of AI systems helps prevent costly ethical violations. “If properly implemented, Responsible AI makes products better by uncovering and working to reduce the harm that unfair bias can cause, improving transparency, and increasing security,” she said during a Google conference in 2022.