
200 AI researchers urge OpenAI, Google, Meta to allow safety checks


Over 200 prominent artificial intelligence (AI) researchers have penned an open letter urging major players in AI such as OpenAI, Meta, and Google to permit outside experts to independently assess and test the safety of their AI models and systems.

The letter highlights concerns that stringent regulations imposed by tech companies to prevent the misuse of AI tools are inadvertently hindering crucial independent research essential for scrutinizing these systems for potential risks and vulnerabilities.

Notable individuals endorsing the letter include Percy Liang from Stanford University, Pulitzer Prize-winning journalist Julia Angwin, Renée DiResta from the Stanford Internet Observatory, AI ethics researcher Deb Raji, and former government advisor Suresh Venkatasubramanian.

What are the AI researchers worried about?

The researchers argue that AI company policies, which restrict certain forms of testing and prohibit copyright infringement, the creation of deceptive content, and other misuse, are being enforced too broadly. This has created a “chilling effect” in which auditors fear being banned or facing legal consequences for stress-testing AI models without explicit permission.

The letter emphasizes that companies developing generative AI should learn from the missteps of social media platforms, many of which have effectively blocked research efforts aimed at holding them accountable.

The letter comes amid escalating tensions, with AI companies like OpenAI accusing The New York Times of “hacking” for probing copyright concerns in ChatGPT. Meta has amended its terms to warn that it will revoke the license to its latest language model if it is used to infringe intellectual property.

The researchers assert that companies should establish a “safe harbor” for responsible auditing, along with constructive channels for reporting vulnerabilities found during testing, rather than leaving auditors to expose issues on social media.

Borhane Blili-Hamelin from the AI Risk and Vulnerability Alliance remarks, “We have a flawed oversight system. Sure, people identify issues, but the only way to make an impact is through these ‘gotcha’ moments where the company is caught off guard.”

The letter and associated policy proposal seek to cultivate a more collaborative environment for external researchers to scrutinize the safety and potential risks of AI systems affecting millions of users.

Featured image: Ideogram
