Nervous Australia considers restrictions on ‘high-risk’ AI

The Australian government plans to impose stricter regulation on artificial intelligence (AI) systems in response to public concern about emerging technologies such as ChatGPT.

On Wednesday (Jan. 17), the country’s Industry and Science Minister Ed Husic released the government’s initial response to a consultation on AI safety and responsibility.

Despite predictions that adopting AI could boost Australia’s GDP by up to $600 billion annually, surveys show that only one-third of Australians believe there are currently adequate safeguards in place.

A 2024 ‘Global Advisor’ survey by pollster Ipsos found that Australians are more cautious about AI than any other population surveyed, with 69% expressing concern about the technology.

Husic stated, “Australians understand the value of artificial intelligence, but they want to see the risks identified and addressed.”

He added, “We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.”

The paper defines “high-risk” AI to include systems used to assess recidivism risk, screen job applicants, and enable self-driving vehicles, as well as rapidly evolving “frontier AI” such as ChatGPT, noted for its ability to generate content at scale.

Plans to Address AI Growth in Australia

While consultations on the topic are ongoing, the Labor government has committed to taking three immediate steps:

  1. Collaborating with industry to develop a voluntary AI Safety Standard;
  2. Working with industry to develop options for voluntary labeling and watermarking of AI-generated materials;
  3. Establishing an expert advisory group to support the development of options for mandatory guardrails.

The proposals also emphasize transparency. One idea aimed at improving public understanding of large language models (LLMs) like ChatGPT is public reporting on the data used to train AI models. Alongside the voluntary watermarking and labeling work, these measures complement existing government efforts on harmful AI material and on AI use in schools.

Submissions raised legal concerns about the use of AI for deepfakes and breaches of healthcare privacy. Reviews are underway to determine whether training generative AI models constitutes copyright infringement. Pointing to concerns from content creators, the paper highlights how models like DALL-E 2 are trained on images and text scraped without permission.

Minister Husic emphasized the importance of incorporating safe and responsible thinking early in the design, development, and deployment of AI.

Featured Image: DALL-E
