Report highlights disagreement among experts on AI safety

An interim artificial intelligence (AI) safety report has highlighted the lack of universal agreement among AI experts on a range of topics, including both the state of current AI capabilities and how these could evolve over time.

The International Scientific Report on the Safety of Advanced AI was among the key commitments to emerge from the Bletchley Park discussions as part of the landmark Bletchley Declaration.

The report explores differing opinions on the likelihood of extreme risks that could impact society, such as large-scale unemployment, AI-enabled terrorism and a loss of control over the technology.

The experts who took part in the report broadly agreed that society and policymakers in government need to prioritise improving their understanding of the impact of AI technology.

The report’s chair, Yoshua Bengio, said: “When used, developed and regulated responsibly, AI has incredible potential to be a force for positive transformative change in almost every aspect of our lives. However, because of the magnitude of impacts, the dual use and the uncertainty of future trajectories, it is incumbent on all of us to work together to mitigate the associated risks in order to be able to fully reap these benefits.

“Governments, academia and the wider society need to continue to advance the AI safety agenda to ensure we can all harness AI safely, responsibly and successfully.”

Initially launched as the State of the Science report last November, the report unites a diverse global team of AI experts, including an Expert Advisory Panel drawn from 30 nations, as well as representatives of the United Nations and the European Union.

Secretary of state for science, innovation and technology Michelle Donelan said: “Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’s incredible opportunities safely and responsibly for decades to come.

“The work of Yoshua Bengio and his team will play a substantial role in informing our discussions at the AI Seoul Summit next week, as we continue to build on the legacy of Bletchley Park by bringing the best available scientific evidence to bear in advancing the global conversation on AI safety.”

The interim report looked at advanced “general-purpose” AI, including AI systems that can produce text and images and make automated decisions. The final report is expected to be published in time for the AI Action Summit in France, and will now take in evidence from industry, civil society and a wide range of representatives from the AI community.

The Department for Science, Innovation and Technology said this feedback will help the report keep pace with the technology’s development, as it is updated to reflect the latest research and expanded into a range of other areas to give a comprehensive view of advanced AI risks.

“Democratic governance of AI is urgently needed, on the basis of independent research, beyond hype,” said Marietje Schaake, international policy director at Stanford University Cyber Policy Center.

“The interim International scientific report catalyses expert views about the evolution of general-purpose AI, its risks and what future implications are. While much remains unclear, action by public leaders is needed to keep society informed about AI, and to mitigate present day harms such as bias, disinformation and national security risks, while preparing for future consequences of more powerful general purpose AI systems.”
