Renowned AI experts call on world governments and tech giants to take action now on AI risks.
“Probably none of us will have a job,” Elon Musk stated about AI, speaking remotely at a leading tech conference in Paris on 23 May. The billionaire has not been shy about calling AI his biggest fear. Two days earlier, at the AI Safety Summit in Seoul on 21 May, 16 companies headlined by Google, Meta and Microsoft pledged to develop AI safely, a declaration backed by 10 countries and the EU. “[W]e are seeing life-changing technological advances… And life-threatening new risks – from disinformation to mass surveillance to the prospect of lethal autonomous weapons,” commented UN Secretary-General António Guterres at the global summit.
You say you want an AI revolution
In the journal ‘Science’, 25 of the world’s top academic experts on AI, including the ‘godfathers of AI’ Geoffrey Hinton and Yoshua Bengio, warn that much more needs to be done to reach a global agreement on safety. They present policy measures for research, development and governance to mitigate the risks and threats. The authors argue that we are poorly prepared to deal with potential AI harms, since few resources are dedicated to safeguarding the safe and ethical development and deployment of AI technologies. They urge major tech companies and public funders to invest more, allocating at least one third of their budgets to addressing the risks. They also call on governments and regulators to enforce standards that prevent recklessness and misuse.
Turning discussion into real action
“The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do,” remarked co-author Prof. Philip Torr from the University of Oxford in a news release. “We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off,” explained co-author Dr Jan Brauner, also from the University of Oxford. “Currently, the AI world is focussed on pushing AI capabilities further and further, with safety and ethics as an afterthought. For AI to be a boon, we need to reorient; pushing capabilities is not enough.”
Commission establishes AI Office to strengthen EU leadership in safe and trustworthy Artificial Intelligence
More information: European Commission