Artificial intelligence (AI) poses a significant threat to humanity, according to OpenAI CEO Sam Altman. Speaking during a visit to the United Arab Emirates, he suggested that an international agency modelled on the International Atomic Energy Agency should oversee the technology. Altman, who is on a global tour discussing AI, emphasised the importance of managing such risks while still benefitting from the technology. OpenAI’s ChatGPT chatbot, which offers essay-style answers to user questions, has raised concerns about how AI could transform the way humans work and learn. Microsoft has invested around $1bn in OpenAI.
In May, hundreds of industry leaders, including Altman, signed an open letter warning that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Altman pointed to the IAEA, created in the aftermath of the US atomic bombings of Japan in 1945, as an example of international oversight of a hazardous technology. He said governments should work together to impose safeguards on AI as it becomes increasingly dangerous.
There are, however, concerns about how autocratic states, such as the UAE, handle AI. Freedom of speech in the federation of seven hereditarily ruled sheikhdoms remains tightly controlled, according to rights groups. Such controls can affect the accuracy of the information that AI programs such as ChatGPT rely on to provide answers.
Andrew Jackson, CEO of the Inception Institute of AI at G42, a firm connected to Abu Dhabi’s powerful national security adviser Sheikh Tahnoun bin Zayed Al Nahyan, also spoke at the event. Peng Xiao, the CEO of G42, previously ran Pegasus, a subsidiary of the Emirati security firm DarkMatter. Jackson described Abu Dhabi’s AI ecosystem as a political powerhouse that would become central to global AI regulation.