OpenAI CEO Sam Altman suggests UAE as a potential global AI ‘regulatory sandbox’

Sam Altman, CEO of OpenAI, has suggested that the United Arab Emirates (UAE) could serve as a global “regulatory sandbox” for AI technologies, viewing the country as an ideal place to test and develop the rules that could later govern the use of AI around the world. Speaking virtually at the World Governments Summit alongside the UAE’s AI minister, Altman emphasized the need for a unified policy to regulate future advances in AI.

Altman is best known as the head of OpenAI, the company behind ChatGPT. He believes the UAE is well-positioned to lead global discussions on regulating AI, given its investments in the technology and its attention to key policy questions. The comments come as Altman seeks investors in the Middle East to support an AI semiconductor initiative. Despite the UAE’s significant investments in AI, the country’s ties to China have raised concerns in the United States; notably, the Emirati AI company G42 has adjusted its presence in China to comply with US demands.

Altman also discussed OpenAI’s plans to make additional large language models (LLMs) open source. LLMs are deep-learning models capable of recognizing, summarizing, translating, predicting and generating large amounts of content. He also spoke about building tools for less affluent nations that cannot afford the high cost of developing their own AI systems, saying he wants to create offerings that make sense for those countries and give them access to AI services.

Overall, Altman sees the UAE as a valuable testing ground for AI regulatory frameworks, while acknowledging potential challenges around international relations and investment scrutiny.


By Editor
