AI leaders call for regulation to avoid "catastrophic" risks posed by the technology


Leaders at OpenAI, the developer of ChatGPT, are calling for the regulation of “superintelligent” AIs.

The move follows warnings from researchers about the “catastrophic” and “societal-scale” risks posed by the development of artificial intelligence.

Writing on the company’s website, co-founders Greg Brockman and Ilya Sutskever and chief executive Sam Altman call for any “existential risk” posed by the technology to be reduced, asking international regulators to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security”.

Together, they state:

“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations. In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”
OpenAI leaders

They also called for “some degree of coordination” in the short term between companies currently researching and developing powerful AI models.

Commenting on the risks of AI, OpenAI’s leaders said that “people around the world should democratically decide on the bounds and defaults for AI systems”, while conceding that “we don’t yet know how to design such a mechanism”.

However, they maintain that AI development should continue.

“We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity)… Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
OpenAI leaders

Prime Minister Rishi Sunak recently met with the CEOs of OpenAI, Google DeepMind and Anthropic to discuss how to safely manage the technology’s expansion.

The Prime Minister said AI is "the defining technology of our time with the potential to positively transform humanity", while also identifying the “existential threats” it poses, including disinformation and national security risks.
