Frontier AI regulation: Managing emerging risks to public safety

Shahar Avin (Centre for the Study of Existential Risk, University of Cambridge)
Miles Brundage (OpenAI)
Justin Bullock (University of Washington; Convergence Analysis)
Duncan Cass-Beggs (Centre for International Governance Innovation)
Ben Chang (The Andrew W. Marshall Foundation)
Tantum Collins (GETTING-Plurality Network, Edmond & Lily Safra Center for Ethics; Harvard University)
Tim Fist (Center for a New American Security)
Gillian Hadfield (University of Toronto; Vector Institute; OpenAI)
Alan Hayes (Akin Gump Strauss Hauer & Feld LLP)
Lewis Ho (Google DeepMind)
Sara Hooker (Cohere For AI)
Eric Horvitz (Microsoft)
Noam Kolt (University of Toronto)
Jonas Schuett (Centre for the Governance of AI)
Yonadav Shavit (Harvard University) ***
Divya Siddarth (Collective Intelligence Project)
Robert Trager (Centre for the Governance of AI; University of California, Los Angeles)
Kevin Wolf (Akin Gump Strauss Hauer & Feld LLP)

Listed authors contributed substantive ideas and/or work to the white paper. Contributions include writing, editing, research, detailed feedback, and participation in a workshop on a draft of the paper. Given the size of the group, inclusion as an author does not entail endorsement of all claims in the paper, nor does inclusion entail an endorsement on the part of any individual’s organization.

*Significant contribution, including writing, research, convening, and setting the direction of the paper.
**Significant contribution, including editing, convening, detailed input, and setting the direction of the paper.
***Work done while an independent contractor for OpenAI.
†Corresponding authors. Markus Anderljung (markus.anderljung@governance.ai) and Anton Korinek (akorinek@brookings.edu).
