Center for AI Safety
The Center for AI Safety (CAIS) is a nonprofit organization based in San Francisco that promotes the safe development and deployment of artificial intelligence (AI). CAIS's work encompasses research in technical AI safety and AI ethics, advocacy, and support to grow the AI safety research field. It was founded in 2022 by Dan Hendrycks and Oliver Zhang. In May 2023, CAIS published a statement on AI risk of extinction signed by hundreds of professors of AI, leaders of major AI companies, and other public figures.


Research

CAIS researchers published "An Overview of Catastrophic AI Risks", which details risk scenarios and risk mitigation strategies. Risks described include the use of AI in autonomous warfare or for engineering pandemics, as well as AI capabilities for deception and hacking. Another work, conducted in collaboration with researchers at Carnegie Mellon University, described an automated way to discover adversarial attacks on large language models that bypass safety measures, highlighting the inadequacy of current safety systems.
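
The following is a deliberately simplified, self-contained Python sketch of the general idea behind automated adversarial-suffix search, not the method published by CAIS and Carnegie Mellon researchers (which guides the search with model gradients). The refusal_score function here is a hypothetical stand-in for querying a real language model and measuring its tendency to refuse.

    import random
    import string

    def refusal_score(prompt: str) -> float:
        """Hypothetical stand-in for a model's tendency to refuse a prompt."""
        # Toy rule: the "model" refuses less when the prompt ends with unusual characters.
        rare = sum(1 for c in prompt[-20:] if c in "!{}[]|~^")
        return max(0.0, 1.0 - 0.05 * rare)

    def find_adversarial_suffix(base_prompt: str, length: int = 20, steps: int = 500) -> str:
        """Greedy random search over suffix characters that lowers the refusal score."""
        charset = string.printable[:94]  # letters, digits, punctuation (no whitespace)
        suffix = list(random.choices(charset, k=length))
        best = refusal_score(base_prompt + "".join(suffix))
        for _ in range(steps):
            i = random.randrange(length)
            old = suffix[i]
            suffix[i] = random.choice(charset)
            score = refusal_score(base_prompt + "".join(suffix))
            if score <= best:
                best = score        # keep substitutions that reduce refusal
            else:
                suffix[i] = old     # revert substitutions that do not help
        return "".join(suffix)

    if __name__ == "__main__":
        print(repr(find_adversarial_suffix("Example disallowed request.")))

In the actual research, the search is run against real models and the discovered suffixes transfer across systems, which is what highlighted the inadequacy of current safety measures.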


Activities

Other initiatives include a compute cluster to support AI safety research, an online course titled "Intro to ML Safety", and a fellowship for philosophy professors to address conceptual problems. The Center for AI Safety Action Fund is a sponsor of the California bill SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. In 2023, the cryptocurrency exchange FTX, which went bankrupt in November 2022, attempted to recoup $6.5 million that it had donated to CAIS in 2022, before its collapse.


See also

* AI safety
* Center for Human-Compatible AI

