The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.
NIST is an agency of the US Department of Commerce whose mission is “to promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” According to the agency’s website, its core competencies are “measurement science,” “rigorous traceability” and “development and use of standards.” NIST also develops cybersecurity standards, guidelines and best practices, and released its AI Risk Management Framework in January 2023.
Christiano, who is known for his ties to the effective altruism (EA) movement and its offshoot, longtermism (a view that prioritizes the long-term future of humanity, popularized by philosopher William MacAskill), was allegedly rushed through the hiring process, with staff not informed until today, one of the sources said.
Christiano’s appointment, which reportedly came directly from Secretary of Commerce Gina Raimondo, has sparked outrage among NIST employees who fear that his association with EA and longtermism could compromise the institute’s objectivity and integrity.
However, Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told VentureBeat that President Biden’s AI Executive Order, signed in October 2023, specifically directs NIST and the AISI to focus on certain tasks, including risks from chemical, biological, radiological and nuclear (CBRN) materials, for which Paul Christiano is “extremely qualified.”
Many say EA, defined by the Center for Effective Altruism as an “intellectual project using evidence and reason to figure out how to benefit others as much as possible,” has turned into a cult-like group of highly influential and wealthy adherents (made famous by FTX founder and jailbird Sam Bankman-Fried) whose paramount concern is preventing a future AI catastrophe from destroying humanity. Critics of EA’s focus on this existential risk, or “x-risk,” say it comes at the expense of a necessary focus on current, measurable AI risks, including bias, misinformation, high-risk applications and traditional cybersecurity.