
NVIDIA Joins AI Safety Consortium

NVIDIA has joined the National Institute of Standards and Technology’s new U.S. Artificial Intelligence Safety Institute Consortium as part of the company’s effort to advance safe, secure and trustworthy AI.

The consortium, known as AISIC, will work to create tools, methodologies and standards that promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST — an agency of the U.S. Department of Commerce — and fellow consortium members to advance the consortium’s mandate.

NVIDIA’s participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

NVIDIA actively works to make AI safety a reality through a broad range of development initiatives, including NeMo Guardrails, open-source software for keeping large language model responses accurate, appropriate, on topic and secure.
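As a rough illustration of what such guardrails look like in practice, NeMo Guardrails lets developers express conversational rails in its Colang language. The sketch below shows a minimal on-topic rail; the user phrasing and bot response are hypothetical examples, not part of this announcement:

```
# Colang sketch (hypothetical example): steer an off-topic question
# into a polite refusal so the model stays on topic.

define user ask off topic
  "what do you think about the election?"

define bot refuse off topic
  "I can only help with questions about our products."

define flow off topic
  user ask off topic
  bot refuse off topic
```

A flow like this is loaded alongside the model configuration, and the runtime matches incoming user messages against the defined intents before the response reaches the user.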

In 2023, NVIDIA endorsed the Biden Administration’s voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation’s National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.

Through the consortium, NIST aims to facilitate knowledge sharing and advance applied research and evaluation activities to accelerate innovation in trustworthy AI. AISIC members, which include more than 200 of the nation’s leading AI creators, academics, government and industry researchers, as well as civil society organizations, bring technical expertise in areas such as AI governance, systems and development, psychometrics and more.

In addition to participating in working groups, NVIDIA plans to contribute a range of computing resources, best practices for implementing AI risk-management frameworks and AI model transparency, and several NVIDIA-developed, open-source tools for AI safety, red-teaming and security.