Safe AI Forum (SAIF) Company Profile
Background
The Safe AI Forum (SAIF) is a 501(c)(3) non-profit organization dedicated to advancing global coordination to minimize extreme risks from artificial intelligence (AI) and to ensuring that AI's benefits are equitably shared. Established in late 2023 by co-founders Fynn Heide and Conor McGurk, SAIF began as a fiscally sponsored project under FAR.AI and transitioned to independent operations in May 2025. The organization's mission centers on fostering responsible AI governance through shared understanding and collaboration among key global stakeholders.
Key Strategic Focus
SAIF's strategic objectives include:
- International Collaboration: Facilitating dialogues and partnerships among scientists, policymakers, and industry leaders to address and mitigate catastrophic AI risks.
- Research and Advisory Services: Conducting research on AI governance and providing advisory services to organizations aligned with SAIF's mission.
- Educational Workshops: Organizing workshops on critical topics related to AI safety and governance.
SAIF's primary audiences include international scientific communities, governmental bodies, and non-profit organizations focused on AI safety and ethics.
Financials and Funding
As a non-profit, SAIF does not publicly disclose detailed funding information; the organization likely relies on grants, donations, and partnerships to support its initiatives. Its transition to independent non-profit status in May 2025 signals a strategic move to increase operational autonomy and expand its programs.
Pipeline Development
SAIF's flagship program is the International Dialogues on AI Safety (IDAIS), which convenes senior computer scientists and AI governance experts to build international collaboration on extreme AI risks. Notable events include:
- IDAIS-Oxford (2023): The inaugural dialogue, held in Oxford, UK, bringing together global experts to discuss AI safety.
- IDAIS-Beijing (2024): The second dialogue in Beijing, China, focusing on international cooperation and establishing red lines for AI development.
- IDAIS-Venice (2024): The third dialogue in Venice, Italy, emphasizing AI safety as a global public good and the urgency of global cooperation.
An upcoming fourth dialogue is planned for the summer of 2025, with details forthcoming.
Technological Platform and Innovation
While SAIF does not develop proprietary technologies, it distinguishes itself through:
- Facilitation of International Dialogues: Creating platforms for global stakeholders to discuss and address AI safety concerns.
- Research on AI Governance: Producing studies and reports that inform policy and best practices in AI safety.
- Educational Initiatives: Conducting workshops and seminars to educate stakeholders on AI risks and mitigation strategies.
Leadership Team
- Fynn Heide: Co-founder of SAIF, instrumental in establishing the organization's mission and strategic direction.
- Conor McGurk: Co-founder of SAIF, playing a key role in program development and international collaborations.
Leadership Changes
In May 2025, SAIF transitioned from a fiscally sponsored project under FAR.AI to an independent non-profit organization. This move was planned from the outset and marks a significant milestone in the organization's growth.
Competitor Profile
Market Insights and Dynamics
The AI safety field has grown considerably as recognition of the need for responsible AI development has spread. Organizations across the sector are working to mitigate risks from advanced AI systems, leading to the establishment of dedicated institutes and forums.
Competitor Analysis
- Center for AI Safety (CAIS): A San Francisco-based nonprofit promoting the safe development and deployment of AI through research, advocacy, and field growth.
- International Association for Safe and Ethical AI (IASEAI): A non-profit organization addressing risks and opportunities in AI, focusing on policy development, research, and community-building.
- Partnership on AI (PAI): An organization involved in initiatives promoting responsible AI use, including developing frameworks for safe AI model deployment and supporting research into AI safety and ethics.
Strategic Collaborations and Partnerships
SAIF collaborates with international organizations and experts to advance its mission. Notably, it worked closely with FAR.AI during its fiscally sponsored phase and continues to engage global stakeholders through its IDAIS program.
Operational Insights
SAIF's strategic considerations include:
- Global Engagement: Prioritizing international collaboration to address AI risks comprehensively.
- Focus on Extreme Risks: Concentrating efforts on mitigating catastrophic AI risks that could have widespread societal impacts.
- Educational Outreach: Enhancing awareness and understanding of AI safety through workshops and dialogues.
Strategic Opportunities and Future Directions
Looking ahead, SAIF aims to:
- Expand the IDAIS Program: Increase the frequency and reach of international dialogues to include a broader range of stakeholders.
- Enhance Research Capabilities: Develop in-depth studies on AI governance and safety to inform policy and practice.
- Strengthen Partnerships: Build alliances with other organizations and institutions to amplify impact and share resources.
Contact Information
For more information about SAIF and its initiatives, please visit its official website.