Active Opportunities
National Security
We fund research that advances national security by keeping AI safe. This includes novel approaches to deterrence, the security implications of advanced technologies, and strategies for democratic governance. Safe AI is a competitive advantage.
Antisemitism
AI models exhibit systematic antisemitic bias even after safety training. We fund research to eliminate antisemitism at the core of AI models, including persona vector immunization and related techniques that make bias resistance built-in rather than bolted on.
Artificial Intelligence and Consciousness
As AI systems grow more capable, questions of consciousness and moral status become urgent. We seek proposals examining self-modeling, self-reference, and how to keep AI systems aligned as they become more sophisticated.
Alignment and Safety
We fund research that keeps AI safe as it grows smarter. This includes techniques that ensure safety stays built-in through scaling, fine-tuning, and self-improvement.
Awarded Grants
A record of research we have funded.
Grant database coming soon
Our Diligence Process
Competitive proposals will demonstrate:
Significance
A clear articulation of why this work matters and how it advances understanding or practice in the field
Rigor
A sound methodology grounded in appropriate disciplinary standards
Feasibility
A realistic plan for execution within the proposed timeline and budget
Impact
A compelling theory of change showing how the work will inform policy, practice, or future research
Team
Demonstrated expertise and capacity to execute the proposed work, including a reasonable assessment of compute costs
We welcome proposals from independent researchers, academic institutions, research organizations, governments, and policy institutes.