Funding

We encourage fellow philanthropists to consider co-funding with us.

Active Opportunities

01

National Security

We fund research that advances national security by keeping AI safe. This includes novel approaches to deterrence, the security implications of advanced technologies, and strategies for democratic governance. Safe AI is a competitive advantage.

02

Antisemitism

AI models exhibit systematic antisemitic bias even after safety training. We fund research to eliminate antisemitism at the core of AI models, including persona vector immunization and related techniques that make bias resistance intrinsic rather than bolted on.

03

Artificial Intelligence and Consciousness

As AI systems grow smarter, questions of consciousness and moral status become urgent. We seek proposals examining self-modeling, self-reference, and how to keep AI systems aligned as they become more sophisticated.

04

Alignment and Safety

We fund research that keeps AI safe as it grows smarter. This includes techniques that ensure safety remains built-in through scaling, fine-tuning, and self-improvement.

Awarded Grants

A record of research we have funded.

Grant database coming soon

Our Diligence Process

Competitive proposals will demonstrate:

01

Significance

A clear articulation of why this work matters and how it advances understanding or practice in the field

02

Rigor

A sound methodology grounded in appropriate disciplinary standards

03

Feasibility

A realistic plan for execution within the proposed timeline and budget

04

Impact

A compelling theory of change showing how the work will inform policy, practice, or future research

05

Team

Demonstrated expertise and capacity to execute the proposed work, including reasonable assessments of compute costs

We welcome proposals from independent researchers, academic institutions, research organizations, governments, and policy institutes.