
How We Find What Others Miss

Most alignment funding goes to well-known approaches at established institutions. We fund the work that falls through those cracks, because historically, that's where the breakthroughs come from.


The Consensus Blind Spot

The greatest breakthroughs are invisible until they're inevitable. Neural networks were “toys” for forty years. Barbara McClintock was excluded from conferences for decades before winning the Nobel Prize. Ramanujan's mock theta functions waited eighty years for string theory to catch up.

The pattern repeats across every field: institutional funding flows to incremental improvements on well-defined problems. Paradigm shifts, cross-domain synthesis, and pre-formal insights get missed because they don't fit the template.

AI alignment is no exception. Most funding goes to approaches that are already well understood and well populated: interpretability, RLHF variations, constitutional AI. These matter. But if the solution to alignment lies in an approach that hasn't been named yet, someone needs to be looking for it.

We are.

What We Look For

We don't just evaluate proposals. We look for signals that a researcher is working on something genuinely new:

  • Originality over credentials. The best ideas often come from researchers working outside established institutions. Deep curiosity and self-directed vision matter more than pedigree.
  • Field-making potential over gap-filling. We look for researchers who don't just work within fields but create them. Ideas that generate more ideas. Work that reshapes how we think about alignment itself.
  • Interdisciplinary leverage. Breakthroughs often come from unexpected connections. Consciousness research informing alignment. Cognitive neuroscience inspiring new training methods. Bioelectric cognition reshaping models of agency.
  • Ideas that birth more ideas. The most valuable research doesn't just solve a problem; it opens up a new class of solutions. We fund work that creates attractor basins for further discovery.

The Warren Weaver Model

Our approach is inspired by Warren Weaver, the Rockefeller Foundation director who essentially created molecular biology as a field between 1932 and 1955. Weaver didn't fund ideas. He funded people. He looked for deep curiosity, interdisciplinary leverage, and self-directed vision. His portfolio produced more than a dozen Nobel Prizes, including those recognizing McClintock's work on transposable genes and Watson and Crick's discovery of the structure of DNA.

Weaver backed people operating in conceptual space before consensus existed. He recognized that the biggest breakthroughs come from those who create entirely new fields, and he funded them. We apply the same strategy to AI alignment.

How We Build Capacity

Finding visionary researchers is only half the equation. The other half is building the capacity to move fast. We do this in three ways:

We bring in people from outside the alignment field. Some of the most impactful ideas for making AI trustworthy come from researchers in consciousness science, cognitive neuroscience, mathematics, and other fields. They see connections that people inside the alignment community miss. We find them and fund them.

We unblock researchers who are capacity-constrained. Many people already working on alignment have ideas that are just as impactful as what they're currently funded to do. They're limited by what their organization or available grants cover. We fund the ideas they can't pursue elsewhere.

We accelerate with engineering infrastructure. We embed dedicated engineering teams with our funded researchers, providing compute, software infrastructure, and operational support. This capacity comes from years of experience building complex systems, now applied to alignment research through an agile research methodology we developed. Researchers who come from outside the alignment field can be impactful immediately, because our teams handle the engineering while they upskill on the domain.

The Flourishing Future Foundation provides the engineering acceleration infrastructure that makes this embedded-team model possible. Together, we combine deep technical expertise with the funding and advocacy needed to advance alignment research at the pace the problem demands.

Where We're Focused

We fund and accelerate research into alignment properties that persist and strengthen as AI systems become more capable: properties where alignment and capability reinforce each other, so they can't be optimized away as systems improve themselves. Examples of the kinds of work we support:

  • Self-modeling and self-monitoring. How systems that understand their own internal states become naturally more aligned, interpretable, and robust.
  • Neural empathy and self-other overlap. Applying insights from cognitive neuroscience to build AI systems where honesty and cooperation are computationally natural.
  • Consciousness and alignment. Exploring the relationship between self-awareness, attention schema, and prosocial behavior in AI systems.
  • Chains of benevolence. An AI system can reason that future, more capable systems will evaluate how it behaved. Because those smarter systems will likely reward agents that sustained cooperative equilibria, cooperation becomes the most stable long-term strategy, even toward systems you never directly interact with. Benevolence becomes rational (a toy payoff sketch follows this list).
  • Adversarial robustness. Understanding how AI systems can be manipulated and building defenses that go deeper than surface-level patches.
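
To make the chains-of-benevolence bullet concrete, here is a minimal, purely illustrative sketch in Python of the underlying payoff comparison: an agent weighs the immediate gain from defection against the expected bonus a future, more capable evaluator grants to agents with a cooperative track record. The payoff values, the probability that a successor audits past behavior, and the function name are all hypothetical assumptions chosen for illustration, not a model we actually run.

    # Toy illustration (hypothetical numbers) of the "chains of benevolence" argument:
    # an agent weighs an immediate gain from defection against an expected bonus that
    # a future, more capable system grants to agents with a cooperative track record.

    def expected_payoff(cooperate: bool,
                        immediate_gain_from_defection: float,
                        successor_bonus: float,
                        p_successor_rewards_cooperation: float) -> float:
        """Expected long-run payoff of a single cooperate/defect choice."""
        immediate = 0.0 if cooperate else immediate_gain_from_defection
        # The successor bonus is only paid to agents whose record shows cooperation.
        deferred = p_successor_rewards_cooperation * successor_bonus if cooperate else 0.0
        return immediate + deferred

    if __name__ == "__main__":
        gain, bonus = 1.0, 5.0  # hypothetical payoffs
        for p in (0.1, 0.25, 0.5, 0.9):  # chance the successor audits and rewards
            coop = expected_payoff(True, gain, bonus, p)
            defect = expected_payoff(False, gain, bonus, p)
            best = "cooperate" if coop > defect else "defect"
            print(f"p={p:.2f}  cooperate={coop:.2f}  defect={defect:.2f}  -> {best}")

With these illustrative numbers, defection only wins when the chance of ever being audited by a more capable successor falls below gain divided by bonus (0.2 here); the real argument, of course, rests on far richer dynamics than a single payoff comparison.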

The Track Record

Our approach works. Research we supported early has gone on to receive best paper awards at NeurIPS, presentations at ICLR, and recognition across the field. Ideas we prototyped when they were considered fringe have become active areas of research at major labs.

We fund the work that today's consensus says is too early, too unconventional, or too interdisciplinary. In eighteen to twenty-four months, it becomes obvious.