
AI Researcher (Aithos Foundation)

  • Remote / Hybrid
  • Amsterdam
  • €80,000 - €120,000

Job description

About Aithos

The values expressed by AI, and who controls them, will have an enormous impact on our future lives and societies.

Aithos is an independent AI research foundation, established together with Ascending AI, that is taking a leading role in the emerging debate on AI values. Our mission is to protect human autonomy and pluralistic societies by making AI system values transparent and steerable at the individual level. We also investigate the coordination problems that emerge when increasingly powerful AI systems and agents with different values interact.

We produce research papers for scientific discussion, accessible materials for public debate, and tools to evaluate and steer AI value systems.

About the Role

We're looking for an AI researcher to join our small team. You'll work alongside interdisciplinary researchers to push the boundaries of AI alignment research, challenging traditional paradigms in AI and decision theory.

This role involves conducting empirical research on how AI systems make value-laden decisions and developing evaluation frameworks and steering methodologies for moral reasoning in AI. You will help publish papers and blog articles on our research and will have the opportunity to build a public profile as an AI safety expert. You'll have significant autonomy to shape research directions while contributing to work that has real-world impact.

Job requirements

You Will

  • Design and execute experiments to evaluate how AI systems navigate complex ethical scenarios and competing values

  • Develop technical infrastructure for testing AI value alignment at scale

  • Build tools to enable the steering of AI systems’ values

  • Contribute to academic publications, research reports, and public-facing content that makes our findings accessible

  • Collaborate with our interdisciplinary team to integrate insights from philosophy, social sciences, and computer science

  • Help position our work within the broader AI safety and alignment community, identifying where our approach diverges from or complements mainstream thinking

  • Write clearly and compellingly for diverse audiences, in formats ranging from academic papers to blog posts

What We're Looking For

We don't need decades of experience. We need someone motivated to do impactful work who's comfortable with ambiguity and philosophical inquiry. You might be:

  • A motivated Master's student or recent graduate eager to contribute to meaningful AI safety research

  • A PhD researcher, postdoc, or industry expert who wants to have a larger impact on AI safety than traditional academia or the tech industry allows

  • A veteran of AI research who knows the field inside and out, questions the status quo, and is looking to change it

Required

  • Experience in AI/ML research with publications (conference papers, preprints, or thesis work)

  • Strong coding skills in Python (or an equivalent language) and the ability to design and implement experiments

  • Excellent writing ability across formats: academic papers, research reports, and accessible explanations for broader audiences

  • Genuine motivation to advance AI safety or to do good through impactful work

  • Openness to critical thought and philosophical inquiry, especially around ethics

Bonus

  • Familiarity with AI safety or AI ethics research

  • Experience with LLMs and implementing evaluations

  • Experience making technical work accessible to non-technical audiences

  • Experience with non-profits and grant applications

  • Experience presenting research publicly through various channels (blogs, talks, videos, social media) and representing the organization to press and other stakeholders

Why Join Aithos?

Most alignment research focuses on technical safety but leaves critical questions out of scope: whose values should AI systems follow? What happens when those values conflict? How should alignment work when AI systems interact in complex ecosystems? Who has the authority to decide? At Aithos, we pursue alignment that is compatible with this plural reality, rather than resting on assumptions that a value consensus exists or that moral uncertainty will be resolved. You'll work on genuinely novel problems with a team that values intellectual honesty and diverse perspectives, without the constraints of traditional academia or industry.

We're based in Amsterdam with remote flexibility. As a small foundation started in 2025 and aiming for significant growth in the coming years, we offer autonomy, the chance to shape research directions, and the opportunity to contribute, alongside a team of like-minded researchers, to work that challenges how AI systems represent human values.
