Senior Software Engineer ⋅ Generalist

  • AI Safety
  • Generative Models
  • Access Control

About Us

At Noema Research, we recognize the transformative potential of generative models, as well as their dual-use nature. Our mission is to genuinely move the needle on challenges posed by generative models. In brief:

  • We believe generative systems have the potential for significant positive impact across various sectors, but they also carry risks that need to be explicitly addressed.
  • Current safety infrastructure for generative deployments is inadequate, reminiscent of the early days of the internet. We're focused on developing robust infrastructure that can reliably address the risks posed by modern generative systems.
  • We believe the challenge is tractable. We're making progress on developing tooling for ensuring the responsible deployment of generative systems.

About the Role

We're seeking a senior generalist software engineer to help develop tooling for managing generative capabilities. Your work will involve:

  1. Designing, developing, and testing tools such as:

    • a minimal platform for connecting cybersecurity enthusiasts (and autonomous systems) to turnkey remote environments for solving procedural "Capture The Flag" challenges. Think of it as Stadia for pentesting, but without the latency. The solution currently builds on: Node, Firebase, and Terraform.
    • a control plane for generative capabilities, including tools for defining and enforcing access policies at inference time. This system aims to provide fine-grained control over a generative model's capabilities. The solution currently builds on: Python, Docker, and OAuth.
    • an internal 20% project we've recently open-sourced, designed to improve developer velocity in AI-assisted codebase-level development workflows. Have a look at it here.
  2. Collaborating across the team:

    • Work with security-focused peers to build tools for managing security capabilities of generative systems and implement security best practices in our development process.
    • Partner with infrastructure-focused colleagues to architect and provision cloud infrastructure for scalable deployments of our tools.
    • Support our nascent research efforts by developing tooling for state-of-the-art interpretability and robustness techniques.
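To give a flavor of the second tool above, a control plane for generative capabilities might, in its most minimal form, check each inference request against an access policy before the model is invoked. The sketch below is purely illustrative (all names are hypothetical; the real system builds on Python, Docker, and OAuth):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of inference-time policy enforcement:
# each caller is granted a set of capabilities, and every request
# is checked against the policy before the model runs.

@dataclass
class Policy:
    # Maps caller identity -> capabilities that caller may exercise.
    granted: dict[str, set[str]] = field(default_factory=dict)

    def allows(self, caller: str, capability: str) -> bool:
        return capability in self.granted.get(caller, set())

def guarded_generate(policy: Policy, caller: str,
                     capability: str, prompt: str) -> str:
    """Refuse the request unless the policy grants the capability."""
    if not policy.allows(caller, capability):
        raise PermissionError(f"{caller!r} lacks capability {capability!r}")
    # Placeholder for the actual model invocation.
    return f"[{capability}] response to: {prompt}"

policy = Policy(granted={"analyst": {"summarize"}})
print(guarded_generate(policy, "analyst", "summarize", "quarterly report"))
```

In practice the caller identity and granted scopes would come from an OAuth token rather than an in-memory dict, but the shape of the decision point is the same.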

The role is full-time, remote-first (Europe), with quarterly team retreats around Europe. We're looking for candidates with a strong background in software engineering, a knack for managing software projects, and a passion for AI safety.

About You

We're looking for candidates who demonstrate:

  1. Craftsmanship in programming: You approach different programming languages and frameworks as tools, each with its own strengths, though you have an enduring passion for boring technology. You have 5+ years of experience using a diverse toolkit across projects and can quickly adapt to new technologies as needed, such as the ones mentioned in the role description.

  2. Enthusiasm for AI-assisted development: You're excited about the evolving role of developers as generative models become more integrated into workflows. You constantly seek ways to improve your own productivity and that of your team using these tools and others like them.

  3. Collaboration and open-source mindset: You're a team player with a track record of making open-source contributions and managing small-to-medium software projects. You're not afraid to take ownership of unglamorous-yet-critical work to ensure team success.

  4. Balanced approach to development: You can both quickly prototype ideas and architect robust, scalable systems. Your project history demonstrates the ability to choose the right approach for each situation.

  5. Proactivity and agency: You take initiative beyond your assigned responsibilities to help the team succeed. You're driven by the broader goal of solving real safety problems in generative deployments and have a history of working across the boundaries of codebases, departments, and paradigms.

Benefits

  • Competitive salary and equity package.
  • Flexible work hours and remote-first culture.
  • Health and wellness stipend.
  • Professional development stipend for conferences and training.
  • Dedicated support for upskilling in AI safety.
  • Quarterly team retreats around Europe.

Application Process

  1. Complete the application form linked below.
  2. Within a few minutes, you'll receive an email containing instructions for tackling a coding challenge and a code review.
  3. After completing both, you'll receive an email with instructions for booking a technical and a culture interview with our team.
  4. We'll get back to you with a decision within a week of the interviews.

We look forward to hearing from you and potentially welcoming you to our team as we work to shape the future of AI safety. If you have any questions, please don't hesitate to reach out. We encourage applications from candidates of all backgrounds and experiences.

Join us in securing generative deployments.
