USD 171,000-230,534/yr

Principal Engineer, AI Security

Cambridge · Lead
Data Science · Security · Other · Security Engineer · Principal Engineer · Cybersecurity

Quick Summary


As a Principal Security Engineer focused on AI Security, you will define and drive the technical strategy for securing how AI is used across Lila's enterprise. You will operate as a senior individual contributor, partnering with IT and business teams to ensure safe and compliant adoption of AI tools and platforms.

While Lila builds AI-powered systems, this role is primarily focused on securing the use of third-party and internally deployed AI tools across the enterprise — ensuring sensitive data, intellectual property, and scientific workflows are protected as AI becomes deeply embedded in how work gets done.

Key Responsibilities

  • Enterprise AI Security Strategy — Define and implement security controls and guardrails for the use of AI tools (e.g., LLM APIs, SaaS AI platforms, and internal AI services) across the organization.
  • AI Gateway & Agentic Gateway Security — Design and implement AI gateway controls to manage and monitor access to external and internal AI systems. Secure agentic workflows by enforcing identity, authorization, tool-use constraints, and policy controls for autonomous or semi-autonomous agents.
  • AI Red Teaming & Adversarial Testing — Conduct red teaming and adversarial testing focused on enterprise AI usage, including prompt injection, data exfiltration, jailbreaks, and abuse of connected tools and plugins.
  • Data Protection for AI Usage — Develop and enforce controls to prevent sensitive data leakage through AI systems, including input/output filtering, data classification, tokenization, and secure handling of prompts, embeddings, and outputs.
  • Multi-Layer AI Security (Network, Endpoint, Data) — Integrate AI security into existing enterprise security layers: network visibility and control over AI service access, API traffic inspection, and zero trust enforcement; endpoint security for developer machines, research environments, browsers, and plugins; data layer controls ensuring proper handling of sensitive data when interacting with AI systems.
  • AI Threat Modeling (Enterprise Context) — Develop threat models focused on enterprise AI usage, including risks such as data leakage, prompt injection, model misuse, supply chain risks from AI vendors, and unauthorized agent actions.
  • Vendor & Platform Security — Assess and guide secure adoption of third-party AI vendors and platforms, including evaluating data handling practices, model behavior, and integration risks.
  • Incident Response for AI Usage — Define and support response approaches for AI-related incidents, such as sensitive data exposure, policy violations, or misuse of AI tools.
  • Cross-Functional Technical Leadership — Partner with Legal, Compliance, IT, and Engineering to align AI usage with regulatory requirements, data governance policies, and responsible AI practices.
  • Security Enablement — Contribute to internal guidance and education on safe AI usage, including secure prompting, data handling, and appropriate use of AI tools.
  • Security Tooling & Implementation — Evaluate and implement tooling for AI security, including AI gateways, DLP integrations, monitoring solutions, and policy enforcement mechanisms.

Qualifications
  • 8+ years of experience in information security, with strong expertise in enterprise, cloud, or application security.
  • Hands-on experience designing and implementing security controls in enterprise environments.
  • Familiarity with AI/ML systems and how modern AI tools (LLMs, copilots, APIs) are used in practice.
  • Experience with cloud platforms (AWS/GCP), SaaS security, and zero trust architectures.
  • Experience with data protection technologies (e.g., DLP, data classification, access controls).
  • Practical experience with threat modeling, red teaming, or adversarial testing.
  • Strong communication and influence skills across technical and non-technical stakeholders.

Nice to Have

  • Experience securing enterprise use of LLMs, copilots, or generative AI platforms.
  • Familiarity with AI gateways, prompt filtering, or model interaction controls.
  • Experience evaluating or securing third-party AI vendors and APIs.
  • Background in regulated environments (biotech, healthcare, defense, or government).
  • Experience with browser security, endpoint controls, or SaaS security platforms.
  • Knowledge of privacy-enhancing technologies or confidential computing.
  • Contributions to AI/ML security research or community.

What We Offer


We offer competitive base compensation with bonus potential and generous early-stage equity. Your final offer will reflect your background, expertise, and expected impact.

About Lila Sciences

Lila Sciences is building Scientific Superintelligence™ to solve humankind's greatest challenges. We believe science is the most inspiring frontier for AI. Rather than hard-coding expert knowledge into tools, Lila builds systems that can learn for themselves.

Lila combines advanced AI models with proprietary AI Science Factory™ instruments into an operating system for science that executes the entire scientific method autonomously, accelerating discovery at unprecedented speed, scale, and impact across medicine, materials, and energy. Learn more at www.lila.ai.

Guided by our core values of truth, trust, curiosity, grit, and velocity, we move with startup speed while tackling problems of historic importance. If this sounds like an environment you'd love to work in, even if you don't meet every qualification listed above, we encourage you to apply.

Lila Sciences is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or veteran status.

Information you provide during your application process will be handled in accordance with our Candidate Privacy Policy.

Lila Sciences does not accept unsolicited resumes from any source other than candidates. Recruitment and staffing agencies may not submit unsolicited resumes to Lila Sciences or its employees unless contacted directly by Lila Sciences' internal Talent Acquisition team. Any resume submitted by an agency in the absence of a signed agreement will automatically become the property of Lila Sciences, and Lila Sciences will not owe any referral or other fees with respect thereto.

Location & Eligibility

Where is the job: Location terms not specified
Who can apply: Same as job location
Listed under: Worldwide

Listing Details

Posted: April 16, 2026
First seen: April 16, 2026
Last seen: May 4, 2026

Posting Health

Days active: 18
Repost count: 0
Trust level: 47%
Scored at: May 5, 2026

Signal breakdown

freshness · source trust · content trust · employer trust
