Staff AI Security Engineer
Quick Summary
Note: We offer even more benefits than those listed here; your recruiter will provide more in-depth information as you continue in the interview process. Benefits are subject to individual plan.
At Spring Health, we’re on a mission to revolutionize mental healthcare by removing every barrier that prevents people from getting the help they need, when they need it. Our clinically validated technology, Precision Mental Healthcare, empowers us to deliver the right care at the right time—whether it’s therapy, coaching, medication, or beyond—tailored to each individual’s needs.
We proudly partner with over 450 companies, from startups to multinational Fortune 500 corporations, as a leading provider of mental health services, providing care for 10 million people. Our clients include brands you know and use, like Microsoft, Target, and Delta Air Lines, all of whom trust us to deliver best-in-class outcomes for their employees globally. With our innovative platform, we’ve been able to generate a net positive ROI for employers, and we are the only company in our category to earn external validation of net savings for customers.
We have raised capital from prominent investors including Generation Investment, Kinnevik, Tiger Global, Northzone, RRE Ventures, and many more. Thanks to their partnership and our latest Series E Funding, our current valuation has reached $3.3 billion. We’re just getting started—join us on our journey to make mental healthcare accessible to everyone, everywhere.
We are actively seeking a Staff AI Security Engineer to join our team. Reporting to the CISO, you will define and evolve our AI security strategy to protect highly sensitive mental health data across both product and corporate environments.
Please note that this is a hybrid role based in San Francisco, with an expectation to be in the office 2–3 days per week at our 2 Embarcadero Ctr. location. Candidates must be based in the San Francisco metro area or able to relocate independently within 90 days of their start date. Occasional travel will be required for team on-sites.
Responsibilities
- Define and evolve our AI security strategy to protect highly sensitive mental health data across both product and corporate environments
- Lead secure design and threat modeling for AI systems, including LLMs, agentic workflows, and retrieval pipelines
- Identify and mitigate risks such as prompt injection, data exfiltration, model abuse, and privilege escalation
- Build scalable AI security guardrails and tooling that enable safe experimentation across engineering and business teams
- Establish AI-specific governance frameworks covering identity, access control, auditability, and observability
- Take ownership of and lead our AI Red Team to proactively identify vulnerabilities
- Design and implement AI observability pipelines to detect anomalous model behavior and policy violations in near real time
- Develop and operationalize AI incident response playbooks to ensure rapid containment of security events
- Partner with product and engineering teams to enable responsible AI innovation in a hyper-growth environment
- Champion a culture of secure AI development by mentoring engineers and defining high standards for the organization
Success Metrics
- 80% of new AI product features are threat modeled prior to GA
- 80% of AI features are tested by the AI Red Team or equivalent adversarial testing before GA
- Achieve >=70% coverage of production AI features with automated LLM vulnerability testing
- Grow participation in the AI Red Team by 10% YoY
- Develop AI incident response playbooks and conduct at least one AI-focused tabletop or live simulation per year
Qualifications
- 10+ years of experience in a software engineering discipline, with at least 5 years focused on security
- Hands-on experience securing AI/ML systems, including practical AI red teaming against LLMs, agentic workflows, or RAG systems
- Experience developing or implementing automated LLM vulnerability testing for prompt injection and data exfiltration
- Strong foundation in application security principles, threat modeling, secure design, and identity and access control
- Demonstrated ability to build tools and automation with a developer mindset
- Experience influencing senior engineers and cross-functional stakeholders across product, legal, and compliance
- Proven track record of mentoring engineers and cultivating a strong security culture across an organization
- Strong working knowledge of modern developer tooling, CI/CD pipelines, and git-based collaboration
- Ability to operate in ambiguity and translate emerging AI risks into pragmatic, scalable security controls
- Deep personal ownership and a passion for advancing AI security through continuous learning
The target base salary range for this position is $239,200 - $270,000 and is part of a competitive total rewards package that includes stock options and benefits. Individual pay may vary from the target range and is determined by a number of factors, including experience, location, internal pay equity, and other relevant business considerations. We review all employee pay and compensation programs at least annually, using the Radford Global Compensation Database, to ensure competitive and fair pay.