OpenAI Introduces AI Safety Fellowship with $15,000 Compute Stipend
OpenAI announces an AI safety fellowship offering roughly $15,000 in monthly compute, aiming to attract external researchers to work on the safety and alignment of advanced AI systems.
OpenAI Launches Safety Fellowship Program
OpenAI has announced a new AI safety fellowship program offering participants a $3,850 weekly stipend and approximately $15,000 in monthly compute resources, running from September 14, 2026 through February 5, 2027. The pilot initiative aims to attract external researchers, engineers, and practitioners to conduct rigorous research on the safety and alignment of advanced AI systems, closely mirroring competitor Anthropic's established fellowship structure.
Program Details and Compensation
The OpenAI Safety Fellowship represents a significant investment in external safety research talent. Participants will receive a weekly stipend of $3,850, which annualizes to more than $200,000 (excluding holidays), with total compensation exceeding $111,000 over the fellowship's roughly five-month duration.
Beyond monetary compensation, the program's centerpiece is access to substantial computational resources. Each fellow will receive approximately $15,000 worth of AI compute per month, a critical resource that reflects OpenAI's recognition that compute has become "a key barometer of cachet for leading tech and AI companies." This compute allocation enables fellows to conduct meaningful research without facing the typical infrastructure barriers that constrain independent safety researchers.
The fellowship explicitly seeks applicants focused on safety questions relevant to existing and emerging AI systems. OpenAI has identified several priority research areas for fellows to concentrate on:
- Safety evaluation
- Ethics
- Robustness
- Scalable mitigations
- Privacy-preserving safety methods
- Agentic oversight
- High-severity misuse domains
Strategic Timing and Competitive Context
The announcement arrives during a period of heightened scrutiny around AI safety commitments within the industry. Hours before the fellowship's public unveiling, reports questioned CEO Sam Altman's personal dedication to AI safety, creating a notable backdrop for OpenAI's initiative.
The timing also reflects broader industry competition for safety talent. OpenAI's fellowship structure closely mirrors Anthropic's established "Fellows Program for AI safety research," which has two new cohorts scheduled for May and July 2026. The compensation packages are virtually identical—both programs offer $3,850 weekly stipends and $15,000 monthly compute allocations—suggesting these figures have become industry standards for attracting top-tier safety researchers.
Competitive Landscape
The emergence of parallel fellowship programs from OpenAI and Anthropic reflects a maturing recognition that external research collaboration strengthens AI safety ecosystems. Anthropic's earlier program establishment gave it a first-mover advantage in recruiting safety-focused researchers, while OpenAI's announcement demonstrates that multiple organizations now recognize the value of formalizing partnerships with independent researchers rather than relying exclusively on internal teams.
The compute allocation component is particularly significant, as it addresses a structural barrier in AI safety research: computational resources remain concentrated among a handful of frontier labs. By providing $15,000 monthly compute access, these programs enable external researchers to conduct empirical work on real frontier models—research that would otherwise require institutional affiliation or substantial independent funding.
Implications for AI Safety Research
The fellowship programs signal that AI safety has transitioned from a peripheral concern to a central focus area commanding substantial resources. The five-month duration and cohort-based structure suggest these are pilot initiatives designed to test scalability and impact before potential expansion.
For the broader safety research community, these programs represent both opportunity and potential concern. Access to frontier compute and competitive compensation could accelerate safety research progress and diversify the talent pool contributing to alignment work. However, the limited duration and selective cohort size mean these programs will reach only a small fraction of researchers interested in safety work.
The programs also implicitly acknowledge that safety research benefits from independence—external researchers bring different perspectives and institutional incentives than internal teams. By structuring these as formal fellowships rather than hiring initiatives, OpenAI and Anthropic preserve researcher autonomy while gaining access to their output.
The OpenAI Safety Fellowship application process will determine whether the program attracts the caliber of researchers necessary to produce meaningful safety advances, and whether the five-month timeline permits substantial research progress on complex alignment questions.