Caroline Bishop
Apr 08, 2026 17:45
OpenAI announces a new fellowship program for external researchers focused on AI safety and alignment, running September 2026 through February 2027.
OpenAI is opening its doors to outside researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline.
The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.
What OpenAI Wants
The company outlined priority research areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. It is specifically looking for work that is "empirically grounded, technically rigorous, and relevant to the broader research community."
Fellows won't get internal system access, a notable limitation, but will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by the program's end, whether that's a research paper, benchmark, or dataset.
Who Should Apply
OpenAI is casting a wide net on backgrounds. Computer science is the obvious fit, but the company is also welcoming candidates from social science, cybersecurity, privacy, and human-computer interaction. It explicitly states that it will "prioritize research ability, technical judgment, and execution over specific credentials."
Letters of reference are required. Successful applicants will be notified by July 25.
The Bigger Picture
This fellowship arrives as AI safety concerns have moved from academic debate to mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development, a tension that led to high-profile departures from its safety team.
The program represents an attempt to cultivate external safety research talent while potentially deflecting some of that criticism. Whether it signals a genuine shift in priorities or serves primarily as an optics play remains to be seen.
For researchers interested in AI safety work with access to OpenAI resources and mentorship, applications close May 3 via the program's official page. Questions can be directed to openaifellows@constellation.org.
Image source: Shutterstock

