Greg Brockman
Comprehensive biographical profile of OpenAI co-founder Greg Brockman covering his career, research contributions, AI safety views, and controversies, including the 2023 board crisis and the ongoing Musk litigation. The profile is reasonably balanced, though its footnotes draw on an internal research compilation rather than disclosed primary-source citations.
Quick Assessment
| Attribute | Detail |
|---|---|
| Born | November 29, 1987; Thompson, North Dakota |
| Role | Co-founder and President, OpenAI |
| Education | Harvard University (mathematics); MIT (computer science); dropped out 2010 |
| Previous role | First CTO of Stripe (2013–2015) |
| Known for | Co-founding OpenAI; leading AI infrastructure buildout; "builder-in-chief" |
| AI Safety relevance | Founding mission of safe AGI; involved in alignment and governance discussions |
Key Links
| Source | Link |
|---|---|
| Wikipedia | en.wikipedia.org |
| Wikidata | wikidata.org |
| Twitter/X | twitter.com |
Overview
Greg Brockman is a software engineer, entrepreneur, and co-founder of OpenAI, where he currently serves as president and leads the organization's Scaling group, which focuses on the technical and operational infrastructure underlying OpenAI's AI systems. Born in Thompson, North Dakota in 1987, Brockman was a science prodigy who won a silver medal at the 2006 International Chemistry Olympiad and placed sixth in the 2007 Intel Science Talent Search before going on to study at Harvard and MIT.1 He dropped out of MIT in 2010 to join Stripe as one of its earliest employees, eventually becoming the company's first CTO.2
In December 2015, Brockman co-founded OpenAI alongside Sam Altman, Ilya Sutskever, Elon Musk, and others, with the stated mission of developing artificial general intelligence (AGI) that benefits all of humanity rather than serving narrow corporate interests.3 He played a central role in the organization's early operations—leading recruiting, building internal systems and culture, and acting as a de facto operational leader during periods when Altman remained at Y Combinator. Over the following decade, Brockman contributed to or oversaw the development of key OpenAI systems including OpenAI Gym, Codex, DALL-E, and GPT-4, which he unveiled in a live demonstration on March 14, 2023.4
Brockman took a sabbatical from OpenAI in August 2024—the first extended break he had taken since co-founding the organization—before returning in November 2024 to lead OpenAI's newly formed Scaling group, which is responsible for the company's ambitious infrastructure expansion plans.5 He remains one of the most prominent figures in AI development, recognized on TIME magazine's TIME100 AI list in 2023 and described by observers as a central "builder" figure in the effort to scale AI systems to unprecedented capacity.
Early Life and Education
Brockman grew up in Thompson, North Dakota, where he attended Red River High School. He excelled across multiple disciplines, captaining a state-champion Science Bowl team, participating in drumline, the Latin club, and a math club that volunteered at a local middle school, and writing for the local paper's Teen Page.1 His academic achievements during this period were exceptional: he won a silver medal at the 2006 International Chemistry Olympiad, placed sixth in the 2007 Intel Science Talent Search (as the first North Dakota finalist in that competition since 1973) for a paper titled "Asymptotic Behavior of Certain Ducci Sequences" (later published in Fibonacci Quarterly), was a two-time semifinalist in the Physics Olympiad, was invited to the Math Olympiad Summer Program, and received a 2006 Prudential Spirit of Community Award Distinguished Finalist honor.1 He also attended Canada/USA Mathcamp in 2003, 2005, and 2007.1
During a gap year after high school, Brockman became seriously interested in programming after reading Alan Turing's essay "Computing Machinery and Intelligence," which he has cited as a formative influence.6 He enrolled at Harvard University in 2008 intending to double-major in mathematics and computer science, where he was involved in the Harvard Computer Society. After one year, he transferred to MIT, where he worked on projects including XVM and scripts.mit.edu and pursued research into programming languages, including static buffer overrun detection.1 He left MIT in 2010 without completing his degree to join Stripe.
Career
Stripe (2010–2015)
Brockman joined Stripe in 2010 as one of its earliest employees—its fourth or fifth, depending on the account—after being connected to the Collison brothers through his MIT network.2 He served in an engineering role before becoming Stripe's first CTO in 2013, a position he held until leaving the company in May 2015. During his tenure, Stripe grew from roughly five to over 200 employees. At the time of his departure, Stripe's valuation exceeded $9 billion.2
Founding OpenAI (2015)
After leaving Stripe, Brockman was introduced to Sam Altman through Stripe co-founder Patrick Collison.7 Altman and Elon Musk proposed creating a nonprofit AI research laboratory dedicated to developing AGI safely and distributing its benefits broadly—motivated in part by concern about corporate consolidation in AI, including Google's acquisition of DeepMind for approximately $500 million in 2014.3 According to accounts of this period, Brockman committed to the project immediately and took on the task of recruiting top AI researchers, including Ilya Sutskever, whom he helped persuade to leave a high-paying position elsewhere.3
OpenAI was formally announced in December 2015 with eleven co-founders: Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba.3 Initial funding commitments totaled $1 billion from Musk, Altman, Peter Thiel, and others.3 In the early period, Brockman operated largely as a de facto CEO while Altman remained at Y Combinator, building systems and culture for the research team from his living room.3
OpenAI (2015–present)
Brockman served as OpenAI's first CTO before becoming its president. In the organization's early years, he led or contributed to projects including OpenAI Gym (a reinforcement learning toolkit), OpenAI Five (a Dota 2-playing AI system), and the development of GPT-2, whose release in February 2019 was staged incrementally due to concerns about potential misuse.4 He is listed as a co-author on research papers introducing Codex (a code-generating system based on GPT), the Whisper speech recognition system, the OpenAI o1 system card, and other work.8
Brockman was removed from OpenAI's board as chairman during the November 2023 events surrounding Sam Altman's brief firing. He resigned as president at the same time and briefly joined Microsoft alongside Altman before returning to OpenAI following Altman's reinstatement.9 He took a sabbatical beginning in August 2024, describing it as his first opportunity to rest since co-founding the organization. He returned in November 2024 to lead a new "Scaling" group focused on chips, data centers, software, and operational challenges associated with rapidly expanding AI infrastructure.5
Research Contributions
Brockman's early academic research, conducted during high school and the years immediately following, focused on combinatorics and mathematical sequences. His 2007 Intel Science Talent Search paper on the asymptotic behavior of Ducci sequences was published in Fibonacci Quarterly. He later co-authored work on universal cycles for labeled graphs (published in the Electronic Journal of Combinatorics, 2010) and on omnisequences and coupon collection problems (2011).8
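For readers unfamiliar with the object of that early paper: a Ducci sequence iterates a tuple of integers by replacing it with the absolute differences of cyclically adjacent entries; for 4-tuples of integers, iteration classically reaches the all-zero tuple. A minimal sketch (the helper names are invented for illustration, not taken from the paper):

```python
def ducci_step(t):
    """One Ducci iteration: absolute differences of cyclically adjacent entries."""
    n = len(t)
    return tuple(abs(t[i] - t[(i + 1) % n]) for i in range(n))

def ducci_orbit(t):
    """Iterate until the all-zero tuple or a repeated tuple; return the trajectory."""
    seen, orbit = set(), [t]
    while t not in seen and any(t):
        seen.add(t)
        t = ducci_step(t)
        orbit.append(t)
    return orbit

# (0,1,2,3) -> (1,1,1,3) -> (0,0,2,2) -> (0,2,0,2) -> (2,2,2,2) -> (0,0,0,0)
orbit = ducci_orbit((0, 1, 2, 3))
```

Brockman's paper concerned the asymptotic behavior of such sequences; the sketch above only shows the basic iteration being analyzed.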
At OpenAI, Brockman has been listed as a co-author on a number of significant research publications, including:
- OpenAI Gym (2016) — a toolkit for developing and comparing reinforcement learning algorithms
- Evaluating Large Language Models Trained on Code (2021) — introducing Codex, a GPT-based system fine-tuned on GitHub code for Python code generation
- Robust Speech Recognition via Large-Scale Weak Supervision (2022/2023) — introducing the Whisper model, presented at ICML 2023
- OpenAI o1 System Card (2024) — documenting safety evaluations of the o1 model
- Systems and Algorithms for Convolutional Multi-Hybrid Language Models at Scale (2025) — focusing on scalable language model architectures.8
His role in these publications has generally been as a contributor to the broader research enterprise rather than as a primary technical lead on specific modeling innovations.
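OpenAI Gym, listed above, standardized the reinforcement learning agent–environment interface around two calls, `reset()` and `step(action)`. A self-contained sketch of that interface with a toy environment (the counter environment and its parameters are invented here for illustration; no `gym` dependency):

```python
class ToyCounterEnv:
    """Toy environment in the Gym style: reach a target count within a step budget."""

    def __init__(self, target=5, horizon=20):
        self.target, self.horizon = target, horizon

    def reset(self):
        """Begin an episode and return the initial observation."""
        self.count, self.t = 0, 0
        return self.count

    def step(self, action):
        """Apply an action (+1 or -1); return (observation, reward, done, info)."""
        self.count += action
        self.t += 1
        done = self.count == self.target or self.t >= self.horizon
        reward = 1.0 if self.count == self.target else 0.0
        return self.count, reward, done, {}

# The canonical Gym control loop: reset, then step until the episode ends.
env = ToyCounterEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done, info = env.step(+1)  # trivial "always increment" policy
    total += reward
```

Gym's contribution was exactly this uniform loop: any algorithm written against `reset`/`step` can be benchmarked across environments without modification.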
Views on AI and AGI
Brockman has expressed consistent beliefs about the trajectory of AI development and OpenAI's role within it. He has described AGI—which OpenAI defines as an autonomous system capable of outperforming humans at most economically valuable work—as something that will fundamentally transform civilization if realized, and views pursuing it as a continuous process rather than a discrete event.10
A central technical belief he has expressed is confidence in scaling laws: the empirical observation that larger models, more training data, and greater compute tend to yield predictable improvements in AI performance. In recorded interviews, he has stated that there is no sign of a plateau in these trends, and that execution—building the infrastructure needed to continue scaling—is the primary challenge facing OpenAI.10
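The scaling-law intuition above is typically expressed as a power law: held-out loss declines smoothly and predictably as compute (or parameters, or data) grows. A minimal illustration with hypothetical constants (the coefficient, exponent, and irreducible term below are made up for demonstration, not fitted values from any OpenAI paper):

```python
# Hypothetical power-law scaling curve: loss(C) = A * C**(-B) + IRREDUCIBLE.
# All three constants are illustrative only.
A, B, IRREDUCIBLE = 10.0, 0.05, 1.0

def predicted_loss(compute):
    """Predicted held-out loss as a smooth power law in training compute."""
    return A * compute ** (-B) + IRREDUCIBLE

# Each 10x increase in compute shaves a predictable, diminishing slice off the loss,
# approaching (but never reaching) the irreducible floor.
losses = [predicted_loss(10 ** k) for k in range(3, 9)]
```

The practical force of such curves, as Brockman has described it, is that they let builders forecast returns on infrastructure investment before training runs complete.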
On AI development strategy, Brockman has advocated for iterative, customer-facing deployment as a safer alternative to developing systems privately and releasing them in a single large step. He has pointed to the deployment of systems like ChatGPT as enabling real-world testing and feedback loops that improve both safety and capability.11 He has also spoken about the importance of building AI systems that are auditable and whose access to organizational infrastructure can be monitored.10
Brockman has expressed support for some form of international governance for superintelligent AI. A 2023 statement co-authored by OpenAI leadership, including Brockman, proposed coordination mechanisms for superintelligence analogous to the International Atomic Energy Agency, including the possibility of inspections and audits of AI development efforts.11
AI Safety Engagement
Brockman has publicly engaged with questions of AI safety and existential risk, though the depth and consistency of this engagement have been questioned by critics. He participated in a 2022 panel on AI safety alongside Elon Musk, Max Tegmark, and Benjamin Netanyahu, where he stated that his motivation for working on AI is ensuring that the technology is beneficial rather than harmful to humanity.11 He has referenced Eliezer Yudkowsky's work and science fiction as part of how OpenAI's leadership has thought about AI risk.11
On the technical side, Brockman has described OpenAI's post-training process as a form of alignment: starting from base models trained on broad human-generated text, then refining model behavior toward specified values through a model specification document and related techniques.11
When superalignment co-leader Jan Leike departed OpenAI in 2024 criticizing the organization's safety culture as having taken a back seat to product development, Brockman and Altman responded by emphasizing what they characterized as the harmony between safety and capabilities research. Critics noted that the response was vague and that the superalignment team had been disbanded.11
Political Activity
Brockman and his wife have made political contributions in recent years, and he has publicly supported U.S. AI competitiveness. In early 2025, he posted on X thanking the Trump administration for engaging the AI community and supporting U.S. AI leadership.12 Reports indicate that Brockman is among those co-leading a super-PAC network called "Leading the Future," alongside investors affiliated with Andreessen Horowitz, intended to fund candidates in the 2026 midterm elections who support industry-friendly AI policies.12 He has cited personal motivations including the role of AI in healthcare, referencing his wife's experiences.12
A separate account describes a $25 million donation attributed to Brockman and his wife to causes associated with the Trump political movement, which an OpenAI executive reportedly cited internally to counter accusations that OpenAI was ideologically "woke."13
Criticisms and Controversies
2023 Board Events
During the November 2023 events in which the OpenAI board briefly fired Sam Altman, Brockman learned via a Google Meet call that he had been removed as board chairman, a decision made without his participation.9 He subsequently resigned as president and joined Microsoft alongside Altman; both returned to OpenAI after Altman was reinstated. Approximately 95% of OpenAI employees signed a petition requesting Altman's reinstatement, according to accounts of the period.9 Brockman later described the five-day period on a podcast as instilling a sense of mortality and, ultimately, as a rejuvenating experience for the organization.9
Reports from this period also described allegations, attributed to Ilya Sutskever, of bullying by Brockman, and suggestions that his close relationship with Altman made it difficult for other executives, including Mira Murati, to operate effectively.14 Some employee groups reportedly requested that leadership take steps to address his behavior.14
Elon Musk Lawsuit
Personal diary entries and notes written by Brockman in 2017 were unsealed in the context of Elon Musk's federal lawsuit against OpenAI, Altman, Brockman, Microsoft, and Satya Nadella (set for trial in April 2026 in Oakland). The documents, as characterized by reporting on the lawsuit, include entries questioning Musk's leadership and contemplating a transition from nonprofit to for-profit structure.15 One entry reportedly states that if OpenAI committed to nonprofit status and then moved to a B-corp structure three months later, that would constitute a lie.15 Another note written after a November 2017 meeting with Musk reportedly characterized converting to for-profit without Musk's involvement in particular terms.15 Musk's lawsuit alleges that OpenAI's transition to a for-profit entity effectively laundered his charitable contributions—totaling more than $38 million—into a company that reached a valuation of approximately $500 billion.15
OpenAI and its leadership have disputed Musk's characterization of events, and the litigation remains ongoing.
Safety Culture Concerns
The 2024 departure of Jan Leike, who had co-led OpenAI's superalignment effort, brought broader criticism of OpenAI's safety culture that indirectly implicated Brockman as part of the leadership team. Leike's public statement characterized OpenAI's safety culture as having been deprioritized relative to product development. Brockman and Altman's responses were criticized by observers as insufficiently specific.11 AI pioneer Geoffrey Hinton separately described a roughly 50% probability of AI systems posing existential risks within the coming decades.11
Key Uncertainties
- The extent to which Brockman's views on AI safety reflect deep technical conviction versus organizational positioning is difficult to assess from public statements alone.
- The outcome of the Musk v. OpenAI litigation, and what additional documents may reveal about the founding-era decision-making in which Brockman participated, remains uncertain pending the 2026 trial.
- OpenAI's $1.4 trillion infrastructure commitment represents a very large bet relative to the company's current revenue base of approximately $13 billion annually; whether this buildout will proceed as planned and what role Brockman will play in it remains to be seen.
- Brockman's dual role as OpenAI president and investor in more than 30 startups creates potential conflicts of interest that have not been extensively documented or adjudicated publicly.
Sources
Footnotes
1. Background on Brockman's early life, education, and high school achievements — research data compiled from multiple biographical sources including Society for Science records and profiles of Brockman's upbringing in North Dakota.
2. Career history at Stripe — research data from biographical profiles describing Brockman's tenure as Stripe's fourth employee and first CTO from 2013 to 2015.
3. OpenAI founding history — research data from accounts of the founding of OpenAI in December 2015, including the roles of Musk, Altman, Sutskever, and Brockman, and the initial $1 billion funding commitment.
4. Key OpenAI milestones including GPT-4 demo — research data from profiles of Brockman's work at OpenAI covering GPT-2 release staging, GPT-4 demo on March 14, 2023, and other projects.
5. Sabbatical and return — research data from coverage of Brockman's August 2024 sabbatical and November 2024 return to lead OpenAI's Scaling group.
6. Turing essay as inspiration — research data from multiple biographical profiles citing Brockman's gap year reading of Turing's "Computing Machinery and Intelligence" as a formative influence on his interest in AI.
7. Introduction to Altman via Collison — research data from accounts describing Patrick Collison introducing Brockman to Sam Altman after Brockman left Stripe.
8. Research publications — research data from DBLP and Semantic Scholar entries listing Brockman's co-authorship on OpenAI Gym (arXiv:1606.01540), Codex (arXiv:2107.03374), Whisper (arXiv:2212.04356, ICML 2023), OpenAI o1 System Card (arXiv:2412.16720), and a 2025 architecture paper (arXiv:2503.01868), as well as early mathematical publications in Fibonacci Quarterly and Electronic Journal of Combinatorics.
9. 2023 board events — research data from coverage of the November 2023 OpenAI board crisis, Brockman's removal as chairman, his resignation and brief Microsoft stint, and his later account of the period in a podcast.
10. Views on AGI and scaling laws — research data from interviews and profiles in which Brockman described his beliefs about AGI, scaling laws, infrastructure needs, and iterative deployment strategy.
11. AI safety engagement and 2024 superalignment controversy — research data from coverage of Brockman's 2022 safety panel participation, OpenAI's 2023 superintelligence governance statement, Jan Leike's 2024 departure, and related commentary from Geoffrey Hinton.
12. Political contributions and super-PAC — research data from coverage of Brockman's 2025 posts thanking the Trump administration and reports of his involvement in the "Leading the Future" super-PAC network.
13. $25 million donation — research data from a report citing an OpenAI executive referencing Brockman and his wife's donation to Trump-associated causes.
14. Bullying allegations and executive friction — research data from coverage of the 2023 board events including descriptions of allegations attributed to Ilya Sutskever and reports of employee concerns about Brockman's conduct.
15. Musk v. OpenAI lawsuit and unsealed documents — research data from reporting on unsealed discovery materials in Elon Musk's federal lawsuit, including Brockman diary entries from November 2017, with trial scheduled for April 2026 in Oakland.