
Internal tensions are brewing at OpenAI over the company’s approach to publishing economic research about artificial intelligence, according to multiple sources with knowledge of the situation. The AI giant appears to be increasingly selective about releasing studies that highlight potential negative economic impacts of its technology, leading to staff departures and raising questions about research independence.
Research Team Departures Signal Deeper Issues
At least two members of OpenAI’s economic research team have left the company in recent months amid concerns about research constraints. Most notably, Tom Cunningham exited in September 2025 after concluding that publishing rigorous, objective analysis had become increasingly difficult. In his internal farewell message, Cunningham reportedly expressed frustration over growing pressure for the team to function as an advocacy arm for the company rather than as an independent economic research group.
This follows a pattern of similar concerns. Miles Brundage, OpenAI’s former head of policy research, departed in October 2024, stating that the company’s high profile made it difficult to publish on important topics. He noted that while some constraints were to be expected, in his view OpenAI had become too restrictive.
Management Response and Organizational Structure
Following Cunningham’s departure, OpenAI’s chief strategy officer Jason Kwon addressed these concerns in an internal memo. According to documents obtained from sources, Kwon emphasized that as the leading AI developer, OpenAI must balance identifying problems with building solutions. He wrote: “My POV on hard subjects is not that we shouldn’t talk about them. Rather, because we are not just a research institution, but also an actor in the world that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes.”
The economic research team is currently led by Aaron Chatterji, who was hired as OpenAI’s first chief economist in late 2024. Sources indicate Chatterji reports to Chris Lehane, OpenAI’s chief global affairs officer – a reporting structure that reflects how closely the research function is integrated with the company’s political and policy strategy. Lehane’s background is in political crisis management and corporate advocacy; he previously worked for Airbnb and in the Clinton administration.
Shifting Research Priorities
While OpenAI has historically published significant research on AI’s potential economic impacts – including the widely cited 2023 paper “GPTs Are GPTs,” which examined which occupations are most exposed to automation by large language models – sources claim the company now appears more selective. Two individuals familiar with the matter allege that OpenAI has become increasingly hesitant to release work highlighting economic downsides such as job displacement, instead favoring research that presents positive findings.
This shift coincides with OpenAI’s expanding commercial partnerships, worth billions of dollars, with major corporations and governments. A recent example of the company’s current research direction is a report surveying enterprise users, who said OpenAI’s products saved them 40 to 60 minutes a day; the report emphasized that companies have “significant headroom” to increase AI adoption.
Industry Context and Political Considerations
OpenAI’s apparent caution comes amid complex political dynamics surrounding AI. While the Trump administration has generally championed AI development, public concern about job displacement remains significant. A November survey from Harvard Kennedy School’s Institute of Politics found approximately 44 percent of young Americans fear AI will reduce job opportunities.
OpenAI’s approach contrasts with that of competitor Anthropic, whose CEO Dario Amodei has openly warned that AI could automate up to half of entry-level white-collar jobs by 2030. Such warnings have drawn criticism from White House special adviser for AI David Sacks, who characterized Anthropic’s statements as a “sophisticated regulatory capture strategy based on fear-mongering.”
Self-Regulation in an Emerging Industry
The situation highlights a broader industry dynamic in which leading AI labs are largely left to self-report the risks and capabilities of their own technology. Silicon Valley companies have also invested heavily in lobbying against proposed state-level AI regulations, with spending on such campaigns reportedly reaching $100 million.
OpenAI spokesperson Rob Friedlander defended the company’s research approach in a statement, saying: “The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves.”
As AI continues to advance rapidly, the tension between corporate interests and research transparency appears likely to remain a critical issue not just for OpenAI, but for the entire industry as it navigates increasing public and regulatory scrutiny.
