One prominent nonprofit is reportedly funding much of the anti-AI coverage published by top news outlets.
OpenAI is publicly pushing back against critics and news organizations by accusing anti-AI nonprofit groups of indirectly paying for coverage that portrays the company poorly. The dispute came into focus after an NBC News story about OpenAI’s alleged use of legal tactics against some of the company’s detractors. According to a new report, the story’s author was funded through a fellowship sponsored by the Tarbell Center for AI Journalism, a nonprofit backed by groups dedicated to raising awareness about AI risks. OpenAI privately complained to the newsroom that the arrangement created a potential ideological bias in coverage.
According to the report, OpenAI criticized a group of nonprofits that fund journalism projects explicitly focused on scrutinizing artificial intelligence. Those groups, including the Tarbell Center, provide fellowships and grants to reporters working at mainstream outlets such as NBC News, The New York Times, and The Washington Post. OpenAI has argued that while the funding is disclosed, it creates a structure that favors “alarmist” coverage. Specifically, Tarbell is heavily funded by the Future of Life Institute, an organization that takes an almost exclusively critical stance toward artificial intelligence.
Defenders of the nonprofit-backed coverage dispute OpenAI’s characterization, calling the alleged panic over artificial intelligence “manufactured” by the industry itself to deflect scrutiny. Media and journalism groups argue that fellowships and grants are a longstanding feature of investigative reporting and do not dictate editorial outcomes. They say heightened concern over AI’s risks reflects genuine public interest, and that OpenAI’s complaints amount to an effort to discredit unfavorable reporting. Others, however, push back on what they see as the anti-AI movement’s “morally superior” posture and defend OpenAI and other industry giants against some of the criticism.
Supporters of AI regulation say the dispute reflects a deeper concern: that technology companies fail to accurately assess and disclose the long-term risks of artificial intelligence investment and growth. Critics point to OpenAI’s recent moves into adult content as an example of the company failing to grapple with those risks. Others argue that dismissing such scrutiny risks sidelining important questions at a time when AI systems are being integrated ever more deeply into daily life. Members of OpenAI’s own board of directors once attempted, unsuccessfully, to remove Sam Altman, the company’s co-founder and current CEO.
But news organizations facing questions about funding arrangements have defended their reporting standards, saying editorial decisions remain independent regardless of grant support. At the same time, AI companies warn that what they see as one-sided coverage could shape public perception and policymaking in ways that slow innovation. President Trump, meanwhile, continues to push pro-AI policies in an effort to speed up innovation and entrepreneurship. Over the past year, the Trump administration announced multiple new investments in AI data centers, partnerships with Nvidia and other chip makers, and support for using OpenAI’s ChatGPT in medical settings. The president also took action to limit states’ ability to independently regulate AI, pushing instead for broader, federally backed policies.