OpenAI has announced plans to introduce advertising in ChatGPT in the United States. Ads will appear on the free version and the low-cost Go tier, but not for Pro, Business, or Enterprise subscribers.

The company says it will clearly separate ads from chatbot responses and ensure they do not influence outputs. It has also pledged not to sell user conversations, to let users turn off personalized ads, and to avoid showing ads to users under 18 or around sensitive topics like health and politics.
Still, the move has raised concerns among some users. The key question is whether these voluntary safeguards will hold once advertising becomes central to OpenAI's business.
We’ve seen this before. Fifteen years ago, social media platforms struggled to turn huge audiences into profit.
The breakthrough came with targeted advertising: tailoring ads to what users search for, click on, and pay attention to. This model became the dominant revenue source for Google and Facebook, reshaping their services to maximize user engagement.
Large-scale AI is extremely expensive. Training and running advanced models requires huge data centres, specialized chips, and constant engineering. Many AI firms still operate at a loss despite rapid user growth. OpenAI alone expects to burn US$115 billion over the next five years.
Only a few companies can absorb these costs. A scalable revenue model is urgent, and for most AI providers targeted advertising is the obvious answer: it remains the most reliable way to profit from large audiences.
OpenAI says it will keep ads separate from answers and protect user privacy. These assurances may sound comforting, but for now they rest on vague and easily reinterpreted commitments.
The company promises not to show ads near sensitive or regulated topics such as health, mental health, or politics. However, it offers little clarity on what counts as sensitive, how broadly health will be defined, or who decides the boundaries.