How Should Newsrooms Handle AI? Here’s a Guideline for Making Guidelines

With the rapid rise of generative AI, newsrooms everywhere are facing a new question: how should these technologies be used — and where should we draw the line? To help navigate this space, several media organizations have already published their own sets of guidelines. In this post, we’ll explore the common themes across these policies and offer tips for newsrooms developing their own.

Our analysis draws on 21 published AI guidelines, mostly from news organizations in Europe and the U.S., with a handful from other regions. If your newsroom has its own policy, we’d love to hear from you. This list (and our analysis) will continue to evolve as more organizations weigh in.

The guidelines vary in tone and format — some are titled as protocols, others as principles or charters. Some take a more cautious, even restrictive approach, explicitly banning certain uses of AI. Others lean toward responsible governance, emphasizing transparency, accountability, and efforts to minimize harm.

In the first section of our analysis, we break down the main trends and outliers in existing policies. In the second, we offer a “how-to” for creating your own AI guidelines — a guide to the guardrails, if you will. Whether you’re a journalist looking to understand the landscape or a newsroom leader drafting your policy, this meta-guide aims to give you a helpful starting point.

1. Human Oversight Is Non-Negotiable

AI outputs must always be reviewed and approved by humans before publication. Newsrooms emphasize that decision-making should remain with people, not machines (e.g., CBC, Reuters, The Guardian, VG).


2. Transparency Through Clear Labeling

When AI is used to generate content, it must be clearly labeled for audiences. Transparency requirements are looser, however, when AI serves only as a supporting tool (e.g., Aftonbladet, VG, Reuters, CBC).


3. Clear Boundaries: Banned vs. Allowed Uses

Most outlets ban AI from writing full stories, generating photorealistic images, or performing fact-checking, but allow uses such as summarization, idea generation, and help with social media posts or graphics (e.g., Wired, Insider, Nucleo).


4. Strong Accountability Measures

Organizations remain fully responsible for any AI-assisted content they publish and emphasize the use of secure, technically sound AI systems (e.g., Aftonbladet, Reuters, Financial Times, DPA).


5. Protecting Privacy and Confidentiality

Sensitive data, unpublished content, and source information must never be entered into external AI platforms (e.g., VG, Aftonbladet, CBC, Ringier).


6. Encourage Cautious Experimentation

Newsrooms encourage exploring AI tools with curiosity and caution, while remaining vigilant about misinformation risks and rigorous about source verification (e.g., ANP, STT, Financial Times).


7. AI as a Strategic Tool, Not a Replacement

AI is seen as a way to enhance speed, originality, and quality, not as a substitute for human journalism (e.g., The Guardian, Insider, Le Parisien, Heidi.News).


8. Training Is Emerging but Limited

A few organizations call for AI literacy training to ensure responsible use, especially to mitigate risks like misinformation or bias (e.g., Financial Times, DJV, Mediahuis).


9. Recognizing and Guarding Against Bias

Some organizations explicitly commit to identifying and avoiding biases embedded in AI models and outputs (e.g., The Guardian, Mediahuis, Ringier).


10. Guidelines Are Evolving

Most media outlets acknowledge these are early-stage policies that will be revised as AI technologies and industry practices mature (e.g., Wired, CBC, De Volkskrant, Ringier).