
Resist Hate Guidance and Policies on Using AI in Our Work

Last updated: December 26, 2025

Transparency (Internal and External)

Externally, when we use AI in our journalism, it is typically to create drafts or outlines that our writers then revise and edit into the final article. When an AI tool is used to fully generate audience-facing content on its own, we will tell the audience in ways that both disclose the tool's use and educate news consumers.

We will work with editors and designers to create disclosures that are precise in language without being onerous to our audience. This may be a short tagline, a caption or credit, or for something more substantial, an editor’s note. When appropriate, we will include the prompts that were fed into the model to generate the material.

Our transparency works on multiple levels. Internally, it supports conversation and creativity. It will be clear to our peers and our managers whenever we are using generative AI. This will facilitate collective learning and help us create practical, adaptable policies as the technologies evolve.

Externally, communication and disclosure ideally create opportunities to get feedback from the audience, as well as educate consumers.

As journalists, part of our job is to empower the audience with news literacy skills. AI literacy – understanding how generative AI works, what benefits it brings to the information ecosystem and how to avoid AI-generated misinformation – is a subset of news literacy.

Accuracy and human verification

All information generated by AI requires human verification. Everything we publish will live up to our standards of verification. Across all our work, it is increasingly important to be explicit about how we know facts are facts, and this will be particularly important when using AI. For example, an editor should review the prompts and any other inputs used to generate a story or other material, and the results should be replicable.

Every fact stated in an article will include a link to the reputable, authoritative source from which the fact originated. Writers will always add source links where applicable, especially when quoting an individual or news organization.

Audience service

Our work in AI is guided by what will be useful to our audience as we serve them. We made a promise to our audience to provide them with information that reflects our goals, values, and the Resist Hate Mission Statement.

Privacy and security

Our relationship with our audience is rooted in trust and respect. To that end, as we use AI to customize content or develop products that alter the way content is delivered, we will protect our audience’s data following our website’s privacy policy. Our privacy policy forbids entering sensitive or identifying information about users, sources or even our own staff into any generative AI tools.

As technology advances and opportunities to customize content for users arise, we will be explicit about how your data is collected — following our organization’s privacy policy — and how it is used to personalize your experience.

We will also disclose when editorial content was created and distributed based on that personalization.

Exploration

With the four earlier principles as our foundation, we will embrace exploration and experimentation. We should strive to invest in newsroom training — internal or external — so every staff member is knowledgeable in generative AI tools.

Logistics

The point person on generative AI in our newsroom is Serena Zehlius, Editor; all use of AI should be coordinated through her. She will also be the source of frequent interim guidance distributed throughout our organization. In the future, as team members are added to the organization, there will be a full AI team.

This team will seek input from a variety of roles, particularly those who are directly reporting the news.

Writers should expect at least monthly communication from this team with updates on what we are doing and guidance on what activities are approved.

In addition, members of this team will:

●      Monitor content management systems, photo editing software and business software for updates that may include AI tools. Because software changes quickly and AI is being added to every technology product, it is important to designate team members to stay current on these updates.

●      Write clear guidance about how we will or will not use AI in content generation.

●      Edit and complete our AI policy and ensure that it is both internally available and, where appropriate, publicly available (with our other standards and ethics guidelines).

●      Seek input from our audience, through surveys, focus groups and other feedback mechanisms.

●      Manage all disclosures about partnerships, grant funding or licensing from AI companies.

●      Understand our privacy policies and explain how they apply to AI and other product development. This includes regularly consulting with editors, lawyers or other privacy experts that influence newsroom policies.

●      Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns.

●      Outline a clear process for how the policy will be updated, specifying details such as who sits on the committee and how many times it meets each month.

Current editorial use:

All uses of AI should start with journalism-centered intentions and be cleared by the appointed AI group. Human verification and supervision are key.

Here are the generative AI sources we currently use:

●      OpenAI/ChatGPT

●      Jasper.ai

●      Copy.ai

Editorial use

Generative AI is permitted for the following purposes:

Research – It is fine to ask a publicly available large language model to research a topic. However, you must independently verify every fact, so be wary. It is common for AI to “hallucinate” information, including facts, biographical details, and even citations to news articles that do not exist.

Headline experimentation – Asking AI to generate headlines is a form of research. The same caveats apply. Also, be sure to put enough facts into the prompt that the headline is based on our journalism and not other reporting.

Summary paragraphs – It is okay to use AI to generate article summaries that appear at the top of our work. Our policy (below) is that we do not enter our content into any large language models.

Searching and assembling data – You are permitted to use AI to search for information, mine public databases, or assemble and calculate statistics that would be useful to our audience.

Visuals – ChatGPT may be used to create illustrations for publication. All illustrations must carry the following credit: Resist Hate (AI-generated).

Do not use AI to manipulate photos unless they are for illustration purposes and clearly labeled as such. Visual journalists need to be aware of software updates to photo processing tools to ensure AI enhancement is used according to our policies. Do not publish any member-submitted content without first verifying its authenticity.

Fact-checking

●      Use of AI alone is not sufficient for independent fact-checking. Facts should be checked against multiple authoritative sources that have been created, edited, or curated by human beings; a single source is never sufficient.

Social media use

Use of ChatGPT-generated content is permitted on our social channels.

●      Before posting, ChatGPT-generated content should be edited for accuracy and voice by a human.

●      Audience teams should conduct regular content audits to ensure social copy and posts meet our ethical guidelines.

Privacy and security

●      No personal information from staff or readers should be entered into generative AI programs.

●      None of our intellectual property should be entered into a program, outside of what is outlined above.

●      Staff working with AI tools should have a clear understanding of our organization’s privacy policy.

●      When using generative AI to customize content for audience subsets, AI disclosure should include how user data was used to do so.

Ongoing training

Regular training on AI tools and experiments will be available and at times even mandatory. This training will be delivered by a combination of members from the internal committee and outside experts.

Creating custom GPTs

All custom GPTs must be approved by the AI oversight group. Know that the systems you develop with ChatGPT’s custom GPT program will not be private. Any custom GPT code should be publicly available.

Use reliable sources to train custom GPTs. One of the best ways to create solid and useful output is to limit and control the sources a custom GPT draws on to material we can vouch for. In many cases this will mean limiting our custom GPTs to our own material.

Commitment to audience AI literacy

Along with this AI policy, we have developed an AI literacy page to help our audience understand how and why we are using generative AI. This material will be updated regularly to reflect our most current experimentation. As our language evolves, we will be better able to describe specific AI applications and tools. On the page, we link to resources, articles and other materials that will:

●      Help our audience understand the basics of generative AI.

●      Explain why newsrooms use AI in their work.

●      Build a more robust vocabulary for describing AI.

●      Help our audience avoid AI-generated misinformation.

●      Show how to use chatbots responsibly to seek out factual information.

●      And demonstrate how to create responsibly with new generative AI tools.

Guidelines for product teams and technologists

Our product/technology team is committed to understanding and staying up to date with all tools, software or companies we use or partner with. We will:

●      Vet third-party vendors and their usage policies before testing any AI product.

●      Make sure any product we use adheres to our own data and privacy policies.

●      Perform comprehensive testing on all software and tools for reliability and accuracy before using them for any consumer-facing content.

●      Ensure all software settings are correct, and in accordance with our policies, before using any LLM.

●      Stay up to date on the latest software updates for products we use.

●      Provide best practices, documentation, or training for new tools to internal users.
