Policy & Standards

Our Commitment to Responsible AI in Local Journalism

How Mueller Today uses artificial intelligence to strengthen — not replace — human reporting, and what that means for the stories you read.

Last updated March 26, 2026 · Version 1.0

Why This Policy Exists

Mueller Today uses AI tools to help cover the Mueller neighborhood and greater Austin area. AI lets a small local team publish more frequently, research more deeply, and serve readers in ways that would otherwise require a much larger staff.

But AI introduces real risks — to accuracy, to trust, and to the community relationships that make local journalism matter. This policy exists because our readers deserve to know exactly how we use these tools, where humans remain in control, and what we will never automate away.

We believe AI should make local journalism more thorough and more accessible — never less honest. Every policy decision below is guided by that standard.

Five Founding Principles

These principles govern every decision we make about AI at Mueller Today. When we evaluate new tools, expand into new formats, or face an edge case this document doesn't cover, we return to these five commitments.

1. Transparency First

Readers will always know when AI contributed to a story. We disclose what AI did, what humans did, and why. No story is published without a visible transparency record.

2. Human Authority

A named human editor reviews and approves every piece of content before publication. AI proposes; humans decide. Editorial judgment is never delegated to a machine.

3. Accuracy Over Speed

We treat all AI output as unvetted source material. Every factual claim is independently verified by a human before publication — the same standard we hold for any source.

4. Community Accountability

We serve Mueller and Austin. If our AI-assisted reporting causes harm, we correct it publicly, learn from it openly, and adapt this policy accordingly. Readers can contact us directly with concerns.

5. Fairness and Inclusion

AI systems carry the biases of their training data. We actively review AI output for cultural sensitivity, representational fairness, and potential harm — particularly in coverage of our diverse local community. When AI-generated imagery depicts people or places, a human reviewer evaluates it for accuracy and cultural appropriateness before publication.


How AI Fits Into Our Reporting

Our editorial process is designed so that AI handles mechanical work — scanning sources, structuring data, generating drafts — while humans handle everything that requires judgment, verification, and voice. Here is the pipeline every AI-assisted story follows:

Source Discovery
Automated monitoring identifies potential stories from public sources (government filings, event listings, local outlet reporting).
AI · Automated
Source Approval
A human editor reviews the flagged source, confirms it is credible and newsworthy, and greenlights it for development.
Human · Editorial Gate
Research & Preparation
AI extracts key facts, gathers supplementary context from public records and prior coverage, and organizes everything into a structured briefing.
AI · Automated
Fact Verification
Every factual claim, quote, and data point is independently verified by a named human fact-checker against original sources. Each check is logged with the reviewer's initials and timestamp.
Human · Required
Draft Generation
AI produces an initial draft from verified material using an editorially selected writing style appropriate to the story type.
AI · Automated
Editorial Review & Rewrite
A human editor rewrites for clarity, tone, and AP style. The editor checks for accuracy, fairness, balance, and adherence to our editorial standards. The editor's name is recorded.
Human · Editorial Gate
Image Review
If AI-generated illustrations are used, a human reviewer evaluates them for accuracy, cultural sensitivity, and appropriateness. All AI-generated visuals are labeled as illustrations, never presented as photographs.
Human · Required
Publication
The final article is published with a complete transparency record, named byline, and the "AI-Assisted Reporting" label visible to readers.
Human · Final Approval

Every article's complete editorial trail — including timestamps, reviewer names, and the specific AI and human steps taken — is available to readers through the "See how this story was built" panel on each story.


What AI May and May Not Do

Approved

Source monitoring & discovery
Flagged sources must be reviewed and approved by a human editor before development begins.

Transcription
AI transcriptions must be checked against the original audio or video by a human before any quotes are used.

Research & background gathering
Supplementary sources must be cited. AI-gathered facts require the same verification as any other source.

Data parsing & structuring
Public records, event listings, meeting agendas. Output is reviewed by a human before use.

Draft generation
Only from verified material. Drafts must be rewritten and approved by a named human editor.

Copyediting & style checks
Suggestions only; a human editor makes all final decisions.

Translation
AI translations must be reviewed by a fluent human speaker before publication.

Social media summaries
Generated from published articles only, and reviewed before posting.

SEO optimization
Headline and metadata suggestions only. A human editor approves the final text.

AI-generated imagery
AI-generated visuals are clearly labeled as illustrations in their captions (e.g., "Illustration generated with AI"). Human review for accuracy, cultural sensitivity, and appropriateness is required before publication.

Restricted

Content personalization
If implemented, personalization must expose readers to a broad range of stories, must not create filter bubbles or suppress viewpoints, and requires regular bias audits.

Prohibited

Publishing without human review
No AI-generated or AI-assisted content may be published without review and approval by a named human editor.

Fabricated bylines or sources
We will never attribute AI-generated content to a fictitious person. Every byline represents a real human who stands behind the work.

Undisclosed alteration of photographs, video, or audio
AI may not be used to alter photographs, video, or audio without clearly disclosing the modification to readers. Unmodified photographs are never labeled as AI-generated, and AI-altered media is never presented as original.

Deepfakes or synthetic media of real people
No synthetic imagery, audio, or video that depicts identifiable real individuals.

Replacing staff with AI
AI augments our team's capabilities. It is not used to eliminate editorial positions or reduce human oversight.

Visual Journalism Standards

Visual content carries the highest risk to audience trust. Readers need to know whether what they're seeing is real or generated, and we treat that distinction as non-negotiable.


Our Transparency Record

Every AI-assisted article on Mueller Today includes the visible "AI-Assisted Reporting" label, a named human byline, and the "See how this story was built" panel containing the complete editorial trail: each AI step, each human step, fact-check logs, reviewer names, and timestamps.

We don't just tell readers that AI was used. We show them exactly what it did, step by step, with receipts. We believe this level of transparency sets the standard for AI-assisted local journalism.

Data Privacy & Source Protection

We never enter confidential sources or unpublished sensitive information into AI tools. The protections we owe our sources do not change because a machine is involved in our workflow.

Guarding Against Bias

AI systems reflect the biases in their training data. In local journalism, this can manifest as underrepresentation of communities, culturally insensitive language, or skewed framing. We address this through human review of AI output for cultural sensitivity and representational fairness, regular bias audits of the tools we use, and direct channels for reader feedback on our coverage.


Accountability & Governance

Corrections

If an error appears in an AI-assisted article, we follow the same corrections policy as any other story: we correct it promptly, note the correction visibly, and update the transparency record to reflect what happened.

Oversight

An internal editorial committee oversees our use of AI. It evaluates new tools against our five founding principles, resolves edge cases this policy does not cover, reviews corrections involving AI-assisted work, and keeps this document current as tools and standards evolve.

Staff Training

All editorial staff receive training on our AI tools, this policy, and the ethical considerations specific to AI-assisted journalism. Training is updated as tools and standards evolve.


Commitments We Make to Readers

We will always:
  • Tell you when AI was involved in creating a story
  • Show you the full editorial trail — AI steps, human steps, fact-checks, timestamps
  • Have a named human editor approve every story before publication
  • Verify every factual claim independently, regardless of its source
  • Label AI-generated images as illustrations, never as photographs
  • Credit the original reporting that informed our coverage
  • Correct mistakes publicly and promptly
  • Update this policy as the technology and our understanding evolve
We will never:
  • Publish AI-generated content without human editorial review
  • Use fake bylines or attribute AI work to fictitious people
  • Alter photographs, video, or audio with AI without clearly disclosing it
  • Create synthetic depictions of real individuals
  • Feed confidential sources or unpublished sensitive information into AI tools
  • Use AI to replace editorial staff
  • Hide the role AI played in any piece of content

Questions, Concerns, or Feedback

Every article includes a feedback button so you can ask questions or share concerns about how AI was used in that specific story — right where you're reading it.

For broader questions about this policy or our AI practices in general, you can contact us directly.

This policy is a living document. As AI tools evolve and as we learn from our readers, we will update it. We welcome your input.


Standards We Follow

Our policy draws on guidance from leading journalism ethics organizations.