
Our Newsroom AI Policy

By Patinko




Earlier this year, we committed to publishing a reader-facing explanation of how Ars Technica uses, and doesn’t use, generative AI. Translating our internal policy into a reader-facing document that meets our standards for clarity and precision took longer than I’d have liked, but I wanted to get it right rather than get it out fast. That document is now live, and you can find it below (and also linked in the footer of most pages on the site).

Our approach comes from two convictions: that AI cannot replace human insight, creativity, and ingenuity, and that these tools, used well, can help professionals do better work. From those starting points, it was always clear what we wouldn’t allow. AI would not become the author, the illustrator, or the videographer. These tools are best used by professionals in the service of their profession, not as a clever end run around it, and certainly not as a path to eventually replacing it.

The short version: Ars Technica is written by humans. Our reporting, analysis, and commentary are human-authored. Where we use AI tools in our workflow, we use them with standards and oversight, and humans make every editorial decision. Our policy covers how we handle text, research, source attribution, images, audio, and video.

These standards aren’t new. They’ve governed our editorial work since AI tooling became available. What’s new is making them visible to you. You deserve to see the rules we hold ourselves to, not just trust that they exist.

The policy will be updated if our practices change in any meaningful way, and any changes will be noted there.


Ars Technica’s policy on generative AI

AI is reshaping how information is produced, and our readers deserve to know where we stand. This is our policy on the use of generative AI in Ars Technica’s editorial work. It applies to all editorial work produced by Ars Technica’s writers, editors, and contributors.

The short version: Ars Technica is written by humans. AI doesn’t write our stories, generate our images, or put words in anyone’s mouth. Where we do use AI tools in our workflow, we use them as we do any other tool: with standards, under supervision, and with humans making every editorial decision.

If there are any changes to our policy, they will be reflected here.

Our journalism is human-authored

Ars Technica’s editorial text is written by humans. We do not use AI to generate our reporting, analysis, or commentary.

When AI output is itself the subject of reporting (for example, examining what a model produces or analyzing a system’s behavior), we may reproduce that output for demonstration or analysis. In those cases, AI-generated material is presented as exemplar material and is set apart visually, with disclosure placed as close to the material as possible.

AI-powered tools may be used to assist with editing and workflow in ways that don’t displace human authorship, including grammar checks, style suggestions, and structural feedback. These tools can recommend changes; only humans can make them.

Research and source material

Reporters may use AI tools vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets. Even then, AI output is never treated as an authoritative source. Everything must be verified.

When we attribute a statement, a position, or a quote to a named source, that material comes from direct engagement with interviews, transcripts, published statements, or documents reviewed by the reporter. AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views.

We don’t publish claims based solely on AI-generated summaries, and reporters may not represent any material as “reviewed” unless they have examined it directly.

Every author who uses AI tools in the course of reporting a story must disclose that use to their editors, and authors remain fully responsible for their content.

Images, audio, and video

Our visual content, including listing images, illustrations, and video, is produced by our editorial and art teams or sourced from photography services and wire providers. Our creative team may use AI tools in the production of certain visual material, but the creative direction and editorial judgment are human-driven.

We do not publish AI-generated images, audio, or video as authentic documentation of real events. We do not alter documentary media in ways that change their meaning. Standard production work (color correction, cropping, and contrast adjustments) is fine.

When synthetic media is used in the context of reporting on AI, it will be clearly identified as AI-generated, with that disclosure placed as close to the material as possible.

Accountability is non-negotiable

Anyone who uses AI tools in our editorial workflow is responsible for the accuracy and integrity of the resulting work. This responsibility cannot be transferred to colleagues, editors, or the tools themselves. More broadly, maintaining the standards in this policy is a shared obligation across our editorial operation.

These standards have governed our editorial work since AI tooling became available. When violations occur, we take action. We’re publishing this reader-facing version because our readers deserve to see the rules we hold ourselves to, not just trust that they exist.

This policy was last updated April 22, 2026.

Ken Fisher, Editor in Chief

Ken is the founder & Editor-in-Chief of Ars Technica. A veteran of the IT industry and a scholar of antiquity, Ken studies the emergence of intellectual property regimes and their effects on culture and innovation.


Article source: Arstechnica.com