Last month, we asked the Medium community what you think about AI-generated writing, and many of you responded. We received hundreds of comments and emails expressing a wide range of perspectives, from excitement over the potential uses of these new tools to deep concern about the impact they’ll have on a writing platform like Medium, which prizes human knowledge and experience.
First, thank you for that thoughtful feedback. It’s clear this is an issue on a lot of people’s minds, and it’s an important one. This is a moment of huge transformation in the digital world, and the potential implications are both wide-reaching and still not well-defined. But they’re also not abstract: AI-generated content is here now, and it’s important to start wrestling with that impact now too, even as the landscape is still taking shape.
There were many responses that referenced the need for transparency and disclosure. As Amanda Laughtland wrote in response to our post, “If people are going to use AI-generated content, I hope it will be identified as such.”
In addition to taking in your feedback, we’ve reached out to other platforms and some of the companies behind recent AI advances to better understand where things are headed and how others in our industry are starting to respond. All of this has helped us think through our own approach as a company that values both technological advancement and human knowledge, and decide what action to take.
The clear first step for us around AI-generated content is related to transparency and disclosure, and so we’ve updated our distribution standards to include an AI-specific guideline:
We welcome the responsible use of AI-assistive technology on Medium. To promote transparency, and help set reader expectations, we require that any story created with AI assistance be clearly labeled as such.
There are a few reasons we settled on this initial approach — and I do want to underscore that this is just our initial approach; as this technology and its use continue to evolve, our policies may, too. We believe that creating a culture of disclosure, where the shared expectation of good citizenship is that AI-generated content is disclosed, empowers readers. It allows them to choose their own reaction to, and engagement with, this kind of work, and clearly understand whether a story is machine- or human-written.
Additionally, we recognize that there are new horizons and possibilities with this technology — some assistive, thoughtful, and genuinely creative. We’re staying open to its possible uses.
So for now, when we encounter content that we believe is AI-generated but not disclosed, we won’t distribute it across Medium’s network. We may revisit this decision down the road, but for now, asking for disclosure feels like the right first step.
There is room for multiple approaches on Medium, however. While we’re thinking about this from the platform’s perspective, publication editors have been developing guidelines that best suit their own needs and readership. These are in many cases more detailed and specific, and many of them prohibit AI writing entirely. Some examples:
Towards Data Science, “A Note about AI-Generated Text”
We’re committed to publishing work by human authors only; we don’t — and won’t — accept posts written in whole or in part by AI tools.