Medium doesn’t want the artificial intelligence models that power ChatGPT or Bard to be trained on the articles its users publish. For this reason, the platform has announced that it will block such technologies, a measure similar to the one already implemented by CNN, Reuters and The New York Times.
Tony Stubblebine, CEO of the blogging platform, argues that this responds to the need to address a crucial issue: setting limits on what counts as fair use of publicly available information on the web.
“Artificial intelligence companies are making money from your writing without asking for your consent, or offering you compensation or credit. Much more could be asked for, but these ‘3 C’s’ are the minimum,” the Medium chief explained.
The executive said Medium will do everything possible to block the use of articles published on the platform to train AI models, a position it will maintain until AI companies take steps to address the fair use of such materials. Stubblebine indicated that the ban applies site-wide, and acknowledged that while it is far from an ideal or foolproof solution, it is the best they can do for the moment.
“What we need for our writers is a detailed approach that works at the level of individual writers and stories. A more robust protocol would probably look like a search engine’s sitemap, allowing a site to explicitly say what is available for AI training and what is not. Medium would be happy to give writers tools to set these permissions. But first we need some kind of standard.”
Tony Stubblebine, CEO of Medium.
Medium stands up to OpenAI and other artificial intelligence companies
In his announcement, the Medium CEO acknowledges that, in principle, the new policy is hard to enforce. The company has updated its terms and conditions to prohibit the use of material published on the platform to train AI models without prior written consent. In addition, it will add specific blocks to the site’s robots.txt file.
Even so, Stubblebine notes that OpenAI is the only artificial intelligence company that today lets Medium prevent its web crawler (GPTBot, in this case) from scraping the service’s content to train tools such as ChatGPT. The other firms have so far turned a deaf ear to these requests. That does not make Sam Altman’s company the “good guys” of this story, however: OpenAI already faces multiple lawsuits and claims for using copyright-protected materials and personal data scraped from the web without permission.
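Medium’s announcement doesn’t quote its actual robots.txt rules, but a site-wide block of GPTBot (the crawler token OpenAI documents) would be a minimal sketch like the one below, checked here with Python’s standard `urllib.robotparser`; the example domain and the second user-agent name are illustrative assumptions.

```python
from urllib.robotparser import RobotFileParser

# A robots.txt rule of the kind Medium describes: deny OpenAI's
# GPTBot crawler access to the entire site, leave others unaffected.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is denied everywhere; an unrelated crawler is still allowed.
print(parser.can_fetch("GPTBot", "https://example.com/some-article"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/some-article"))  # True
```

As Stubblebine points out, this only works for crawlers that voluntarily honor robots.txt, which is why he calls it far from foolproof.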
Medium also says it cannot rely on copyright law to prevent AI models from being trained on articles from the site, because copyright does not cover that use case. Furthermore, since the writings belong to the users and not to the platform, Medium cannot take on the legal commitment of battling these corporations on their behalf.
In search of a coalition
The blogging site’s intention is to form a coalition with other companies in a similar situation. However, Medium states that not all of them are prepared to publicly take on giants like OpenAI, Google or Meta. In fact, Stubblebine believes that if organizations like Wikipedia or Creative Commons got involved, positive results would surely come faster.
For now, the platform maintains that it does not intend to hinder the development of artificial intelligence. But it also does not want the value of what its users write to be diluted just so the technology can be used to generate spam.
What Medium proposes is interesting. It is the first time someone has openly called for a consent standard governing the fair use of web material to train AI models. It will be worth watching whether the idea gains traction in the near future.