'Will give credit': OpenAI's response to The New York Times on alleged replication of articles

Despite OpenAI's commitment to promptly address and rectify any identified issues, the Times failed to provide specific examples.

Edited By: Mayank Kasyap

New Delhi: OpenAI, the trailblazing company behind the innovative AI tool ChatGPT, has responded emphatically to allegations put forth by The New York Times concerning the AI model's replication of the newspaper's articles. In an official blog post, OpenAI counters these claims, viewing the dispute as an opportunity to provide transparency and elucidate its operations and intentions in crafting advanced technology.

Positive discussions turn unexpectedly sour

OpenAI reveals that, initially, discussions with The New York Times were perceived positively, centred on the prospect of collaborating. The envisaged partnership aimed to showcase real-time content from The New York Times through ChatGPT, with due credit accorded. This collaborative effort would serve to broaden the Times' reader base while providing OpenAI users access to the esteemed reporting of The New York Times. OpenAI emphasises that the Times' content made up a relatively insignificant share of the vast array of data used to train its AI.

Unforeseen legal action raises eyebrows

On December 27, OpenAI was blindsided to learn, through the Times' own reporting, that The New York Times had filed a lawsuit against it. OpenAI expresses surprise and disappointment at the unexpected legal action.

Commitment to address issues goes unacknowledged

During the discussions, The New York Times mentioned instances of ChatGPT reproducing its content. Despite OpenAI's commitment to promptly address and rectify any identified issues, the Times failed to provide specific examples. In July, upon discovering that ChatGPT unintentionally reproduced real-time content, OpenAI swiftly took down the feature to implement necessary fixes.

Allegations of content manipulation

OpenAI finds it intriguing that the duplicated content flagged by the Times appeared to originate from years-old articles already available on various other platforms. OpenAI suspects that the Times may have manipulated the prompts given to ChatGPT, potentially including substantial excerpts from its articles to induce content replication. However, OpenAI asserts that, even with such prompts, its AI typically does not exhibit the behaviour suggested by the Times. This raises questions about whether the Times presented selectively chosen examples.

AI's intended purpose and continuous improvement

OpenAI underlines that the kind of content replication alleged by the Times runs contrary to the intended purpose of its AI, emphasising that the technology does not aim to replace the valuable work of The New York Times. Nevertheless, OpenAI acknowledges the importance of addressing such issues and affirms its ongoing efforts to enhance its systems, citing substantial progress achieved in its recent models.