OpenAI Fails to Deliver Promised Opt-Out Tool for Creators


Key Takeaways:

  • Media Manager Delay: OpenAI’s tool for controlling AI training data has been delayed for months, frustrating creators.
  • IP Lawsuits: Creators and media outlets have sued OpenAI over unauthorized use of copyrighted content.
  • Limited Opt-Out: Current opt-out options are inadequate and place the burden on creators.
  • Uncertain Effectiveness: Experts doubt Media Manager will address IP concerns, and legal outcomes remain uncertain.

OpenAI Misses Deadline for Content Control Tool

In May, OpenAI pledged to create a tool allowing creators to control how their works are included in its AI training datasets. Seven months later, the promised tool, called Media Manager, remains unreleased, leaving many creators frustrated and sparking criticism of OpenAI’s approach to intellectual property (IP) issues.

The Media Manager tool was meant to identify copyrighted materials, including text, images, audio, and video, based on creators’ preferences. OpenAI described it as a significant step to address legal challenges and appease creators. However, insiders suggest it was never treated as a priority within the company.

A former employee said, “I don’t think it was a priority. To be honest, I don’t remember anyone working on it.” Meanwhile, Fred von Lohmann, a key legal team member working on the tool, shifted to a part-time consultancy role in October, further delaying progress.


IP Challenges and Backlash

AI models, such as OpenAI’s ChatGPT and Sora, are trained on vast amounts of data, including web pages, videos, and images. While this enables the models to generate highly realistic outputs, it also raises questions about their use of copyrighted content. For example, ChatGPT has quoted New York Times articles verbatim, and Sora has created clips featuring copyrighted logos and characters.

These practices have upset creators, many of whom claim their works were used without permission. Lawsuits have been filed against OpenAI by authors, artists, and media organizations, accusing the company of illegal training practices. Notable plaintiffs include authors Sarah Silverman and Ta-Nehisi Coates, as well as major media outlets like The New York Times.

OpenAI has pursued licensing deals with select partners, but critics argue the terms are not attractive to most creators.

Lack of Adequate Opt-Out Options

Currently, OpenAI offers limited ways for creators to “opt out” of its AI training. In September, the company introduced a submission form for creators to request removal of their works from future training datasets. Additionally, webmasters can block OpenAI’s web-crawling bots. However, these methods are seen as cumbersome and incomplete.
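As an illustration of the crawler-blocking option, OpenAI publicly documents GPTBot as the user-agent string for its training crawler. A webmaster who wants to keep a site out of future training crawls can add a directive like the following to the site's robots.txt file (this is a minimal sketch; it only affects future crawling and does not remove content already collected):

```
# Block OpenAI's training crawler from the entire site
User-agent: GPTBot
Disallow: /
```

Because robots.txt is advisory and per-site, this approach helps site owners but does nothing for creators whose works appear on platforms they do not control.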

For example, opting out of image datasets requires creators to submit copies of each image along with detailed descriptions. Written works, videos, and audio recordings lack any specific opt-out mechanisms. Critics argue this approach shifts the burden onto creators rather than offering a comprehensive solution.

The Media Manager tool was announced as a transformative update to OpenAI’s opt-out systems. It was intended to simplify the process and set a new industry standard. Despite these promises, OpenAI has not provided any updates on its development or an expected launch date.

Skepticism Over Effectiveness

Experts doubt whether Media Manager, even if implemented, would address creators’ concerns or resolve the legal questions surrounding AI and IP usage. Adrian Cyhan, an IP attorney, pointed out that even platforms like YouTube and TikTok struggle with content identification at scale. OpenAI’s ability to perform better remains uncertain.

Ed Newton-Rex, founder of Fairly Trained, criticized the tool’s premise, arguing that it unfairly places the responsibility of content protection on creators. “Most creators will never even hear about it, let alone use it,” he said.

Additionally, opt-out systems may fail to account for modifications to works, such as resized or altered images. Joshua Weigensberg, an IP lawyer, noted that creators often do not know where their works appear online, making complete opt-out impossible.

What’s Next?

In the absence of Media Manager, OpenAI has implemented filters to prevent its models from reproducing specific training examples. The company continues to claim fair use protections, arguing that its AI creates transformative outputs rather than direct copies.

Whether courts will side with OpenAI remains to be seen. A legal victory on fair use could make Media Manager largely moot. Even so, the delayed rollout and lack of transparency around the tool have damaged OpenAI’s standing with creators and raised doubts about its commitment to ethical AI practices.
