There’s an Unreleased OpenAI Tool to Catch Students Cheating With ChatGPT. Can It Be Bypassed?

AI-generated image created with ChatGPT

OpenAI, a leading AI research lab, has developed a tool capable of detecting AI-generated text with 99.9% accuracy. Yet despite growing concerns about academic cheating using ChatGPT, the tool remains unreleased. Internal debates at OpenAI over the past two years highlight the complexities and ethical considerations surrounding the deployment of such technology.

The Development and Delay of OpenAI’s Detection Tool

OpenAI has developed a method to reliably identify text written by AI, such as essays and research papers created using ChatGPT. This technology has been debated internally for two years and has been ready for release for about a year. However, the tool remains on hold due to internal disagreements about transparency and user retention. Some employees worry that the tool could disproportionately impact non-native English speakers and could drive away users.

Transparency vs. User Retention

The internal conflict at OpenAI centers on balancing the company’s commitment to transparency with the desire to attract and retain users. A survey conducted by OpenAI found that nearly a third of loyal ChatGPT users would be turned off by anti-cheating technology. This potential backlash has made the company cautious about releasing the tool.

Concerns and Promises

OpenAI’s spokeswoman stated that the text watermarking method is technically promising but carries significant risks. The company is researching alternatives and taking a deliberate approach to address the complexities and potential impacts on the broader ecosystem. Supporters within the company argue that the benefits of such technology, particularly in preventing academic dishonesty, far outweigh the risks.

The Growing Problem of AI-Generated Cheating

Generative AI tools like ChatGPT can produce complete essays and research papers in seconds. This capability has led to increasing misuse among students. Teachers and professors are desperate for solutions to curb this trend. Alexa Gutterman, a high school English and journalism teacher in New York City, notes that the issue is widely discussed among educators.

A survey by the Center for Democracy & Technology found that 59% of middle and high school teachers believe some students have used AI for schoolwork, up 17 points from the previous year. This growing concern highlights the urgent need for effective detection tools.

The Technical Details and Challenges

ChatGPT’s AI system predicts the next word or word fragment (token) in a sentence. The anti-cheating tool would alter token selection, leaving a watermark pattern detectable by OpenAI’s technology. This watermark is 99.9% effective when enough new text is created by ChatGPT. However, concerns remain about the watermark’s potential to be circumvented through techniques like translation or adding and deleting emojis.
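OpenAI has not disclosed how its watermark actually works, but the idea of biasing token selection so a statistical pattern can be detected later has been described in published research (the “green list” scheme). The sketch below is a toy illustration of that general idea, not OpenAI’s method: a hash of the previous token splits a toy vocabulary into a “green” and “red” half, generation preferentially picks green tokens, and detection computes a z-score on how many tokens fall in their predecessor’s green list.

```python
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy stand-in vocabulary
GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" per step


def green_list(prev_token: str) -> set:
    # Seed a PRNG with a hash of the previous token, so the same
    # green/red split can be reproduced at detection time.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))


def generate_watermarked(n_tokens: int, seed: int = 0) -> list:
    # Stand-in for a language model: sample tokens uniformly, but
    # strongly prefer tokens from the previous token's green list.
    # That preference is the watermark.
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(n_tokens):
        pool = list(green_list(out[-1])) if rng.random() < 0.9 else VOCAB
        out.append(rng.choice(pool))
    return out[1:]


def detect(tokens: list) -> float:
    # Count tokens that land in their predecessor's green list and
    # return a z-score against the no-watermark expectation.
    prev, hits = "<s>", 0
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    n = len(tokens)
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std
```

This also shows why the article’s caveat about needing “enough new text” matters: the z-score only separates watermarked from ordinary text once enough tokens have accumulated, and paraphrasing or translating the text re-rolls the token sequence and washes the signal out.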

Distribution and Access Challenges

There is broad agreement within OpenAI that determining who can use the detection tool is challenging. Limited access could render the tool ineffective, while widespread access might lead to bad actors deciphering the watermarking technique. Discussions include providing the detector to educators or third-party companies that assist schools in identifying AI-generated and plagiarized work.

Google’s Approach and OpenAI’s Priorities

Google has developed a similar watermarking tool called SynthID, currently in beta testing. OpenAI has prioritized audio and visual watermarking over text due to the more significant potential harms, especially during a busy election year in the U.S. In January 2023, OpenAI released an algorithm to detect AI-written text, which was only 26% effective and was pulled after seven months.

The Role of Educators

Many educators have turned to outside detection tools, which can sometimes fail to identify text written by advanced AI models and may produce false positives. Mike Kentz, an AI consultant for educators, notes that students have become aware of the limitations of these tools. Some teachers encourage students to use AI for research or feedback but draw the line at outsourcing entire assignments to AI.

Innovative Approaches to Combat Cheating

Josh McCrain, a political science professor at the University of Utah, has taken creative steps to catch AI-generated work. By embedding hidden instructions in assignments, he can identify students who rely too heavily on AI. McCrain is now focusing on assignments involving current events, which AI may be less familiar with, to discourage cheating.

Internal Debates and Future Plans

Discussions about the watermarking tool began before the launch of ChatGPT in November 2022. Scott Aaronson, a computer science professor on leave from the University of Texas, developed the tool. In early 2023, OpenAI co-founder John Schulman outlined the tool’s pros and cons, leading executives to seek broader input.

Over the past year and a half, OpenAI executives have repeatedly discussed the technology, seeking fresh data to guide their decision. A survey in April 2023 showed broad public support for an AI detection tool, but concerns about false accusations and user retention remain.

Conclusion

OpenAI’s internal debates highlight the complexities of deploying an effective AI detection tool. As the company navigates these challenges, educators and students await solutions to address the growing problem of AI-generated cheating. The outcome of these discussions could significantly impact the future of AI in education and beyond.
