October 3, 2023

Companies that developed software to detect whether an essay or other written assignment was produced by an artificial intelligence or a human have received a windfall from the massive success of ChatGPT.

Launched in November last year, ChatGPT quickly grew to 100 million monthly active users by January, the fastest-growing user base ever recorded. The platform is particularly popular with the younger generation, ranging from middle school students to university students.

Since the platform’s launch, about 30 percent of college students have said they used ChatGPT for homework, while half said using the system amounted to cheating, according to one survey.

AI-detection companies such as Winston AI and Turnitin say ChatGPT’s runaway success has also boosted their business, as teachers and employers look to weed out those who pass off computer-generated material as their own work.


Krakow, Poland, June 8, 2023: The OpenAI logo on a mobile phone screen displaying ChatGPT in the App Store. (Jakub Porzycki/NurPhoto via Getty Images)

“It all happened within a week or two. Suddenly, we couldn’t keep up with the demand,” Winston AI co-founder John Renaud told The Guardian.

The company touts Winston AI as the “most powerful AI content detection solution” on the market, claiming an accuracy rate of up to 99%. Users upload written content they want to verify, and within seconds the system reports whether the material was likely generated by a computer system such as ChatGPT or written by a human.


Winston AI gives users “a scale of 0-100,” representing the estimated percentage chance that the copy was generated by a human or by artificial intelligence, and also checks for potential plagiarism.

Renaud explained that AI-generated material has “tells” that expose it as computer-generated, including “perplexity” and “burstiness.” The company defines perplexity as a measure of the language patterns in a writing sample: whether the text follows the patterns an AI system was trained on, or whether it appears unique and written by a human.

Burstiness refers to “the occurrence of groups of words and phrases in a text that are repeated within a short span.”
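To illustrate the two signals described above, here is a toy sketch in Python. This is purely illustrative and is not Winston AI’s actual model: perplexity is approximated with an add-one-smoothed bigram language model (lower perplexity means more predictable, more “AI-like” text), and burstiness is approximated as the fraction of words that repeat within a short window.

```python
from collections import Counter
import math

def perplexity(text, bigram_counts, unigram_counts, vocab_size):
    """Perplexity under an add-one-smoothed bigram model.
    Lower perplexity = more predictable text (the 'AI-like' signal)."""
    words = text.lower().split()
    log_prob = 0.0
    for prev, cur in zip(words, words[1:]):
        log_prob += math.log((bigram_counts[(prev, cur)] + 1) /
                             (unigram_counts[prev] + vocab_size))
    return math.exp(-log_prob / max(len(words) - 1, 1))

def burstiness(text, window=10):
    """Fraction of words that also appeared within the previous `window` words."""
    words = text.lower().split()
    repeats = sum(1 for i, w in enumerate(words)
                  if w in words[max(0, i - window):i])
    return repeats / max(len(words), 1)

# "Train" the reference model on a toy corpus.
corpus = "the dog runs and the dog sleeps".split()
unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigram_counts)

print(perplexity("the dog runs", bigram_counts, unigram_counts, vocab_size))
print(burstiness("the cat sat on the mat and the cat slept"))
```

A real detector would train on vastly larger corpora and use a neural language model rather than bigram counts, but the intuition is the same: text the model finds easy to predict, or that repeats itself in short spans, raises the AI flag.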

Renaud told Fox News Digital that he believes “the main question and concern with AI detection is whether it will one day become undetectable.”

“The fundamentals of generative AI work on predictive data,” he explained. “All models, including ChatGPT, Bard, Claude and Stability Text, are trained on large datasets and will return outputs ‘predictable’ by well-built and well-trained AI detectors. I strongly believe this will be the case until there is real AGI (artificial general intelligence). But for now, that’s still science fiction.

“So, like generative AI trained on large datasets, we trained our detectors to recognize key patterns in ‘synthetic’ text through deep learning.”
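To make the 0-100 scale concrete, the two signals could in principle be combined into a single human-likelihood score with a logistic function. This is a hypothetical sketch, not Winston AI’s scoring method; the weights and midpoints (`pp_mid`, `burst_mid`) are invented for illustration.

```python
import math

def human_likelihood(perplexity, burstiness, pp_mid=10.0, burst_mid=0.2):
    """Toy 0-100 score: unpredictable (high-perplexity) text with little
    short-range repetition scores as more likely human-written.
    All weights and midpoints here are made up for illustration."""
    z = 0.5 * (perplexity - pp_mid) - 10.0 * (burstiness - burst_mid)
    return round(100 / (1 + math.exp(-z)))

print(human_likelihood(30.0, 0.05))  # varied, non-repetitive text
print(human_likelihood(2.0, 0.60))   # highly predictable, repetitive text
```

In a production detector the weighting would itself be learned from labeled human and synthetic text, which is what “trained our detectors … through deep learning” implies.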

Renaud said he was initially “very worried” about ChatGPT, but his concerns have since eased. AI-generated text will always have tells that detection platforms can pick up, he said.


“With predictive AI, we will always be able to build a model to predict it,” he told The Guardian.

Interior of an empty school classroom. (iStock)

The Winston AI co-founder said the platform is primarily used to scan school papers, while publishers scanning journalists’ and contributors’ work prior to publication has gained traction as the second most common use of the platform.

“Demand for AI detection is likely to grow outside of academia. We have a lot of publishers and employers looking to verify the originality of the content they publish,” Renaud told Fox News Digital.

The chief product officer of Turnitin, another company that detects AI-generated material, recently published a letter to the editor of The Chronicle of Higher Education arguing that AI-generated material is easy to detect.


Turnitin’s Annie Chechitelli responded to an article by a Columbia University student in The Chronicle of Higher Education claiming that “no professor or software could recognize” material submitted by students but actually written by a computer.

“In the first month that our AI detection system has been available to educators, we have flagged more than 1.3 million academic submissions, more than 80% of which may have been written by AI, alerting educators to scrutinize those submissions and use that information to help them make decisions,” Chechitelli wrote.

Students who assume that today’s technology cannot detect AI-generated school assignments are simultaneously betting that tomorrow’s technology won’t catch the cheating either, she added.

ChatGPT and OpenAI logos in an illustration from May 4, 2023. (Reuters/Dado Ruvic/Illustration)

“Even if you manage to slip past AI detectors or your professors, academic work lives on, which means you’re not just betting that you’re smart enough, or that your process is elegant enough, to fool today’s scrutiny. You’re also betting that there won’t be technology good enough to catch it tomorrow. Not a good bet,” she wrote.

Like Renaud, Chechitelli believes there will always be “clues” in AI-generated material, and she said detection companies keep devising new ways to expose it.


“We think there will always be leads,” she told The Guardian. “We’re looking at other ways to demystify it. Right now, we have cases where teachers want students to do something themselves to establish a baseline. Remember, we have 25 years of student data to train our models on.”

Chechitelli said there has also been a surge in Turnitin usage since the release of ChatGPT last year, and that teachers are putting more emphasis on deterring cheating than in previous years.

ChatGPT is a generative artificial intelligence chatbot that has recently taken the world by storm. (iStock)

“Every year there is a survey of the biggest teaching challenges teachers face. In 2022, ‘preventing students from cheating’ came in at No. 10,” she said. “Right now, it’s No. 1.”

In a College Rover survey of college students earlier this year, 36 percent of students said their professors had threatened to fail them if they were caught using AI in their courses. Some 29 percent said their university had issued guidance on AI, while a majority (60 percent) said they did not think their school should ban the technology outright.

Even amid concern that students are increasingly using artificial intelligence to cheat, some universities in the United States have begun embracing the technology, bringing it into the classroom to assist with teaching and coursework.


For example, Harvard University announced that it will use an AI chatbot to help teach the school’s flagship programming course this fall. David Malan, a professor of computer science at Harvard, said the chatbot would “support students through software and reallocate the most useful resources (humans) to help students who need it most.”


