Nine months after the chatbot sent schools into a panic, colleges are slowly learning to live with AI.
(Bloomberg) — ChatGPT set the academic world ablaze after it was introduced in November, when the AI chatbot suddenly gave students a hard-to-detect shortcut for completing essays and assignments. Nine months later, as a new school year nears, many universities are still crafting their response.
Colleges around the world spent much of the previous academic year adopting ad hoc approaches to the software — or no policy at all. Some professors banned its use outright, citing plagiarism concerns, while others sought to incorporate it more intentionally into their curricula. That led to inconsistent approaches across classes and departments.
The situation is only slowly changing now: Without clear guidelines that apply across departments, universities risk repeating the free-for-all they experienced during 2023 final exams. But many are realizing they need to find a way to live with artificial intelligence.
“It’s moving so quickly,” said Eric Fournier, director of educational development at Washington University in St. Louis. ChatGPT reached 100 million users in under two months, leaving academic officials in the dark as students latched on to the technology. “It went from curiosity to panic to a grudging acceptance that these tools are here,” he said.
From the outset, professors suspected that students were cheating, said Madison White, a student at Stetson University. “Without professors fully looking into the software, they often immediately assumed that it was a hack for students to get away from doing readings or homework.”
Generative AI tools like ChatGPT, developed by the Microsoft Corp.-backed startup OpenAI, are fed vast amounts of data and then use that training to answer users’ queries — often with eerie accuracy. The software represents one of the biggest shifts in the tech world in decades, bringing a trillion-dollar opportunity, which makes it all the harder for schools to ban or ignore it.
But professors and administrators seeking to integrate generative AI into their curriculums are left with a big question: How? They need to find the right middle ground, said Steve Weber, vice provost of undergraduate curriculum and education at Drexel University. Educators can’t completely prohibit use of the tool and neglect to teach it, but they also can’t allow its use with no constraints, he said.
“It may be a good tool to use in certain later courses, especially those that are preparing students for careers in industries,” Weber said.
One professor at Washington University structured his final exam so students would generate ChatGPT responses with a prompt and correct the text in a way that only a human well-versed in the topic could do. At the University of Southern California, business professors are experimenting with “TA chatbots” that will help answer logistical questions about the class syllabus.
Harvard University, meanwhile, relies on a duck-themed bot to answer student questions about its CS50 introductory computer science course. The “CS50 Duck” is designed to explain lines of code and advise students on how to improve their programming. Such tools could work for all sorts of university departments, said David Malan, a Harvard professor who teaches the CS50 course. For now, though, integrating AI into classroom work is mostly relegated to technical fields.
“I’m sure it will take time for folks to decide for themselves how they’d like to address, if not incorporate as well, these new tools into their classrooms,” Malan said.
In some cases, professor-approved AI is spreading beyond the computer lab. At the University of Pennsylvania’s Wharton business school, Ethan Mollick was one of the first educators to add an AI policy to his syllabus. The associate professor expects students to use AI and ChatGPT thoughtfully, while knowing the technology’s limits.
ChatGPT has helped make it clear that many students are just trying to pass classes to obtain their degree, said Arya Thapar, a rising junior at Chapman University. Left unchecked, the technology won’t foster a love of learning or build critical-thinking skills.
But universitywide policies have been slow to take shape. Drexel University is still hammering out its guidelines, but they’re expected to include the idea that students “don’t use it if it is not permitted, and if you do use it, then the usage must be cited,” according to Weber.
At Washington University and the University of Southern California, the use of AI in classrooms still remains within the professor’s discretion.
“The technology is evolving so quickly,” said Peter Cardon, professor of business communication at USC, “you really depend on the community to help you make informed decisions.”
But the uncertainty can create gray zones for students. If a professor doesn’t say anything about using AI in class, is it allowed — or could students face disciplinary action?
That makes AI a disruption unlike earlier classroom aids, such as calculators. “It feels more like a profound change,” Washington University’s Fournier said.
A student at Santa Clara University said ChatGPT single-handedly improved their grades in economics. The chatbot would generate answers the student didn’t fully understand, but that were good enough to earn full scores on problem sets and quizzes.
The student, who asked not to be identified because of the ethical questions surrounding ChatGPT, compared the situation to being a child of divorce: Each parent has different rules, and the guidelines become confusing without a unified approach.
A key step is to educate faculty on what ChatGPT actually can and can’t do, said Ramandeep Randhawa, senior vice dean for the USC Marshall School of Business.
“Our goal would be that we don’t think backwards like last semester,” he said. “Everyone is going to be racing against the clock continuously.”
©2023 Bloomberg L.P.