AI in the classroom

Computer Science Professor Proyash Podder, Ph.D, conducts a class in software engineering. Photo courtesy of Muhlenberg's Zenfolio.

Since the release of ChatGPT in late 2022, students have access to new ways to learn. ChatGPT is an online artificial intelligence (AI)-powered chatbot with the potential to transform education. AI software can provide students with nearly instant answers to questions, brainstormed ideas and even fully completed assignments. AI stands to fundamentally alter higher education, bringing both benefits and drawbacks. On one hand, professors can use AI tools to revolutionize their scholarship and teaching methods. On the other, AI can readily be used to violate the Academic Integrity Code, with many of its features allowing plagiarism to go undetected.

While other schools in the Lehigh Valley have publicly addressed the AI issue, Muhlenberg has yet to release its stance. Many institutions have drafted AI policies, including Moravian University in Bethlehem, Pa. The university has adopted a tiered policy, meaning that professors can incorporate one of four different strategies into their syllabi. According to a 69 News story on the policy, the tiers include “One: unless specifically told to use AI, it’s off limits. Two: use it as a supportive tool but do so ethically and responsibly. Professors would then dive into what that means. Three: have AI tutor you but not generate your work. Four: use AI freely, but acknowledge when you do so.”

Provost Laura Furge, Ph.D., noted that faculty are obligated to state their expectations in their syllabi. “It would make sense that that should include what your expectations are for AI [and] use of AI as well,” Furge continued, “But we are just in the infancy of developing policy…That’s all I know about it at this point. I’ve asked them to do it.”

Currently, Muhlenberg faculty members have a variety of policies for AI use in their classrooms. “My policy is to work with students to understand the limitations, biases and risks of AI tools developed by corporations. My policy is to center student voices in conversations about AI rather than begin from a position that assumes students are dishonest,” said Professor of Media & Communication and Dean of Digital Learning Lora Taub, Ph.D.

Ira Wolfe, adjunct professor in the organizational leadership and innovation and entrepreneurship programs, outlined his policy, saying, “I don’t use it in class but I encourage students to use AI as a writing, research and creative assistant. With that, I caution students that cut-and-paste will not be tolerated. While AI can be used, personalization and critical assessment is always necessary.”

Whether faculty members should be held responsible for catching their students’ AI use is a matter that draws a wide range of perspectives. Commenting on the use of AI detection programs, Furge said, “There are systems where you can feed papers into it and have it check for artificial intelligence. That is fraught with issues as well, because that paper that you’ve just put into the system now becomes part of its database. And you have taken someone else’s intellectual property and taken it out of their hands in ways that you don’t have permission to do.”

“I do not accept the logics of EdTech surveillance capitalism and I refuse to approach technology from the standpoint that students are dishonest, cheaters and lazy,” said Taub.

Professor of Psychology Jeff Rudski, Ph.D., offered a different view, saying, “I do check for the ‘formulaic bothsidery’ writing produced by AI, and if I suspect it’s been used I’ll try a few prompts that might produce a similar paper. My hope is that by permitting its use (and stipulating how it can be used), I can avoid most instances of academic dishonesty.”

Regarding a potential campus-wide AI policy, faculty members were supportive of the idea but suggested some stipulations. “I think students have a right to know how their faculty and administration are deploying AI tools and that they have a right to non-consent to the use of their data in AI contexts. I think faculty have a right to know how the administration is deploying AI and what partnerships and deals they are establishing or may establish with tech companies that profit from our data. Any policy should center student voice and agency and privacy and not turn students into free labor for training corporate AI language models,” said Taub.

“Yes, but there should be a great deal of latitude given to each professor,” said Rudski. “I see it as analogous to the ‘self-plagiarism’ policy…some of us are fine with some elements of that, like borrowing a paragraph here or there since it helps students see links across courses or disciplines, while some of us want students to create everything anew in every paper.”

Overall, Furge endorsed the idea of a faculty-led policy. “I would want for faculty to have the autonomy to state what their policies are. I think one of the things that would likely come out of our policy is that you have to state in your syllabus what your expectations are for [the] use of AI,” said Furge.
