Kate Crane's advice to professors worried about new artificial intelligence (AI) tools comes down to two simple words: don't panic.
An educational developer with Dal's Centre for Learning and Teaching (CLT), Crane says, "I would just want to remind professors that they should prepare, but they don't necessarily have to turn their worlds upside down."

Kate Crane, educational developer (Nick Pearce photo).
Sometimes it feels like everything, not just teaching, is being turned upside down by new generative AI tools.
Media are experimenting with articles written and illustrated by AI. Sometimes this goes wrong, as when Microsoft made the news for apparently AI-written travel articles that recommended Ottawa tourists visit a food bank "on an empty stomach." Meanwhile, search engines and blogging tools are incorporating AI assistants and chatbots, and photo-enhancing software is using AI to blur the lines between reality and touch-up more than ever.
And your feeds are likely full of AI images, some of them passing themselves off as real. (No, those babies are not really parachuting, and that isn't Van Gogh sitting on the front steps of his house at Arles.)
But what does it all mean for teaching, learning, and academic integrity? Does widespread adoption of ChatGPT mean the end of the essay as a meaningful evaluation tool? Are Dal's academic integrity officers about to be swamped? And should professors ban AI, incorporate it, or embrace it?
A pedagogical problem and an integrity issue
In an online workshop held earlier this year, Computer Science professor Christian Blouin said AI tools like ChatGPT represent "a pedagogical problem that has a short-term academic integrity issue – and we need to sort ourselves out very quickly." Dr. Blouin is Dal's institutional lead for AI strategy, and he says it doesn't make sense for a university with as many programs and disciplines as Dal to have one blanket policy on acceptable use of AI by students.
Related reading: Dal's AI lead aims to spark conversation and connection on our rapidly evolving information future (Dal News, July 25)
"In computer science, we're thinking about AI-driven tools differently than in engineering, for example," Dr. Blouin said in an interview. Meanwhile, in the arts and social sciences, "We are assessing critical thinking, but the medium through which we do that is writing."
The discussion around AI tools quickly draws us from specifics to big-picture questions: What are universities for? What is the purpose of assignments? What are we assessing and evaluating?
When it comes to essays, for instance, "The point is not so much that you wrote something, but the process of thinking, and the process of articulating what's underneath," Dr. Blouin says. "A tool is not an agent, it's not a person... People come to university so they can become citizens and professionals. And it's really important that we provide them with an education and give them an assessment of their abilities in making decisions, and reasoning through, and thinking ethically."

Jesse Albiston, pictured on the sofa wearing a cap, with colleagues from Bitstrapped and a robot dog.
AI in the workplace
Jesse Albiston (BComm'14) is a founder and partner at Bitstrapped, a Toronto-based consulting firm specializing in machine learning operations and data platforms. In short, they help companies figure out if and how they should be using AI.
The AI revolution has been good for Bitstrapped. Albiston says the company booked more work in the first quarter of this year than in all of the previous year. At the same time, he cautions against jumping on the AI bandwagon just because that's what everyone else is doing. When the firm is approached by clients who want to integrate AI into their workflows, "Half the time – maybe more than half the time – AI is not the right approach," he says.
At the same time, he thinks learning how to use these tools should be an essential part of a university education – at least in some fields – because they are going to be an essential part of the workplace.
"If someone graduates university today, they should be using these tools. You're not going to be replaced by AI. You're going to be replaced by people using these tools," Albiston says. "In my company, I have employees one or two years out of university who are using these tools, and their output is fantastic. They just need a bit of coaching on how it works."
But if they "just need a bit of coaching," is that something a university should be providing? Dr. Blouin is not so sure. He says graduates will definitely encounter AI integrated into tools like office suites. But universities should take a longer-term view, preparing students for careers that will last decades. (How many of us learned high school tech skills we never used again, because technology had moved on?) That means thinking beyond ChatGPT and related large language model (LLM) tools.
Even if professors do want to integrate tools like ChatGPT, Crane says they should proceed with caution. While she believes "experimentation is good," she notes that at Dal, instructors are not allowed to require students to use AI for coursework. Apart from any pedagogical concerns, "There are data privacy concerns," she says. The CLT says on its website that making the use of AI tools mandatory for a class contravenes Nova Scotia privacy law and Dalhousie's Protection of Personal Information Policy.
Process over output
English professor Rohan Maitzen, who teaches both literature and writing, feels "resentment towards the people who are propagating these systems on us without our permission." Teaching and learning writing is more about process than output, she says. And ChatGPT can't help with that. But because it offers the promise of producing passable essays quickly and easily, Dr. Maitzen says she and her colleagues are worried.
"We can't ignore the fact that this is a tool designed to take over the writing process," says Dr. Maitzen.
"Right from the moment you think, 'What am I even going to write about?' that begins your own individual process of figuring something out and putting your mind in contact with it. You can't outsource that work to a machine. It's an act of communication between you and the person you're writing for."
Dr. Maitzen has already received at least one assignment written by ChatGPT. One of the tool's well-known shortcomings is its tendency to make up information, such as false citations and inaccurate "facts." She assigned a reflection on a short poem and received an essay with one critical problem: "The quotations the paper included were not in the poem. They don't exist at all," Dr. Maitzen says. "So it wasn't a mystery looking at this paper – and looking at the relatively short poem that the paper was supposed to be about – there was just no correlation whatsoever."
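This particular fabrication is, in a simple case like a short poem, mechanically checkable: every quotation in the essay either appears in the source text or it doesn't. The Python sketch below is purely illustrative – the function name, sample poem, and essay text are all invented for this article and are not a tool Dal or Dr. Maitzen uses – but it shows how the failure mode surfaces.

```python
import re

def find_fabricated_quotes(essay: str, source: str) -> list[str]:
    """Return quoted passages in the essay that never appear in the source text."""
    def normalize(text: str) -> str:
        # Fold curly quotes to straight ones and collapse whitespace so
        # typography alone can't hide (or fake) a match.
        text = (text.replace("\u201c", '"').replace("\u201d", '"')
                    .replace("\u2018", "'").replace("\u2019", "'"))
        return re.sub(r"\s+", " ", text).strip()

    source_norm = normalize(source).lower()
    quoted = re.findall(r'"([^"]+)"', normalize(essay))
    # Flag a quotation when, ignoring case and surrounding punctuation,
    # it occurs nowhere in the source.
    return [q for q in quoted if q.lower().strip(" .,;:!?") not in source_norm]

# Illustrative example: the second "quotation" is not in the poem at all.
poem = "The fog comes on little cat feet. It sits looking over harbor and city."
essay = ('The poet tells us the fog arrives "on little cat feet," yet the '
         'claim that "night swallows every street" appears nowhere.')
print(find_fabricated_quotes(essay, poem))  # -> ['night swallows every street']
```

A plain substring check like this can't catch paraphrase or subtler invention, but it is enough to expose what Dr. Maitzen found by hand: quotations that exist nowhere in the work under discussion.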
Avoiding an arms race
This, of course, brings up the question of cheating and academic integrity. Bob Mann is the manager of discipline appeals for the university secretariat. He said the number of cases referred to academic integrity officers has "gone up dramatically" in the last few years, although that might be because of greater detection. Mann said sometimes students are deliberately cheating, but often they "are just trying to figure things out" and "inadvertently commit offences."
He expects AI tools to make him busier this year. "I call it Napster for homework," he says. But it won't necessitate a change in academic integrity rules. "Writing a paper using AI is not a specific offence we have on the books; a student is required to submit work that is their own. So the rules have not changed."
But determining what constitutes a student's own work has (with exceptions, like the AI-fabricated quotes Dr. Maitzen mentioned earlier) become harder. In terms of enforcement, Mann cautions against assuming students are using AI, saying he has seen cases where accusations proved to be unfounded. Students who struggle with English or who don't understand how to cite properly may be wrongly suspected of using AI when they have not.