Teaching & Learning with AI

Overview

As AI advances and reshapes sector after sector, it is crucial to reflect on its potential contribution to higher education.

AI Large Language Models, such as ChatGPT, Gemini, and Claude, are disrupting the workforce by providing increased productivity and impressive analytical, design, and communication tools. As a result, new career opportunities are emerging while others become obsolete. These models are expected to have a comparable impact on higher education. In light of this, CETL recommends that faculty approach AI in a thoughtful and nuanced manner.

To address these issues, CETL has established a variety of programs and workshops. Learn more and sign up here.

Creating a generative AI Policy

Cal State LA’s academic honesty policy is under revision to include AI, as the misuse of AI generally falls under standards regarding plagiarism and the misrepresentation of work. However, because of the growing ubiquity of AI tools and confusion regarding their use, it is a good idea to develop your own AI policy.

How detailed should your policy be? That depends on your stance toward AI.

Faculty who oppose all use of AI may need only a few sentences that: 

  • Identify your wholesale ban of AI.
  • Explain why you think this ban is important to the work you are doing and how use of AI can lead to the deterioration of hard-earned skills. 
  • Identify (with examples) the specific consequences of misuse. 

Faculty who encourage the use of AI in all circumstances would be advised to: 

  • Identify (with examples) how AI might be used. 
  • Explain why you think the embrace of AI is important to the work you are doing. 
  • Explain that AI plagiarizes from copyrighted sources, shows bias, makes things up, and fabricates entire sources. In the end, these tools should be fact-checked and used with careful deliberation since the student, not the AI, will be responsible for any errors. 
  • Clarify (and demand) that students cite AI tools with both references and in-text citations. 
  • Link back to the broader Cal State LA academic honesty policy. 
  • Be prepared to teach students how to use AI for their respective coursework.

Faculty who fall somewhere in between should: 

  • Identify (with examples) the uses and times you consider AI appropriate and the uses and times you consider AI inappropriate. 
  • Explain why you think your expectations are important to the work you are doing. 
  • Explain that AI plagiarizes from copyrighted sources, shows bias, makes things up, and fabricates entire sources. In the end, these tools should be fact-checked and used with careful deliberation since the student, not the AI, will be responsible for any errors. 
  • Explain that use of AI can lead to the deterioration of hard-earned skills. 
  • Identify (with examples) the specific consequences of misuse. 
  • Clarify (and demand) that students cite AI tools (when allowed) with both references and in-text citations. 
  • Link back to the broader Cal State LA academic honesty policy. 
  • Consider allowing students to co-create the policy so that there is greater student buy-in.
  • Be prepared to teach students how to use AI for their respective coursework.

Many sample policies and resources exist; these will get you started.

Pepperdine University has a nice decision-tree tool that can help you craft an AI policy, and they have generously made it freely available.

San Francisco State’s recommendations on AI Syllabus Policies include three sample policy statements: one focused on prohibition, one on transparency, and one on incorporation and critical AI.

Other resources to look at when crafting your policy include this Google doc on Classroom policies for AI Generative tools, created by Lance Eason and distributed through the AI in Ed listserv, and this Medium article entitled Update Your Course Syllabus for ChatGPT.

Finally, tools like ChatGPT, Gemini, and Claude.AI can help you draft your policy. But if you use them, tell your students. It’s only fair.

Addressing misuse of Generative AI

Unfortunately, misuse of Generative AI tools is common. The best way to prevent such misuse is to create meaningful assignments that students feel invested in completing and that build distinctly human skills, to have a clear Generative AI policy, to reward process over product, and to use rubrics that do not reward work an AI could do without human assistance. The following example of such a rubric comes from Bowen and Watson’s (2024) Teaching with AI: A Practical Guide to a New Era of Human Learning.

Performance levels: Absent (0%); AI Level (50%) = F; Good (80%) = B; Great (100%) = A

Thesis, Ideas, Analysis (20%)
  • Absent: There is no thesis or focus.
  • AI Level: The essay is focused around a single thesis or idea.
  • Good: The thesis is interesting and includes at least one original perspective.
  • Great: The thesis is original and there are compelling ideas throughout.

Evidence (30%)
  • Absent: Almost no detailed evidence to support the thesis.
  • AI Level: Some evidence may be missing, unrelated, or vague.
  • Good: Supporting evidence for all claims, but it is not as strong or complete.
  • Great: A variety of strong, concrete, and appropriate evidence with support for every claim.

Organization (20%)
  • Absent: There is little or no organization.
  • AI Level: There is a clear introduction, body, and conclusion, but some paragraphs need to be focused and/or moved.
  • Good: Each part of the paper is engaging with better transitions, but more/fewer paragraphs and/or a stronger conclusion are needed.
  • Great: Each paragraph is focused and in the proper order. Great transitions and the right amount of detail for each point. Introduction and conclusion are complementary.

Language, Maturity (10%)
  • Absent: Frequent and serious grammatical mistakes make meaning unclear.
  • AI Level: Writing is clear but sentence structures are simple or repetitive.
  • Good: The language is clear with complex sentences and varied structures but could be clearer and more compelling.
  • Great: Creative word choice and sentence structure enhance the meaning and focus of the paper.

Style, Voice (10%)
  • Absent: No sense of either the writer or audience.
  • AI Level: Writing is general with little sense of the writer's voice or passion.
  • Good: The essay addresses the audience appropriately and is engaging with a strong sense of voice.
  • Great: There is a keen sense of the author's voice and the writing conveys passion.

Citations (10%)
  • Absent: Material without citations.
  • AI Level: Good citations but not enough of them.
  • Good: All evidence is cited and formatted correctly and mostly from the best sources.
  • Great: All evidence is cited correctly and always from the best sources.
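To see how the rubric's weighting works in practice, here is a minimal sketch in Python. The weights and percentage levels come from the table above; the variable and function names are illustrative, not part of any official grading tool.

```python
# Minimal sketch of the weighted rubric above (after Bowen & Watson, 2024).
# Weights and levels come from the rubric; names here are illustrative only.

WEIGHTS = {
    "Thesis, Ideas, Analysis": 0.20,
    "Evidence": 0.30,
    "Organization": 0.20,
    "Language, Maturity": 0.10,
    "Style, Voice": 0.10,
    "Citations": 0.10,
}

# Performance levels expressed as percentage scores.
LEVELS = {"Absent": 0, "AI Level": 50, "Good": 80, "Great": 100}

def weighted_score(ratings):
    """Combine one level rating per category into a 0-100 score."""
    return sum(WEIGHTS[cat] * LEVELS[level] for cat, level in ratings.items())

# The rubric's core idea: an essay that performs at the "AI Level" in every
# category earns exactly 50% -- a failing grade.
print(weighted_score({cat: "AI Level" for cat in WEIGHTS}))  # 50.0
```

Because the weights sum to 1.0, uniformly "AI Level" work lands exactly at 50%, which is the rubric's design intent: a paper an AI could write without human assistance should not pass.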

Proceed with caution. Unless the student admits to cheating, you will struggle to prove it. However, you can follow these steps to try to get a better understanding of what happened.

  • Review the submission carefully (especially citations, which AI frequently invents). 
  • Prepare feedback. Typically, AI-generated writing is quite general and does not give specific textual evidence; a strong rubric that considers AI (see above) can help you here. 
  • Have a conversation with the student. Ask specific questions about the assignment. See if the student can talk about what they wrote. If they cannot, probe deeper to find out what happened. 
  • Proceed based on the expectations of the AI policy you’ve created.

AI Detection Tools - Considerations & Limitations

Alas, you really cannot detect AI-generated writing with one hundred percent certainty. The best way to spot AI usage is to play around with the major AI models yourself so that you can better understand their strengths and weaknesses.

There are AI detection tools, but they both under- and over-identify AI writing. There are also third-party programs and services that take AI writing and re-paraphrase it so that it seems human. Furthermore, AIs have been known to take credit for human writing. AI detection tools have also:

  • Identified original student writing as having been written by an LLM 
  • Been free one day and costly or unavailable the next 
  • Shared both the faculty members’ data and the data of students 
  • Readily identified writing from one LLM (such as ChatGPT) but not another (such as Claude). 
  • Readily identified writing from one LLM iteration (such as ChatGPT 3) but not an updated one (such as ChatGPT 4) 
  • Created an LLM arms race that shifts faculty attention from teaching to policing LLM use

Bottom line: it is very hard to prove that something has been written by an AI. Even if the detector is right, students might claim that it is wrong, and there won’t be a lot that you can do. 

To judge the effectiveness of AI detectors for yourself, experiment with them using writing samples whose origins you already know.

Turnitin made AI detection temporarily available as a feature preview in Spring 2024. The AI detection feature is now a paid add-on, at a significant additional cost to the campus; when the campus renewed its annual contract, Turnitin deactivated the feature on June 1, 2024.

At this time, the campus does not have plans to separately purchase the AI detection feature in Turnitin. Turnitin has been less than forthcoming regarding a higher-than-expected false positive rate, first reported at 1%, but later revised to greater than 4%. Please see the Inside Higher Ed article Turnitin’s AI Detector: Higher-Than-Expected False Positives. Additionally, a recent study (Liang et al., 2023) from researchers at Stanford University indicated that AI detectors are susceptible to bias against non-native English writers, increasing the risk of false-positives for already marginalized learners.
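The practical weight of that greater-than-4% false-positive rate becomes clear with a little base-rate arithmetic. In the sketch below, the class size, the share of AI-written submissions, and the detector's sensitivity are illustrative assumptions, not data from the reporting; only the 4% false-positive figure comes from the article above.

```python
# Back-of-the-envelope base-rate arithmetic for an AI detector.
# All inputs except the false-positive rate are illustrative assumptions.
submissions = 500            # essays scored in a semester (assumed)
ai_share = 0.10              # fraction actually AI-written (assumed)
false_positive_rate = 0.04   # Turnitin's revised figure (>4%), per the article
true_positive_rate = 0.90    # assumed detector sensitivity

ai_written = submissions * ai_share
human_written = submissions - ai_written

flagged_human = human_written * false_positive_rate  # innocent students flagged
flagged_ai = ai_written * true_positive_rate

# Of all flagged essays, what fraction were actually AI-written?
precision = flagged_ai / (flagged_ai + flagged_human)

print(round(flagged_human))  # 18 innocent students flagged
print(round(precision, 2))   # 0.71
```

Under these assumptions, roughly one in four flagged essays is a false accusation, which is why a detector score alone cannot "prove" a case to an independent observer.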

Other notable universities have made the same decision.

The Cal State University Directors of Academic Technology group has drafted the following statement:

Statement on AI Detection Platforms from the CSU Directors of Academic Technology

“Generative artificial intelligence (AI) is a rapidly developing technology with significant implications for traditional college education. In the short term, it is important to keep in mind that the ability to detect AI-generated content is currently not 100% reliable. Unlike traditional forms of plagiarism detection—which document the original source of the plagiarized text as evidence of academic dishonesty—no such evidence currently exists for AI detection platforms. While some AI detection platforms claim up to 99% accuracy, even a 1% potential false-positive or false-negative rate presents a considerable challenge to enforcing academic integrity because the instructor cannot "prove" their case to an independent observer. Additionally, a recent study (Liang et al., 2023) from researchers at Stanford University indicated that AI detectors are susceptible to bias against non-native English writers, increasing the risk of false-positives for already marginalized learners. Suggested strategies include constructive discussions between students and faculty, as well as clear communication about expectations related to AI use for assignments and assessments via course syllabi and assignment instructions.”

Redesigning Assignments & Assessments

Redesigning assignments is tough because the technology is changing and advancing rapidly. Assignments that cannot be done with AI this semester may be easy for AI to complete next semester. Here are some tips for now:

  • Get an in-person writing sample the first week of class so you establish a baseline.
  • Require students to maintain a detailed revision history of their writing in Office 365 Word or Google Docs.
  • Ask students to synthesize (not summarize) multiple SPECIFIC sources at once. 
  • Ask students to draw out or create a graphic representation of information. Then have them reflect in writing on how/why they created their graphic representation. 
  • Personalize the assignment so students must incorporate their own experience. 
  • Assign work that is presented live or that must be performed, such as a live Q & A, a TikTok-style video, a live debate, or a no-notes presentation. 
  • Have students analyze original research or physical objects that you bring into class. 
  • Have students analyze lesser-known or unpublished case studies.
  • Incorporate your lectures or other unpublished texts into the assignment.
  • Create assignments that are personally meaningful to students.
  • Make problem-solving social. 
  • Lean into the now (podcasts, multimedia submissions, TikTok, etc.). Those are career-relevant! 
  • Have students make board games or other physical objects that show a deep and flexible understanding of a topic. 
  • Partner with community members, other classes, your department, the library, and other offices on campus to solve a problem that your disciplinary problem-solving methods could address.
  • Ask students to think about their thinking (metacognition). What do they know now that they didn’t know before class? What questions do they still have? What answers do they still need? What is the strength of this output? What is the weakness? 
  • Focus on reading with AI tools like Perusall (a Canvas External Tool) and Packback, which help students slow down and engage with content. 
  • Be creative! You are the human! Use your human creativity to reinvent what assignments and assessment can look like!

Enhancing Teaching & Learning with AI

Professionals will likely use LLMs more and more, and there is nothing wrong with allowing students to use them so that they can gain this career-relevant skill. But before you have students collaborate with and use generative AI, make sure they understand these tools’ strengths and weaknesses and know how to fact-check the information an AI provides. Here are some ideas: 

  • Question the AI (Prompt engineering). Collaborate with the AI by taking an assignment prompt and asking increasingly specific questions until you come up with a draft that you can tinker with. Have students turn in a record of the process and outcomes. 
  • Fact-check with the AI. The AI makes mistakes. Give the AI a prompt and have students (alone or in groups) fact-check the output, especially the sources. 
  • Evaluate with the AI. Give the AI a prompt and have students (alone or in groups) evaluate the strengths and weaknesses of the AI. 
  • Revise the AI. Give the AI a prompt and have students (alone or in groups) revise the AI with details/examples from their lecture notes. 
  • Challenge the bias of the AI. Give the AI a prompt and have students (alone or in groups) identify the biases with which the AI presents information. 
  • Research with AI. Have students ask the AI (try Elicit or Research Rabbit, generative AIs that can help you find published research) for the 10 most cited/influential/whatever peer-reviewed sources on a topic. Then have students find those articles in the library databases or on Google Scholar and have them find 10 more sources from the reference pages of the sources they found. Have them reflect on this process and the insights they gained. 
  • Assess with the AI. Have students select the criteria and make a rubric for an assignment. Debate and revise for a common, agreed upon rubric. 
  • Reflect with the AI. Have students do anything with the AI and then reflect on their processes, insights, experiences, new knowledge. 
  • Consider audience awareness with the AI. Have the AI produce a story/essay/marketing plan/whatever using the “voices” of different stakeholders. Then have students try to understand how/why those voices differ. Revise to make the voices more authentic. Reflect on the process. Or then have students write in their own authentic voice to add to the mix. 
  • Brainstorm or outline with the AI. Have the AI brainstorm a prompt and then have students reflect on the strengths/weaknesses/biases of the AI. 
  • Problem solve with the AI. Use prompt engineering to devise feasible solutions to a problem your course deals with. Have students evaluate, challenge, and revise the solutions. 
  • Create with the AI. Use prompt engineering to create an illustration/prototype/etc. Then have students reflect on why they made their creation the way they did and explain what their creation is trying to say. 


Next Steps

At the end of the day, the higher-order thinking students are asked to do will not change in the age of advanced AI. If anything, traditional ways of learning (like reading and writing) will become more important as they will always offer outstanding paths to gaining deep knowledge, becoming more metacognitive, and building the very human traits of self-knowledge, empathy, and perspective taking. Lean into that and let the classroom become the place where these things happen. Help students understand that they should invest in the skills that will help them manage AI, or else they will likely be managed by the AI. 

Bloom’s Taxonomy of Learning (below) has long helped faculty members and learning designers think about learning processes and about what they are asking students to do. 

This taxonomy is still useful, but it might be helpful to now consider the distinctly human skills that humans still bring to the table. This revised Bloom’s taxonomy from Oregon State (below) identifies what AI can do, what humans can do, and what kind of assignments you might need to revise or amend so that you can foster distinctly human skills. Think about this graphic as you begin to reimagine what learning in your classroom can look like in this new age. 

https://ecampus.oregonstate.edu/faculty/artificial-intelligence-tools/blooms-taxonomy-revisited.pdf 

 

If you have any questions, please feel free to email [email protected] for more information.

 


Additional Resources

Interested in more information and tips on AI and academic integrity? Please click the link below!