Teaching with Generative AI

 

The impact of Generative AI (GenAI) on Higher Education is being greatly felt in the classroom, where faculty are being called to reconsider their course learning objectives, policies, assignments, and more, in light of technological change. 

Please read below for guidance and materials to help you navigate teaching in the age of AI.


Guiding principles about AI

  • At CEETL, we approach AI and emerging technologies as tools that may enhance faculty’s teaching and scholarship, and may increase equity, accessibility, and engagement for students. 
  • We center faculty expertise, student engagement, and principles of equity and social justice. 
  • We recognize that disciplines may vary in their approaches to technology. 
  • We aim to assist faculty in teaching students about technology, teaching students how to use technology, and protecting students’ learning where necessary from the shortcuts promised by new technologies.   
  • We center critical perspectives and focus on the potential and limitations of various technologies, as well as the larger social, environmental, labor, and equity impacts.   

Current Workshops for Faculty Development


AI Workshop: AI Tools for Instructors

Can AI tools improve your teaching life? This interactive workshop will help you discover whether AI tools can enhance instruction, course materials, and student engagement. Learn about, explore, and critique new AI tools marketed to faculty.


Building AI Skills and Knowledge into the Curriculum

Where does AI fit in your curriculum? Hands-on workshop to revise SLOs and assignments to give students practice in using AI tools and to develop their knowledge of the potential and limitations of these tools. 


Critical AI in the Classroom - November 19th

Do AI tools promote bias? Are they exploitative? Environmentally harmful? Learn about the field of Critical AI. Hands-on workshop designed to wrestle with the ethical, social, and cultural implications of new technologies, and to discuss best practices in developing students’ critical AI literacy.

Guidance for Faculty

Get Started with AI

This section provides an introduction to AI tools and how they can support your teaching.

If you are just getting started with AI, we suggest first learning about AI, including its technical and ethical aspects, and then learning how to use it. 

At SF State, there are a number of resources to support this initial learning. We encourage you to:

  • read through this page to develop a high-level understanding of AI. To dive deeper into AI and teaching, please review our AI Guidance document.
  • attend an Academic Technology and/or ITS workshop on prompting and using Microsoft Copilot. See upcoming events at https://ai.sfsu.edu/ai-events.
  • join an upcoming AI workshop at CEETL to discuss AI and pedagogy and to learn strategies for the classroom. You can also review past workshop documents, summaries, and recordings on this page.

While we suggest many promising uses of AI on this page, we strongly encourage faculty to consider the ethical and practical limitations of AI tools, especially in higher education. AI presents numerous ethical concerns; in the section “Center Critical & Responsible AI in your Work” below, we describe ten distinct ethical concerns related to AI and point to further resources on the ethics of Generative AI.

In addition to ethical concerns, AI has practical limitations. GPT-based systems mimic “observed patterns in language probabilistically” (Teaching Critical AI @ Rutgers). AI does not “know” or understand what it produces, and it can introduce misinformation, called “hallucinations,” into its responses. Even when an AI writing tool doesn’t hallucinate, its probabilistic output may not reflect what you intend, or may be subtly culturally biased.
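
To make the “probabilistic” point concrete, here is a minimal, purely illustrative sketch in Python (not a real LLM; the prompt and the word probabilities are invented for illustration). A language model scores possible next words and samples one; nothing in the mechanism checks the output against reality, which is why fluent but false answers are possible.

```python
import random

# Hypothetical toy distribution over next words after the prompt
# "The capital of Australia is" -- the numbers are invented.
next_word_probs = {
    "Canberra": 0.55,     # correct continuation
    "Sydney": 0.35,       # fluent but wrong -- a "hallucination" if sampled
    "Melbourne": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling picks a continuation in proportion to its probability;
# nearly half the runs here would assert the wrong capital.
print("The capital of Australia is", random.choices(words, weights=weights, k=1)[0])
```

Real LLMs do this over tens of thousands of tokens at every step, which is why their output is plausible-sounding by construction but not guaranteed to be true.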

The brand of AI tool that you use (e.g., Copilot, ChatGPT, Claude) and whether it is a paid version will also affect the output. We highly recommend critically examining outputs and using our university-licensed access to Microsoft Copilot, which has enhanced data privacy protections. You can learn how to access Copilot in this useful guide created by Academic Technology.

To learn more about AI’s practical limitations, we suggest Trott and Lee’s layperson’s overview of how LLMs work: https://www.understandingai.org/p/large-language-models-explained-with

Much of the recent discussion on AI in higher education has focused on student uses of AI, but there are many ways instructors can use AI to assist their own instruction. Here are a few suggestions on using AI tools to enhance your teaching: 

  • Generate new activity ideas for a lesson you have taught before 
  • Generate a lesson plan based on your learning outcomes, which you could use as a first draft to build upon and modify  
  • Create ideas for an assessment, which you can then refine 
  • Develop a grading rubric (check here for guidance)
  • Create modules based on your course description
  • Brainstorm discussion questions and inductive lessons   

You can find more ideas in Dr. Cynthia Alby’s AI Prompts for Teaching: A Spellbook. You can also borrow from CEETL a copy of José Antonio Bowen and C. Edward Watson’s book on teaching with AI, which has many example use cases; to borrow it, please send an email to ceetl@sfsu.edu.

If you bring AI into course preparation, understand that AI is not an expert in any discipline, nor can it consider your course context. AI outputs are a starting point for further inquiry and development, especially for teaching materials. We recommend viewing the past workshops tab in this section, which includes summaries of and links to our Fall 2023 and Fall 2024 workshops on AI Tools for Faculty.

AI Tools for Faculty, Fall 2024

The Fall 2024 workshop, “AI Tools for Faculty,” focused on the practical use of AI tools to enhance teaching materials and increase student engagement, as well as on the benefits and limitations of these tools. Facilitator Jennifer Trainor shared guiding principles, demonstrated how to use and critique AI to develop course materials, and led a discussion in which faculty shared how they use AI in teaching. Some key takeaways are:

  • Guiding Principles: 1) AI technologies can enhance equity, accessibility, and engagement but should not automate critical faculty work or encourage shortcuts in student learning.  2) The suitability of AI tools varies by discipline and context.  3) An open but critical approach to AI is essential, balancing its potential with awareness of social, environmental, labor, and equity impacts. 
  • Demonstrations of Benefits & Limitations: Copilot assisted with brainstorming, creating unit plans, handouts, and gamification strategies, but it was shown that these tools require faculty oversight and customization, and that AI-generated outputs may carry inherent biases. 
  • Discussion of Generative AI: Faculty shared their uses of AI for teaching, and observed how the AI output depended on computing resources and prompts, and they considered how customizing AI with institutional data might be useful.

View the Fall 2024 AI Tools for Faculty Recording for the full workshop content.

 

AI Tools for Faculty, Fall 2023

This Fall 2023 workshop focused on exploring the use of ChatGPT and other AI tools in teaching. Participants shared how they used Generative AI to draft outlines for lessons, generate thank-you notes or recommendation letters, and summarize complex information. Some lessons discussed in the session were:

  • AI is a Starting Point: Faculty found ChatGPT useful for breaking creative blocks and drafting outlines.
  • Customization is Essential: AI-generated content lacks personal voice and context, and AI is prone to hallucinations, requiring instructors to check for misinformation and personalize to their course and students. 
  • Privacy Concerns: Participants raised concerns about inputting personal information or student data into AI systems.
  • Learning vs. Efficiency: Faculty worried that over-reliance on AI might undermine the learning process by prioritizing fast outcomes over deeper engagement with material.
  • Ethical Guidelines Needed: Faculty emphasized the need for clear ethical guidelines in AI use, including ensuring students can opt out of AI-based assignments and acknowledging AI contributions in any scholarly or educational context.

View the Fall 2023 AI Tools for Faculty Recording for the full workshop content.


Center Critical & Responsible AI in your Work

In this section, you will learn about the ethical concerns surrounding AI and how to take a JEDI approach to AI use.

To take a JEDI approach to AI, you should consider reading and discussing many of the ethical concerns addressed in the following section. This might include:  

  • Exploring the exploitative labor practices behind AI companies, as well as how AI creates new forms of labor
  • Examining how AI companies scrape freely accessible sources and then use paywalls to restrict the use of this material
  • Uncovering how apps students may use, like Grammarly or ChatGPT, can be predatory
  • Using ChatGPT with students (see Teaching with AI) to show how it produces “hallucinations” that appropriate cultural identities and rhetorics
  • For more tips, check out the SF State AI Guidance Document, Section 3, on Strategies for AI in the Classroom 

You might consider the following questions as you develop a JEDI approach to AI use:

  • What does critical AI mean to you? 
  • Which ethical concerns related to AI are most relevant to your work? 
  • How can we implement JEDI principles at SF State and mitigate these ethical concerns about AI?

“In the pursuit of educational excellence, we must remember that AI, like any powerful tool, can be a double-edged sword. As we embrace JEDI (Justice, Equity, Diversity, and Inclusion) principles, it is imperative that we scrutinize the impact of AI on the digital divide. Are we bridging gaps or exacerbating disparities? Let us not just employ AI in education, or conversely, completely write off the use of AI in the classroom, but consider its uses and limitations with intention and empathy, ensuring that every student is set up for academic success.” 

–Dr. Kira Donnell, JEDI Faculty Director at CEETL and Lecturer Faculty in the Department of Asian American Studies

Did you know that SF State offers a Graduate Certificate in Ethical Artificial Intelligence? 

Read more here: https://bulletin.sfsu.edu/colleges/science-engineering/computer-science/certificate-ethical-artificial-intelligence/ 

While there are many ethical issues related to AI (see Montemayor, 2023), we highlight AI’s capacity for bias, discrimination and data privacy. These issues affect faculty and students, and certain uses of AI may even violate FERPA regulations.

Algorithmic systems reproduce the biases in their training data. The AI image generation tool Midjourney, for example, has been shown to produce biased results: when prompted to generate profile photos for professors in different departments, the majority of images were of white professors, mostly male (Growcoot, 2023). 

When you engage with Generative AI, it may use the interaction to train its LLM. This presents a threat to your own privacy and, if you are working with student data, to your students’ privacy. We recommend that you:

  • Never input student data into a Generative AI tool
  • Use your SF State Microsoft Copilot license, which has enhanced data privacy protections. You can learn how to access Copilot in this guide created by Academic Technology.

The Spring 2024 Critical AI workshop introduced the concept of Critical AI and explored its implications in the classroom. It focused on three core themes: the ethical, social, and cultural dimensions of AI; strategies for integrating AI in educational practices; and approaches to fostering critical thinking about AI. Some key takeaways are:

  • AI has the potential to reduce educational inequities, provide personalized learning, and improve teacher efficiency, but it also risks dehumanizing education and perpetuating biases.
  • Both students and instructors should use AI to enhance critical thinking, personalized learning, and ethical awareness, while questioning its limitations and accuracy.
  • Critical AI literacy should be taught to help students and educators assess AI's ethical implications and understand its potential biases and transparency issues. 
  • Teaching Critical AI involves critical thinking, encouraging skepticism and questioning; critical pedagogy, a focus on social justice and inequality; and critical literacy, analyzing and critiquing AI for potential harms and inaccuracies. 
  • Academic integrity should focus on teaching, not policing, and integrating AI tools as part of professional learning.

Please view the Spring 2024 Critical AI Workshop Slides.

To adopt a critical stance toward AI, it is important to first understand the ways that AI reinforces systemic biases and harms. The following list introduces an array of ethical concerns relevant to higher education that AI scholars have identified; these categories are adapted from Leon Furze’s (2023a) list of ethical concerns and Adams et al.’s (2023) list of ethical concerns for education.


Algorithmic systems are only as unbiased as the data they are trained on, which is to say that algorithmic systems like Gen AI are biased and discriminatory. The AI image generation tool Midjourney, for example, has been shown to produce biased results: when prompted to generate profile photos for professors in different departments, the majority of images were of white professors, mostly male (Growcoot, 2023). Generative AI “indiscriminately [scrapes] the internet for data,” such that its dataset likely contains “racist, sexist, ableist, and otherwise discriminatory language,” which then produces “outputs that perpetuate these biases and prejudices” (Furze, 2023b). In addition to biased data, other forms of bias and discrimination impact Generative AI: the design of the AI model itself, unjust applications of AI outputs, and real-world forms of bias and discrimination.

  • Even though AI is digital and may feel like it has no environmental impact, it in fact requires “rare earth minerals and metals” and large data centers to operate. A study by the University of Massachusetts Amherst found that training a single large language model can emit nearly five times the lifetime carbon emissions of the average American car (Hao, 2019)! 
  • If you would like to explore how to address this ethical AI concern further in your work or your course, you might consider the resources at SF State Climate HQ. Faculty, don’t miss the faculty learning community led by Assistant Professor Carolina Prado.
  • Perhaps one of the most salient ethical concerns on higher education campuses is how Gen AI impacts truth and academic integrity. AI language models can produce false or “hallucinated” information, including deep fakes, and they can be used to author content for the Gen AI querier. Please see our suggestions for how to teach with AI and our AI Guidance document for how to minimize such uses in your classroom. 
  • Another ethical concern for instructors to consider is AI detection software. Such tools are inaccurate and only growing more so as AI development outpaces detection tools. Turnitin’s AI-detection feature has a 4% false positive rate (Chechitelli, 2023). During AY 2022-2023 at SF State, over 86,000 assignments were run through Turnitin; with a 4% false positive rate, nearly 3,500 assignments may have been falsely flagged as AI-generated. AI detection is also significantly more faulty for English language learners: a recent study by computer scientists at Stanford found that AI detectors had a staggering ~60% false positive rate for papers written by English language learners (Liang et al., 2023; Myers, 2023).

The content used to train AI is scraped from the web; consequently, the work of authors and artists available on the web was used without consent. We must ask: when AI produces an image or text, whose composition or voice is being exploited? To whom can we attribute authorship: the Gen AI user, the machine, or someone or something else? Currently, US copyright law does not extend copyright protection to images created by GenAI (Prakash, 2023).

 

The training data for Gen AI includes personal information. Much, if not all, of what we do online is “data-fied,” that is, turned into datapoints that measure, classify, and compare us as users for the purposes of advertising, surveillance, and other business interests. Like the authors and artists whose work was, and is, exploited to train Large Language Models (LLMs), individual users did not consent to having their data train such AI models. Consider your own privacy, and that of your students and colleagues, when using Gen AI.

Technology advocates often position technology as leading to greater efficiency and the automation of human labor. In reality, Gen AI creates new forms of labor, and the training of LLMs required the exploitation of a “global underclass” (Gray and Suri, 2019; see an interview with Mary Gray here). In education, digital technologies have often led to new forms of labor, such as increased pressure to document work and student learning outcomes, and this pressure is often differentiated across faculty rank, gender, and race (Selwyn et al., 2018).

There is considerable financial investment, including in the education sector, in developing AI that analyzes and classifies human facial expressions according to affect. Such tools raise concerns about student privacy and data rights, the potential for bias and discrimination, and surveillance. Consider, for example, how bias might factor into such tools: proctoring software that uses facial recognition algorithms has already been shown to discriminate against students of color, and in particular women with darker skin tones, based on its training data (Yoder-Himes et al., 2022). What cost might there be to students, particularly students of color, if educational AI tools developed to recognize affect, to gauge engagement for example, misclassify their affect?

The digital divide is an issue of both access and literacy. Access to technology, including appropriate devices, a stable internet connection, and widely used software, is increasingly required to participate in schools and broader society in today’s digital age. AI, too, spreads unequal “benefits and risk within and across societies” (Mohamed, Png & Isaac, 2020, p. 661); in the age of AI, access to paid versions of AI tools, as well as literacy in using Gen AI, will likely become increasingly important to students and their careers. As you determine your own course policies on AI, consider how AI literacy might be important to your students and your field in the next 5 to 10 years.

Just because AI can be used for an assignment or in your course does not mean that it should be. According to SF State Professor Jennifer Trainor, educators should consider whether the use of AI supports existing learning goals, develops students’ information and critical AI literacy, and promotes students’ sense of agency and confidence in their own voice and human judgment. In addition, students should have the option to opt out of AI use. You might consider the pedagogical appropriateness of AI by reviewing questions from the EdTech Audit.

Each of these ethical concerns speaks to how AI is a site of power struggle. The design and dataset are places in which worldviews and biases are encoded into AI, and the application and interpretation of AI outputs can have systemic ramifications. In addition, developing and maintaining AI is costly, meaning that “powerful AI is increasingly concentrated in the hands of those who already have the most” (Furze, 2023a).


Bring AI into the Classroom

In this section, you will find material to support you in integrating AI tools into your teaching.

There are at least three broad questions instructors should take up in designing syllabus policy statements on AI: Where should AI use be prohibited to protect student learning? What should students learn to do with AI, and how do you want them to document their use? What should students learn about AI, including how it works and its limitations, in your class? These approaches are exemplified in the syllabus policies created by SF State Professor Jennifer S. Trainor for writing courses. You can find example policy statements for other disciplines, such as the arts, business, computer science, and the sciences, in this crowdsourced list of syllabi policies from other institutions.

We recommend that, in addition to crafting a syllabus policy, you engage students in discussing the ethics of AI and academic integrity and honesty. We also strongly advise against using AI detection software because it is unreliable and biased against multilingual students (see Myers, 2023). 

Please read the SF State AI Guidance Document, Section 2, AI Syllabus Policies. There you will learn three different approaches to crafting your syllabus policy statement. 

Faculty at SF State are using AI to enhance student learning and engagement. Some of their strategies include:

  • Asking students to rewrite or critique a paper that AI produced 
  • Using AI to critique student writing and suggest revisions, then asking students to analyze the suggestions: do they agree with them? Why or why not? 
  • Asking students to use Microsoft Copilot to come up with discussion questions, and then having students rank and rewrite the questions as needed 
  • Asking students to debate Microsoft Copilot
  • Creating role-playing scenarios for students 
  • Generating 3 solutions to a problem using AI and having students rank the solutions and explain the reasoning behind their ranking 
  • Querying 2 different AIs to write a paper and then having students evaluate and analyze the strengths and weaknesses of each paper 
  • Exploring the limitations of AI with Oregon State’s revised Bloom’s Taxonomy, which distinguishes between AI capabilities and distinctly human skills at each level of thinking 
  • For other examples, check out Jason Johnston’s LinkedIn post on AI Assignment Flipping and this crowdsourced slide deck of how teachers around the world use AI with their students 

However you plan to address AI in your course, whether that is to disallow or to incorporate AI, you should remain grounded in how students think and feel about AI and how they use it. For example, Chan & Hu (2023) found that students were concerned about data privacy, ethics, and the impact of AI on their personal and professional development. Similarly, Inside Higher Ed recently reported that over half of students surveyed did not want to engage with AI at all, citing a host of concerns. These findings stand in stark contrast to the all-too-common view that all students are using AI, and using it inappropriately, and they show a need for instructors to discuss AI with their students. 

So, how might you talk to your students about AI? Here are some suggestions:

  • Have conversations with your students about Academic Integrity at SF State throughout each semester, including the purpose and importance of transparency in knowledge construction.  
  • Emphasize to students how AI may or may not be used in your class. Consider adding a statement about Generative AI usage in your syllabus. 
  • Encourage students to ask you questions about your Gen AI policy and additional questions about Gen AI usage throughout the term (University of Pittsburgh, 2023; Autumn Caine). 
  • Discuss best practices in areas of authorship, attribution, intellectual property; help students understand that they are responsible for the material that comes under their byline. 
  • Help students understand how AI shortcuts may harm their learning and negatively impact their future. Discuss the skills they will need in their futures, how your class helps develop them, and where AI does and does not fit in. 

Whether you are creating assignments that integrate AI tools or looking to make your assignments “AI resistant,” creating buy-in and purpose will be paramount. For guidance on how you might create assignments that teach students about AI, help them learn AI skills, and/or protect student learning from AI shortcuts, please follow our guidance document here. 

Just because AI can be used for an assignment or in a course does not mean that it should be. SF State Professor and CEETL Faculty Director Jennifer Trainor asks us to consider whether the use of AI supports existing learning goals, develops students’ information and critical AI literacy, and promotes students’ sense of agency and confidence in their own voice and human judgment. These considerations are essential as AI has the potential to harm the development of foundational skills and learning processes, and it is inherently biased toward White, Eurocentric ways of knowing, being and communicating (see for example Jenka’s (2023) Medium article on AI and the American Smile). 

Below we have outlined a few questions that you can consider to critically approach integrating AI into your course or curriculum:

  • Would you integrate AI tools into your own teaching practice, and if so, how? 
  • How will you discuss AI in your classes? 
  • How might you share your own encounters with AI with your students, including the pros and cons you experienced? 
  • How might you accentuate AI ethics and student responsibility around using AI in learning? 
  • Would you demonstrate how AI tools can be used to develop better understanding of texts, to build on rough drafts, and to shape comprehension of complex processes?
  • What framework would you consider when bringing AI tools into your courses and curriculum? 

To learn more about a suggested framework for responsible use of AI, please view CEETL’s Fall 2024 Workshop on Building AI Skills into Your Curriculum. 


Protect Learning in the Age of AI

In this section, you will learn how to safeguard student experiences and learning in your classroom. You will also find suggested ways to guide students in maintaining academic integrity in the use of generative AI.

A leading concern for SF State faculty around Generative AI is academic integrity. CEETL recommends that faculty take a teaching approach rather than a policing approach. A focus on teaching emphasizes ways to deter inappropriate uses of Generative AI, such as building students’ understanding of the purpose of your course and creating student buy-in.

Another reason we encourage a teaching approach is that AI detection tools are inaccurate and discriminatory. Turnitin’s AI-detection feature has a 4% false positive rate (Chechitelli, 2023). During AY 2022-2023 at SF State, over 86,000 assignments were run through Turnitin; with a 4% false positive rate, nearly 3,500 assignments may have been falsely flagged as AI-generated. AI detection is also biased against English language learners: a recent study by computer scientists at Stanford showed that AI detectors had a staggering ~60% false positive rate for papers written by English language learners (Liang et al., 2023; Myers, 2023).
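
To see where the “nearly 3,500” figure comes from, here is the back-of-the-envelope arithmetic as a short Python sketch. The assignment count and false positive rates are the figures cited above; nothing else is assumed.

```python
# Estimate of falsely flagged assignments at a 4% false positive rate,
# using the AY 2022-2023 SF State Turnitin volume cited above.
assignments_run = 86_000
false_positive_rate = 0.04           # Turnitin's reported rate (Chechitelli, 2023)

falsely_flagged = assignments_run * false_positive_rate
print(f"Estimated false flags: {falsely_flagged:,.0f}")   # -> 3,440, i.e. nearly 3,500

# For comparison: the Stanford study found detectors misflagged roughly
# 60% of papers written by English language learners (Liang et al., 2023).
ell_false_positive_rate = 0.60
print(f"Chance an ELL paper is misflagged: {ell_false_positive_rate:.0%}")
```

The point of the arithmetic is that even a seemingly small error rate, applied at institutional scale, implies thousands of students potentially accused in error.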

Instead of using AI detection tools, we recommend the other teaching strategies outlined in CEETL’s AI Guidance Document, Section 1, on Academic Integrity.

The Academic Senate approved a resolution on Generative AI in Teaching and Learning in May 2024. You can read the full resolution here: https://sfsu.policystat.com/policy/16405204/latest/.

Highlights from this resolution include:

  • Encouraging faculty to learn about GenAI and think critically about AI tools
  • Discouraging AI detection tools
  • Advising faculty to use Section II of the AS #S22-298 Academic Integrity Policy to explicitly discuss and teach academic integrity, especially around AI, in their courses. 

One of the most effective ways to deter all forms of plagiarism, including illegitimate AI use, is to create student buy-in. ‘Buy-in’ means helping students understand the purpose and benefits of assignments. Here are some suggestions on how to create student buy-in:

  • Do away with or de-emphasize busy-work; instead focus on assignments that are purposeful and clearly connected to the course learning objectives 
  • Discuss the purpose of assignments and activities so that students know how they are connected to learning goals and their own development 
  • Emphasize the process of learning by asking students to annotate their work, create drafts, and submit process notes 
  • Engage students in knowledge creation by challenging them to apply learning to real-world cases and contexts, especially those that are connected to their lives or future goals.

When you suspect that a student has violated the parameters for AI use in your assignment or syllabus, we encourage you to take a restorative justice approach. Restorative justice refers to a set of practices aimed at building healthy communities, increasing trust and relationships, decreasing problematic behavior, and repairing harm and restoring relationships (Wachtel, 2013, as cited in IIRP, n.d.). It is a philosophy and theory of justice that emphasizes relationships over punitive practices and takes a community-oriented approach to finding solutions. 

For example questions and steps on how to take a restorative justice approach to inappropriate AI use, please view this document.

  • Academic Integrity in the Age of AI - Spring 2024
    • Critical AI Framework: The workshop emphasized critical thinking, pedagogy, and literacy in AI, encouraging educators to address AI's social justice implications and promote critical questioning of AI outputs.
    • Ethical AI Integration: Participants explored AI's ethical challenges, such as bias and intellectual property, and discussed its potential benefits, including reducing educational inequities and enhancing learning through personalized and adaptive learning tools.
    • Teaching, Not Policing: The focus was on guiding students in responsible AI use rather than relying solely on detection tools like Turnitin, which can be biased, particularly against multilingual learners.
    • AI's Educational Impact: While AI can democratize education and assist teachers, concerns about AI dehumanizing teaching, outsourcing creativity, and perpetuating bias were also highlighted.
    • Interactive Learning: Activities included AI ethics debates and hands-on projects, encouraging students to engage with AI tools critically, examine biases, and reflect on AI's role in shaping knowledge.

 

View the Spring 2024 Workshop on Academic Integrity recording here.

 

  • AI Tools for Students - Fall 2023
    • Traditional vs. Generative AI: Traditional AI tools like Grammarly automate tasks using predefined data, while generative AI, such as ChatGPT, creates new content based on existing data, allowing for more dynamic interactions like generating text and code.
    • AI Tool Demonstrations: Students showcased AI tools like Quillbot, ChatGPT, Cursor, and Google Bard, highlighting their use for refining writing, summarizing complex readings, assisting in coding, and real-time research.
    • Ethical AI Usage: The meeting emphasized the need for educators to develop guidelines on responsible AI use in academics, encouraging open discussions and transparency in citing AI tools.
    • Challenges with AI Detection: AI plagiarism detection tools like Turnitin struggle to distinguish AI-generated content from student work. Faculty are encouraged to engage with students and recognize shifts in writing style as possible signs of AI use.
    • AI in Critical Thinking and Real-World Applications: AI should enhance, not replace, critical thinking. Assignments requiring personal reflection are less prone to AI misuse, and integrating AI tools prepares students for real-world professional applications.

 

View the Fall 2023 Workshop on AI Tools for Students recording here.

 

  • Writing with Integrity in the Age of AI - Fall 2023
    • Shift from Policing to Teaching: Educators should move away from punitive measures like Turnitin and instead focus on integrating academic integrity into instruction, fostering responsible AI use among students.
    • Redesigning Assignments: To reduce AI misuse, assignments should emphasize clear learning goals, reflection, and engagement, moving beyond tasks perceived as busy work by students.
    • Challenges with AI Detectors: Current AI detection tools frequently flag multilingual students’ writing, introducing bias. AI-generated content may not be traditional plagiarism, but it still requires accountability for misuse.
    • Critical AI Literacy: Students should learn to use AI critically, recognizing both its potential benefits and limitations, aligning with broader educational goals like equity and social justice.
    • Syllabus Approaches to AI: Three options were discussed: banning AI use, requiring transparency and citation of AI tools, or incorporating AI as a learning tool, with a focus on ethical and critical engagement.

 

View the Fall 2023 Workshop on Writing with Integrity recording here.