Many SF State faculty are reimagining their classrooms to address new technologies: embedding AI literacy skills and knowledge into their curriculum, talking with students about how tools shape their academic work, and redesigning assignments to ensure students are prepared for the future. CEETL is pleased to support faculty innovation and to partner with faculty as we work through the implications of new technology for our students. As part of that support, we have developed what we are calling Three Laws of Curriculum in the Age of AI, a play on Isaac Asimov’s “Laws of Robotics,” which were written to ensure that humans remained in control of technology.
Our Three Laws are not laws per se; they are a framework for addressing the use of AI tools in the curriculum at all levels, from the individual classroom to degree roadmaps for disciplinary programs, from general education through graduate courses. The framework is designed to support curricular innovation as faculty incorporate emerging tools and technologies into their teaching.
The first law encourages faculty to teach students about generative AI in ways appropriate to their discipline and course level. Teaching about AI means teaching students how the technology works, its biases and limitations, and its potential harms. The second law encourages faculty to develop strategies for teaching students to work with AI tools where appropriate, both to enhance student learning and to prepare students with the AI skills and familiarity they may need in the future. The third law encourages faculty to protect student learning from AI shortcuts by redesigning assignments to make them AI-resistant where possible.
For more on the Three Laws framework, look for our forthcoming essay in Inside Higher Ed.
Below, we use this framework to highlight SF State faculty curricular innovation and share resources for further reading and exploration.
Guiding Principles About AI
- At CEETL, we approach AI and emerging technologies as tools that may be used to enhance teaching and scholarship, and to increase equity, accessibility, and engagement for students.
- We center faculty’s expertise and students’ experiences; we recognize that disciplines may vary in their approaches to technology.
- We center critical approaches to technology rooted in SF State’s shared principles of equity and social justice.

Other resources you may be interested in:
- CEETL's Second Annual AI Symposium
- CEETL Faculty Director Jennifer Trainor’s “The Big Picture,” a Substack newsletter focused on AI issues and innovations on our campus
- CEETL and AT’s Learning Lab Fast Challenge grant on faculty and department AI preparedness, launching in Spring 2025
- SFSU AI Literacy Program: Start your AI Journey
- CSU AI Commons

Law One: Teaching Students About AI

Students need to build knowledge about AI, including how the tools work; their social, cultural, environmental, and labor impacts; their potential biases; their tendencies toward hallucinations and misinformation; and their propensity to center Western European ways of knowing, reasoning, and writing. Teaching students about AI aligns with our university’s core equity values and with our JEDI (justice, equity, diversity, and inclusion) framework.
To take a JEDI approach to AI, it is important to first understand the ways that AI reinforces systemic biases and harms. This list introduces an array of ethical concerns relevant to higher education that AI scholars have identified. We encourage you to partner with CEETL as you redesign assignments and activities to teach students about how AI tools work, and to educate them about the issues AI raises, from environmental justice to copyright violations to bias and hallucinations.
- Leon Furze’s Teaching AI Ethics series
- Adams et al.’s (2023) list of ethical concerns for education
- Modern Language Association Guide to AI Literacy for Students
- The People’s Guide to AI for students
- Trent and Lee’s layperson’s overview of how LLMs work
- Autumm Caines’s “Prior to” essay about the conversations to have with students before you introduce or encourage AI
- Maha Bali’s “What I Mean When I Say Critical AI Literacy”
- Ryan Watkins on updating course syllabi to include AI ethics and learning
- Jenka’s (2023) Medium article on AI and the American Smile
- View CEETL's Fall 2023 Workshop on Writing with Integrity recording
- Discover how SFSU’s Computer Science Department is addressing AI
- Watch English Professor Jennifer Trainor’s video on how she developed students’ awareness of AI bias
- Read “How I use AI to teach Linguistic Justice” by Jennifer Trainor
- Read about SF State philosophy professor Carlos Montemayor’s work on AI
- View the Spring 2024 Critical AI Workshop Slides. In this workshop, Fatima Alaoui and Kira Donnell focused on defining Critical AI and on developing students’ critical AI literacy. Takeaways include:
  - Critical AI literacy should be taught to help students and educators assess AI’s ethical implications and understand its potential biases and transparency issues.
  - Critical AI in the classroom involves honing students’ critical thinking, using critical pedagogy methods, and developing students’ critical information literacy.
  - Approaches to academic integrity should focus on teaching rather than policing, and on integrating AI tools as part of professional learning.
- Learn about SFSU’s Interdisciplinary Graduate Certificate on Ethical Artificial Intelligence

Law Two: Teaching With AI

What do students need to know to work ethically and equitably with AI as these tools become increasingly embedded in the platforms and programs we already use, as they are marketed aggressively, and as they are integrated into the jobs and careers our students hope to enter? As Kathleen Landy recently wrote, “Now is the time to move toward … collaborative articulation of learning outcomes relative to AI.” What do we want the students in our academic programs to be able to do with generative AI? What AI skills do they need to be successful in our classes and beyond? This overview of AI classroom innovations across the CSU provides a sense of how our curriculum may evolve in the age of AI.
Faculty at SF State are exploring ways to use AI to support existing learning goals, such as the development of critical and analytic thinking and rhetorical awareness.
For example, faculty have created lessons that:
- Ask students to use Microsoft Copilot to generate discussion questions, then rank and rewrite those questions, in order to develop analytic thinking;
- Ask students to debate Microsoft Copilot in order to develop their critical thinking;
- Engage students in AI-generated scenarios to build engagement and critical thinking;
- Ask students to evaluate AI-generated solutions to a case study to support critical thinking;
- Ask students to compare two AI-generated essays, examining the strengths and weaknesses of each, to develop their rhetorical awareness.
Faculty are also flipping the classroom with the aid of AI (see Jason Johnston’s work on AI Assignment Flipping). For more, consider this crowdsourced slide deck showing how teachers around the world use AI with their students.
Students are often early adopters of new technology. Your students may already be using AI tools to support a variety of academic and non-academic tasks. To hear how students are using AI tools, view the Fall 2023 Workshop on AI Tools for Students recording.
- To learn what these tools can do, join AT and ITS's AI Literacy Program
- Cynthia Alby’s AI Prompts for Teaching: A Spellbook
- José Antonio Bowen and C. Edward Watson’s book on teaching with AI (please send an email to ceetl@sfsu.edu for a free copy, or come read it in the Faculty Studio in LIB 240)
- AI Pedagogy Project
- AI and Effective Teaching with Ethan Mollick
- CSU AI Teaching Commons
- Resources for GWAR instructors
- Lam Family College of Business Webinars and Workshops through the Emerging Technologies Initiative
To see how SF State faculty are innovating with and exploring AI in their classrooms, check out these recordings from SF State’s Symposium on AI hosted by CEETL in Spring 2024.
Each recording features an SF State faculty member who shares how they use AI in their course, as well as lessons learned.
- Watch Professor Wes Bethel share about leveraging AI in the sciences
- Watch Professor Scott Campbell share about AI in the humanities
- Watch Professor Nasser Shahrasbi share about AI in business
- Watch faculty member Casondra Sobieralski share about AI in design
- Watch the whole playlist of faculty AI uses on YouTube
Faculty are also exploring the potential for AI to assist them in curricular design. CEETL’s Fall 2023 discussion circle focused on exploring AI tools for faculty use. Participants shared how they used generative AI to draft outlines for lessons, generate thank-you notes or recommendation letters, and summarize complex information.
Takeaways included:
- AI is a Starting Point: Faculty found ChatGPT useful for breaking creative blocks and drafting outlines.
- Customization is Essential: AI-generated content lacks personal voice and context, and AI is prone to hallucinations, requiring instructors to check for misinformation and personalize to their course and students.
- Privacy Concerns: Participants raised concerns about inputting personal information or student data into AI systems.
- Learning vs. Efficiency: Faculty worried that over-reliance on AI might undermine the learning process by prioritizing fast outcomes over deeper engagement with material.
- Ethical Guidelines Needed: Faculty emphasized the need for clear ethical guidelines in AI use, including ensuring students can opt out of AI-based assignments and acknowledging AI contributions in any scholarly or educational context.
CEETL’s Fall 2024 workshop on “AI Tools for Faculty” examined the benefits and limitations of using Microsoft Copilot to enhance teaching materials and increase student engagement. Facilitator Jennifer Trainor shared guiding principles, demonstrated how to use and critique AI output when developing course materials, and led a discussion in which faculty shared how they use AI in their teaching.

Law Three: Protecting Student Learning

Protecting student learning from AI shortcuts is an essential challenge. We like this quote from Washington University’s Center for Teaching and Learning: “Sometimes students must first learn the basics of a field in order to achieve long-term success, even if they might later use shortcuts when working on more advanced material. We still teach basic mathematics to children, for example, even though as adults, we all have access to a calculator on our smartphones. GenAI can also produce ‘hallucinations’ and often only a user who understands the fundamental concepts at play can recognize this when it happens. … Sometimes, the end product isn’t even the focus in education per se. Many assignments, for example, are much more about learning how to think and to make arguments than about simply creating a block of text on a particular topic. Like a runner training for a marathon, the point isn’t to just get from point A to point B, but to transform oneself along the way. A car might be faster, but it won’t build the muscle and endurance we seek. In these cases, it may make sense to adopt some teaching strategies that avoid AI and assignments that are more AI resistant.”
AI tools can produce authoritative-sounding outputs, and because these outputs sound so polished, students may find them convincing, letting the output override or displace their own thinking. AI tools may thus curtail opportunities for students to develop and practice the kinds of thinking that undergird many learning goals. It is understandable that faculty are clamoring for AI detectors and prohibitive AI policies. But a focus on protecting student learning shifts us away from policing student behavior as a way to ensure learning, and instead invites faculty to redesign assignments to provide spaces that protect learning and encourage students to do their own thinking.
Providing and protecting such spaces undoubtedly poses new challenges for faculty. But protecting student learning from easy shortcuts has always been at the heart of formal education. Consider the planning that goes into deciding whether an assessment should be open-book or open-note, take-home or in-class. These decisions are rooted in the third law: what best protects students from shortcuts (textbooks, outside help) that would undermine their learning?
- Create student buy-in for learning
- Explore this revised Bloom’s Taxonomy, which distinguishes between AI capabilities and distinctly human skills at each level of thinking
- Teach, don’t police, academic integrity
- Adapt your syllabus policy
- Collections of syllabus policy statements from institutions across the country: here and here; see also moving beyond policy solutions to AI in the classroom
- Review this syllabus guidance doc
To see how faculty are working to protect student learning, view the Fall 2023 Workshop on Writing with Integrity recording here.
Read about protecting academic integrity in first-year writing: “How I use AI to teach Academic Integrity,” by Jennifer Trainor
Join us for the Second Annual AI Symposium on "Embracing and Resisting AI: Best Practices in the Classroom"