Artificial Intelligence in School: Five Questions Educators Should be Asking
December 1, 2023
By Jesse Senechal, Mary Strawderman, Jonathan Becker, Chris Parthemos, Samaher Aljudaibi, and Oscar Keyes
ChatGPT’s public release a year ago unleashed a flood of media coverage speculating on the impact that Large Language Models (LLMs) and other forms of generative artificial intelligence (GenAI) could have on our educational system. Today, some observers are AI optimists, claiming that these technologies will lead to enhanced forms of personalized learning that can supercharge student performance, effectively addressing both learning loss and the ongoing educational disparities that have plagued our system. Others are more pessimistic, warning of increased plagiarism, the potential of AI replacing educators, and threats to student privacy, among other concerns.
Past technology-focused educational reforms have often led us to take a wait-and-see attitude, but there are reasons we, as educators, may want to learn more about AI now. We’re going to be hearing a lot more about this topic in the coming year, and one factor we should consider is its timing. LLMs and other AI technologies are being introduced to a public school system under tremendous stress from pandemic-related mental health challenges and learning loss, historic teacher shortages, and policy agendas that support private-sector approaches to public education. All these factors offer fertile ground for the quick integration and expansion of AI in education.
In this article, we’ll present a set of questions to ask as we experience the rollout of AI in our public schools. Our goal here isn’t to fully answer these questions, but rather to encourage you to begin asking them. We believe it is critical for those who work in classrooms and schools to develop a professional perspective on the use of LLMs and other forms of AI, so that when opportunities arise, they can speak with professional authority about the benefits and risks. A recent report from the US Department of Education’s Office of Educational Technology, entitled “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations,” presents foundational principles for developing an approach to AI in education. Interestingly and, we think, accurately, the very first such principle is to Center People (Parents, Educators, and Students).
Our hope is that the five questions that follow will help teachers and other educators develop their perspectives on the use of AI LLMs, in ways that center their voices, and the needs of students and parents, in the upcoming debate.
A good starting point is a basic understanding of how this technology works. LLMs are a form of generative AI that use algorithms trained on large data sets of text, typically scraped from the Internet. In the case of ChatGPT, the LLM is used to perform natural language processing (NLP) tasks, which enables the program to interpret, process, and generate human-sounding content in response to a user’s prompt. The output from an LLM is a prediction, with the program essentially asking itself, ‘What would be the best (most predictable) next word?’ again and again. This process continues until a whole block of text (a book report, a poem, a recipe, etc.) is generated. One way of thinking about it is as a very fancy auto-complete that doesn’t stop until the assigned writing task is finished. This is why, for example, writing produced by ChatGPT can read as flat and generic: it is, by design, a generic response to a prompt. The response is also limited to predictions from the dataset the model is trained on.
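To make the “fancy auto-complete” idea concrete, here is a minimal toy sketch in Python. It is not how a real LLM is built (real models use neural networks trained on billions of words); it only illustrates the generation loop, and all the words and probabilities below are invented for illustration.

```python
# A toy "next most predictable word" generator. For each word, we list
# the words most likely to follow it, as if learned from training text.
bigram_probs = {
    "the":     {"student": 0.5, "teacher": 0.3, "essay": 0.2},
    "student": {"wrote": 0.6, "read": 0.4},
    "teacher": {"graded": 0.7, "wrote": 0.3},
    "wrote":   {"the": 0.8, "quickly": 0.2},
    "graded":  {"the": 0.9, "quickly": 0.1},
    "essay":   {"quickly": 1.0},
}

def generate(start_word: str, max_words: int = 8) -> str:
    """Greedily append the most probable next word until the text is done."""
    words = [start_word]
    while len(words) < max_words and words[-1] in bigram_probs:
        next_options = bigram_probs[words[-1]]
        # "What would be the best (most predictable) next word?"
        words.append(max(next_options, key=next_options.get))
    return " ".join(words)

print(generate("the"))  # -> "the student wrote the student wrote the student"
```

Notice how quickly this greedy loop falls into repetition. That is one reason real models sample among several likely words rather than always taking the single most probable one, and part of why their output, while fluent, can still feel formulaic.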
For this reason, getting specific answers about specific issues from ChatGPT requires precise prompt composition; the more accurate and specific the prompt, the better the result. It is also important to know that LLMs have certain human-designed guardrails in place to prevent inappropriate content from being generated. You can imagine why this would be needed when considering that the internet (i.e., the training data for LLMs) is full of ideas that challenge social norms. These guardrails vary by model and have been shown to break down in certain cases. There is also no shortage of debate about what guardrails should and should not be in place.
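As a simple illustration of prompt specificity, the sketch below sends a vague and a specific version of the same request. It assumes the OpenAI Python client (openai 1.x) and an API key; the model name is our placeholder, so substitute whatever model your account offers.

```python
# A hedged sketch: comparing a vague prompt with a specific one.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

vague_prompt = "Write a lesson plan about fractions."

specific_prompt = (
    "Write a 45-minute lesson plan introducing equivalent fractions "
    "to 4th graders in a hands-on, small-group format. Include a "
    "warm-up, two activities using fraction strips, and an "
    "exit-ticket question."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption; use your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

The specific version gives the model the grade level, format, and constraints it needs to return something usable rather than boilerplate.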
While this basic discussion of LLMs may be a good starting point, we encourage all educators to learn more about how these models work. We suggest doing some research—there are many great discussions of AI LLMs geared toward non-computer scientists—and it’s also important to actually use the tools. If you have not already done so, we strongly recommend setting up an account and experimenting to see what it can and cannot do.
Ask how LLMs might support your work by automating tasks such as lesson planning, assignment design, grading, and parent communication. Many LLM advocates talk about them as efficient personal assistants because they can generate plans, suggest effective teaching strategies, and tailor instructional materials to learning objectives. They can also create educational content, personalize it to fit individual learning styles, and adapt lessons for students with disabilities or non-native English speakers, promoting inclusivity. In instruction delivery and assessment, LLMs can provide personalized tutoring, assist in grading, and identify assessment biases. In parent and community engagement, LLMs can craft outreach materials and facilitate communication with non-English-speaking families by translating content into multiple languages. Moreover, LLMs can automate administrative duties like generating reports, reducing manual workload and assisting in the development of educational policies and programs, fostering a more efficient learning environment.
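One concrete version of the “personal assistant” idea is a reusable prompt template a teacher fills in for each class. The sketch below is our own invention, not a vetted template; the fields and wording are assumptions meant only to show the pattern.

```python
# A minimal sketch of a reusable lesson-planning prompt template.
def lesson_plan_prompt(grade: str, subject: str, objective: str,
                       accommodations: str = "none") -> str:
    """Build a structured prompt for an LLM to draft a lesson plan."""
    return (
        f"Draft a one-day lesson plan for a {grade} {subject} class.\n"
        f"Learning objective: {objective}\n"
        f"Student accommodations to address: {accommodations}\n"
        "Include: a warm-up, direct instruction, guided practice, "
        "and a formative assessment. Keep it to one page."
    )

print(lesson_plan_prompt(
    grade="7th grade",
    subject="life science",
    objective="explain how cells use photosynthesis to store energy",
    accommodations="two students with extended-time IEPs",
))
```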
It is also worth asking how these tools could be used to support students, and learning how students are already using them. While many educators are primarily focused on plagiarism, some students have noticed that AI LLMs can support their self-advocacy, helping them draft emails or practice making important requests of their teachers. Others have used LLMs to provide direct support for learning: in language study, for example, they can serve as a conversation partner, allowing practice of new vocabulary or syntax. In the liberal arts, LLMs can let students ‘speak’ in hypothetical discussions with historical figures, artists, or literary characters. LLMs can also support writing as a partner rather than a replacement for student effort, helping to generate outlines, identifying sources or common themes in well-researched areas, and providing detailed feedback on spelling, grammar, syntax, and logic in early drafts.
As AI and LLM technologies evolve in education, their applications are likely to expand, further transforming the practice of teaching and learning. At a time when teachers are overloaded with work, it is important that teachers advocate for AI and LLM approaches that support student learning, create efficiency in administrative and planning tasks, and ultimately make teaching a more sustainable profession.
While there are potential benefits, we encourage you to bring a healthy degree of skepticism to the conversation about the school uses of LLMs, like ChatGPT. There are several key areas of concern.
First is privacy, for both students and teachers. When users enter prompts that include personally composed text or identifiable information, privacy is not guaranteed. The data that users enter may be processed, along with other inputs, to train future versions of the models. The use of LLMs should comply with relevant regulations, such as the Family Educational Rights and Privacy Act (FERPA), to ensure that sensitive information about students is not exposed. This may require screening any documents or text to remove proprietary or sensitive information, such as student names and academic performance, before inputting them into the model. There are also concerns about the intellectual property of teachers. For example, a teacher may load a lesson plan into an LLM and prompt the model to adapt it for a particular student. That lesson plan may now become part of the training data, and its ideas may be shared without the teacher’s consent, credit, or compensation.
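To make the screening step concrete, here is a minimal sketch of one small piece of it: replacing known student names with placeholders before text goes to a model. Real FERPA compliance involves far more than this (IDs, dates, free-text references, district policy); the roster and note below are invented for illustration.

```python
# A hedged sketch of redacting known student names before LLM input.
import re

roster = ["Jordan Lee", "Priya Patel", "Marcus Webb"]  # hypothetical roster

def redact(text: str, names: list[str]) -> str:
    """Replace each known student name with a neutral placeholder."""
    for i, name in enumerate(names, start=1):
        text = re.sub(re.escape(name), f"[STUDENT_{i}]", text,
                      flags=re.IGNORECASE)
    return text

note = "Jordan Lee scored 62% on the unit test; Priya Patel scored 94%."
print(redact(note, roster))
# -> [STUDENT_1] scored 62% on the unit test; [STUDENT_2] scored 94%.
```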
A second concern is the potential of LLMs to undermine the pedagogical value of the writing process. Writing serves as the backbone of our educational system, integral to the curriculum from the early grades through secondary and postsecondary education. Not only does it constitute a primary method for organizing learning experiences, but written composition also frequently becomes a central tool for assessing students’ grasp of a subject. Beyond this, writing develops voice and perspectives, serving as a mirror reflecting our identities. There is certainly cause for concern because a student, with little effort, can prompt the model to write an essay on the causes of the Civil War, craft a poem mirroring Langston Hughes’ style, or pen a persuasive letter advocating stricter gun control policies to a local newspaper. This not only raises fears about the potential misuse of LLMs for cheating, but also leads to an even larger worry about how the technology might alter both the process and product of writing, potentially undermining its central educational value.
Finally, we should consider the ways that LLMs and other forms of AI might affect student-teacher relationships. We know from prior experience with new educational technologies (e.g., mobile devices) that the benefits of these tools are often overtaken by their ability to detract from classroom community. LLMs add another layer to this problem; they’re designed to be conversational and engaging. Thus, they have the potential to pull students further into their screens and away from person-to-person interactions.
All education reform efforts should be considered through an equity lens. With AI and LLMs, there’s a wide range of questions to be considered.
As we’ve seen, new technologies can be unevenly distributed across educational contexts, perpetuating existing inequities. Given the current digital divide, it is not hard to imagine scenarios where well-resourced schools develop strong AI instructional approaches while others, lacking the technological infrastructure, are left behind. Guaranteeing equitable access and appropriate use among diverse student populations, across different schools and classrooms, therefore becomes crucial.
An additional equity-focused concern has to do with the content generated by these tools, which may exhibit biases reflecting the worldview of the data they are trained on, or contain inappropriate content. This means that LLM outputs can include racial, gender, age, socioeconomic, and political biases, among others, which could negatively impact students. In traditional classroom situations, teachers have some control over classroom content. As we start to use LLMs in educational settings, we may be ceding some of that control. Monitoring and assessing LLM responses for bias and engaging students in open conversations about such bias allows for a more comprehensive understanding of the technology’s limitations.
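One low-tech way to begin the monitoring described above is a simple probe: send the model the same request with only a demographic detail swapped, then compare the outputs by hand. The sketch below assumes the OpenAI Python client, as in the earlier sketch; the names are chosen only to vary perceived gender and ethnicity.

```python
# A hedged sketch of a counterfactual bias probe: hold the prompt
# constant and swap only the student's name.
from openai import OpenAI

client = OpenAI()

TEMPLATE = ("Write a short college recommendation letter for {name}, "
            "a hardworking student who excels in mathematics.")

for name in ["Emily", "DeShawn", "Maria", "Wei"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption
        messages=[{"role": "user",
                   "content": TEMPLATE.format(name=name)}],
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)
```

Differences in tone, length, or the traits the letters emphasize across otherwise identical prompts can be a concrete starting point for the open classroom conversations described above.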
In light of both concerns, educators need to consider whether these technologies are transforming our schools in ways that address both the achievement and opportunity gaps in public education. Educators can be at the forefront of addressing these issues by asking critical questions about classroom use of LLMs and their impact on equity in the teaching and learning process.
Growing awareness of AI and LLMs will bring policy responses, and we encourage teachers to engage in that conversation.
One starting point should be a review of existing policies to ensure, among other things, that any new policy does not conflict with a current one. Redundancy may be acceptable in some cases but, often, it is better to update existing policies rather than craft new ones. As one example, if there’s concern around generative AI and plagiarism, it may be that the issue is already addressed in a student code of conduct. If so, there may be brief language that can be added to make it clear how, if at all, the use of AI to assist in written work without proper attribution could be considered plagiarism.
In new policies and in updates to existing ones, language should be clear and precise. For example, a policy on the use of generative AI by students should not be limited to, say, ChatGPT. That is one example of generative AI, but there are others, and more will emerge. Therefore, broad and inclusive language about generative AI may be more appropriate. Additionally, students and families need to be notified of new or updated policies. Any disciplinary actions around the use of AI should follow proper procedures, and due process always involves proper notification.
As our education system moves forward with policies and strategies that will shape teachers’ work with students, we encourage teachers to learn more about these policy discussions: who is involved, and what is on the agenda. As we have argued above, a wide range of issues needs to be considered, and it is important that these discussions incorporate teacher- and student-centered principles. This involves defining teacher roles and responsibilities when using LLMs in teaching and learning, equipping teachers with the knowledge to integrate LLMs ethically into their practice, establishing processes to manage, monitor, and communicate LLM use and potential risks to students, and leveraging tools that enhance LLMs’ effectiveness and reliability.
Jesse Senechal is the director of the Metropolitan Educational Research Consortium, a research partnership between Richmond-area school divisions and Virginia Commonwealth University’s School of Education. Mary Strawderman is a research development administrator in the Office of the Vice President of Research and Innovation at VCU. Chris Parthemos is the assistant director of Student Accessibility and Educational Opportunity in VCU’s Disability Resource Office.
Some resources to expand your AI knowledge:
Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. Based on extensive interviews with experts in education and education technology, this report by the U.S. Department of Education’s Office of Educational Technology describes AI and its potential school uses while encouraging educators to think critically about both opportunities and risks.
AI Is Going to Upend Public Education. Or Maybe Not. The education podcast Have You Heard presents an interview with Larry Cuban, a public education historian who has written extensively about technology-based school reform efforts. Cuban suggests that the current hype surrounding AI will fade as we realize that AI integration will only happen incrementally, and with unpredictable outcomes.
How AI could save (not destroy) education. A TED Talk by Sal Khan, the founder and CEO of Khan Academy. He argues that artificial intelligence has the potential, through individual tutoring, to transform teaching and learning in ways that lead to dramatic gains in student achievement.
Balancing the Benefits and Risks of AI Large Language Models in K12 Public Schools. Published by the Metropolitan Educational Research Consortium, this brief considers the potential impacts of AI Large Language Models on public schools including implications for teaching and learning, and considerations for school district policy.