AI is already changing education in ways few imagined, creating an urgent need for school districts to establish policies regarding it, and for teachers and students to understand its benefits and its limitations.
But as AI continues to evolve, questions of educational justice and ethics arise. Consider:
- In what ways can AI be used appropriately to boost student learning rather than replace it?
- What can be done to eliminate the bias coded within AI programs?
- And will students of color and those from low-income households have equitable access?
The data paints a sobering picture. Research shows the divides in access to computers and broadband fall along racial and economic lines. AI algorithms also exhibit racial, cultural, and economic biases.
A recent Pew Research study grabbed headlines with evidence that cheating had NOT increased since the advent of AI large language models. Yet a set of findings buried in the same report, and mentioned only quietly, reinforced existing concerns:
- 72% of white teens had heard about ChatGPT compared to 56% of Black teens.
- In households earning $75,000 or more annually, 75% of teens were aware of ChatGPT, while only 41% of teens in households with less than $30,000 income knew about it.
It could be argued that the harsh evidence of substantial inequity is not what makes headlines; instead, the reassuring finding about cheating draws the attention, and those inequities are quietly reinforced.
Building out protocols can help reduce and mitigate the risks. Therefore, school communities should consider organizing around a foundation of principles that ensure:
- All students are given access to the tools needed to succeed in their learning opportunities.
- Students are taught with AI technologies that represent diverse perspectives and backgrounds.
Students and communities of color, and the schools that serve them, need to have a comprehensive understanding of AI’s impact, both positive and negative, so they aren’t left behind. This knowledge will empower educators, students, and their families to actively engage in shaping the future of education and advocating for their needs — including access to developing AI technologies.
Steps to ensure equitable academic tools, content that reflects the cultural diversity and neurodiversity of student populations, and safeguards against the emergence of a new and even broader digital divide should be carefully planned, well communicated, and faithfully executed.
The impact of limited access to AI in education on student performance can be significant and multifaceted. Consider the ways this can affect students:
Limited Learning Opportunities
Students with poor access to AI tools may miss out on valuable learning opportunities. AI has the potential to enhance the educational experience by offering personalized learning resources, adaptive assessments, and intelligent tutoring systems. Without access to these tools, students may be at a disadvantage compared to their peers.
Widening Attainment Gap
The inequity in accessing AI tools can contribute to a widening attainment gap. Students who cannot afford or access AI tools may fall behind in acquiring the skills and knowledge that AI's efficiency and effectiveness reinforce, likely widening disparities in academic achievement.
Skill Development Disparities
AI tools can help students develop essential skills such as critical thinking, problem-solving, and digital literacy. Students with limited access may struggle to acquire these skills, putting them at a disadvantage in the evolving world of AI.
Financial Barriers
The cost associated with AI tools can create financial barriers for students from economically disadvantaged backgrounds, limiting their access to these resources and further reducing their ability to leverage the benefits of AI in their learning experience.
Data Poverty and Connectivity Issues
Unreliable and inadequate data access, especially among disadvantaged families, presents a major challenge, interfering with full access to the technology that most families and learners need and rely on. It hinders access to critical resources for learning and information gathering. The issue is prevalent in socioeconomically disadvantaged regions, including parts of urban areas, rural communities, and clusters across the Deep South, where data poverty serves as a significant barrier. Limitations on AI access will compound this problem as those with access pull further away.
Concerns About Fair Access
Growing attention to fair access and usage of AI tools reflects an awareness of the potential disparities.
Impact on Inclusivity and Diversity
If AI tools are not accessible to all students, inclusivity and diversity in educational experiences suffer. Certain groups, such as those from marginalized communities or low-income backgrounds, may face additional barriers, perpetuating existing disparities in education.
Supporting Responsible AI Use
To address these challenges, school community stakeholders must work collaboratively to ensure equitable education of and access to AI tools and prevent the widening of the digital divide in education.
The large language models (LLMs) behind AI tools can appear to mimic human intelligence, but in reality they draw on patterns learned from massive amounts of text, assembling responses that interpret your command prompts. LLMs use statistical models to analyze vast amounts of data, learning the patterns and connections between words and phrases.
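To make that idea concrete, here is a minimal, hypothetical sketch of the core statistical principle: counting which words tend to follow which in a body of text, then using those counts to predict a likely next word. Real LLMs use neural networks with billions of parameters trained on vastly more data, but this toy bigram model illustrates the "patterns and connections between words" in miniature; the corpus and function names are illustrative only.

```python
from collections import Counter, defaultdict

# Toy training corpus, pre-tokenized into words.
corpus = (
    "students use ai tools to learn . "
    "teachers use ai tools to teach . "
    "students learn to think ."
).split()

# Count how often each word follows each other word (bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the statistically most common word to follow `word`."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("use"))  # -> "ai" ("use" is always followed by "ai" here)
print(most_likely_next("ai"))   # -> "tools"
```

A real model generalizes far beyond literal counts, but the lesson for students is the same: the output reflects whatever patterns, and whatever biases, are present in the training data.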
Solutions need to be shared widely and taught explicitly. Raising awareness of these processes, and teaching people how to put them into practice, maximizes equitable access: the communities facing the greatest barriers, and the individuals within them, are then less likely to be caught on the wrong side of the divides that run through our everyday online experience.
Here are some additional strategic steps that school leaders and their communities can take to narrow, and even eliminate, the divide, ensuring access to AI resources and information is offered in a proportionate and balanced manner.
Responsible AI Use:
- Highlight potential risks, including implicit bias and hallucinations. A hallucination occurs when an AI generates incorrect information as if it were factual. AI hallucinations are often caused by limitations in training data and algorithms, which produce content that is wrong or even harmful. An excellent tutorial on understanding hallucinations can be found here.
- Encourage critical thinking and fact-checking when using AI-generated content. Resources such as perplexity.ai and consensus.app are examples of AI tools that aim to point you to factual, sourced information. Teaching about these tools, and about the process of fact-checking itself, is critical. Check out this list of free reliable AI resources that I update monthly.
- Education and awareness. Commit to regular AI education for students and staff. Communicate the school’s appropriate use of AI, emphasizing fairness and safety.
Establishing a clear and equitable AI policy for schools is a proactive way to capture the benefits of AI while addressing potential risks, especially ethical ones. By setting clear guidelines, encouraging responsible use, and promoting rich learning experiences, schools can teach students to navigate the evolving digital landscape with ethical integrity and academic success, while striking a balance between harnessing the power of AI and minimizing the risk of deepening the digital divide.
By prioritizing equitable access to these resources and addressing potential biases, schools can harness AI's remarkable, if imperfect, productivity, so that all students, regardless of their background, have the opportunity to benefit from the advantages of AI-driven learning resources.