Students and AI

By Daria Oleinikova

In our modern world, technological innovation shapes almost every area of our lives. The rate at which new phone models hit the market determines how quickly we get rid of our old phones. The availability of certain artists on a streaming platform determines what music we listen to. The same thing is now happening with Artificial Intelligence. The sudden expansion of AI's abilities in recent years has completely changed our day-to-day lives. People have started using AI chatbots to help them pick groceries, make to-do lists, plan outfits, and generate images and text.

Naturally, with AI becoming more normalized, accessible, and user-friendly, students of all ages have started using it as well, from asking clarifying questions about a topic they did not understand to asking it to write entire thousand-word essays for them. Modern students increasingly rely on AI chatbots for tasks across this whole range.

And while using AI to cheat and do all our work for us is still not considered acceptable, surely using AI as a springboard for ideas, or as a replacement for a search engine, isn't that bad?

In actuality, the ethics of using AI in education is a deeply complex topic, and exploring it reveals the dangers of overuse, especially for young, developing minds. Concrete regulations and policies on the use of AI in education are needed to protect youth from losing important cognitive and critical thinking skills, as well as to protect our environment.

AI HARMS THE ENVIRONMENT

The problem with using AI, and especially chatbots like ChatGPT, Gemini, and others, starts at the very place where these programs operate: data centers. A data center is a facility that often holds thousands of computer servers and other IT infrastructure needed to run various applications and services, including AI chatbots. But what makes these data centers so dangerous? While they have never been fully sustainable, the rise of AI has brought an especially dangerous flaw to the surface.

The servers of a data center are used every time a prompt is given to a generative AI system, and in turn they generate not only thousands of possible answers but also excessive amounts of heat. The usual method of keeping the servers from overheating involves water cooling.


But can a single prompt about solving a math problem really have an impact? In reality, a single prompt, such as a request to write a 100-word e-mail, can use more than a whole bottle of water. So just imagine how much water a single student wastes by using ChatGPT daily. Not a lot, you think? Well, some cities in America would not agree.

The growing popularity of AI is pushing companies to build more data centers to house their servers.

For these companies, this is not only much-needed space but also an opportunity for a deal. Tech companies are buying land in financially struggling regions that cannot afford to refuse the offer. These regions face a dilemma: accept much-needed financial resources at the expense of their environment, or turn the deal down.

Data centers cause the surrounding regions to experience unbearable heat, droughts, and even the loss of drinkable tap water in residents' homes. So whenever someone uses ChatGPT or another generative AI model, they are adding to the suffering of thousands of people, no matter how small or insignificant their prompt seems.

Students who use AI every day are consuming gallons of water that are needed more than ever, at a time when our planet faces a climate crisis and growing water shortages.

AI TOOLS CAN DIMINISH CRITICAL THINKING

What if a student is aware of the environmental consequences of using AI but still decides to use it extensively in their studies? To answer that, we need to know which skills are essential, not only for being a good student but for having a sense of self. Thinking critically means being able to evaluate and analyze the world around you, and being able to assess situations and make decisions in unfamiliar environments. Critical thinking promotes independence, leadership, confidence, and other essential skills.

But AI has been eroding the critical thinking skills of youth. Students who overuse AI tools risk impeding the development of their own critical thinking.

This is where the concept of cognitive offloading comes in. It occurs when a person delegates tasks with a heavy cognitive load to an external tool, such as a calculator, a note-taking program, or an AI chatbot. While moderate amounts of cognitive offloading do no serious harm to a person's mental state, overindulgence in AI tools has been shown to lead to greater cognitive offloading, which in turn leads to a decline in critical thinking skills.

Such a loss of critical thinking could diminish memory retention, lower critical engagement, reduce attention span, and even foster blind trust in AI tools. These concerns relate to the concept of an "Aligned Individual": a person whose internal systems, body, mind, and heart, are aligned. Such a person is aware of their potential, knows their limits, and can use AI wisely to their advantage instead of being used by it. Only an "Aligned Individual" can use AI beneficially, or even decide whether they need to use AI at all.

USING AI CAN HARM THE INFORMATIONALLY ILLITERATE

Unfortunately, the AI rabbit hole goes even deeper. The students who will face the most negative consequences of using AI for their studies are those who already struggle with critical engagement. A substantial part of critical engagement is informational literacy: the ability to understand and apply information efficiently and beneficially. Students who lack this skill before engaging with AI are more vulnerable to the negative impact of over-relying on it.

Research by Kooli and Chakraoui shows that, despite the benefits of AI for accessible education, digital literacy is commonly diminished across multiple areas of AI research.


With the adoption of AI tools, students who may already be over-reliant on automation are more likely to have these negative effects exacerbated.

These students may start experiencing lower engagement in deep, reflective learning, become less comfortable applying themselves in unfamiliar situations, and engage in fewer new tasks. Higher education students often use AI tools like ChatGPT without fully understanding how the tool works, or its impacts on them or the world around them. This unawareness can have major negative consequences.

COUNTERCLAIM AND REBUTTAL

On the other hand, some observers claim that adaptive, AI-powered learning platforms can tailor educational content to the needs of the student, improving both the fit of the material and the student's performance. Moreover, AI can gather study materials, exposing the user to diverse and credible sources.

Rebuttal: Admittedly, this is partly true; AI can indeed provide personalized learning. However, one must still be concerned about reliance on such tools. While AI models can improve basic knowledge acquisition, they cannot foster deep and reflective analysis, allow for trial, error, and practice, or enable students to apply their knowledge in complex, unfamiliar situations. AI used for the sake of simplification backfires too often to be considered a good learning tool.

Unfortunately, AI has also been shown to have algorithmic biases that directly affect the results it presents. If the AI model has been trained on outdated material or has a built-in bias, it will often give the user heavily biased results. Such an informational echo chamber can hinder the user's critical evaluation and encourage confirmation bias. This can be exceedingly harmful, especially to younger students who may already be inclined toward a harmful bias.

EMPIRICAL RESEARCH

To illustrate these points, I carried out a 25-question survey of my classmates. The results confirmed many of my claims. All the participants reported being between 17 and 19 years of age. 91.6% reported using AI for studying, and 75% believe that AI tools allow students to engage in deep and reflective learning. However, 41.7% of the participants also reported that they would struggle with their studies if AI tools became unavailable to them, and 25% acknowledged that they use AI too much.

This suggests an underlying over-reliance on AI tools among the participants, 91.7% of whom also reported noticing algorithmic or confirmation bias in their AI tools at least once. Only 41.7% reported always checking the answers they receive from AI, which indicates that many participants may be blindly trusting their AI tools.

CONCLUSION

My argument stands: generative AI models should not exist as freely as they do now in the sphere of education, for the sake of students, whose developing abilities are at stake, and for the health of our environment.

Peace Magazine. Some rights reserved.
