Opinion: The Pros and Cons of Using AI in Schools

Artificial intelligence is here to stay, and optimizing it for the classroom will require a careful accounting of its implications, both good and bad, for tutoring, assessments, data security and other functions.

There is a debate raging over the role that artificial intelligence (AI) should play in the future of education. Fans of the technology say schools need to embrace it, leveraging it to provide a more impactful educational experience, while critics worry that its use will result in a host of detrimental side effects.

There is no easy answer as to which view is correct. AI is not a one-size-fits-all solution, nor does it need to be an all-or-nothing one. As with most technology, applying it safely and effectively requires a thorough understanding of the pros and cons.


REDUCING THE WORKLOAD FOR EDUCATORS


Across every industry, AI has been presented as a powerful labor-saving tool. It is already being applied in educational settings to handle a number of tasks typically assigned to teachers, such as scheduling, tracking attendance and managing permission forms. Handing those administrative duties to AI frees up time for valuable teacher-student interactions.

AI, specifically large language models like the one behind ChatGPT, has the potential to handle more advanced tasks, including grading essays, providing personalized tutoring and other student support, and identifying areas where course materials need improvement. It is these types of applications that have raised concerns among educators about job displacement. A balanced approach requires that any such application acknowledge the limitations of AI as well as the vital role that human teachers must continue to play in the learning process.

If not properly managed, AI can lead to a loss of human connection and personalized attention for students. But by regularly assessing student engagement and learning outcomes, teachers can help to ensure that AI is developed and deployed in a way that enhances education.


CREATING NEW PRIVACY AND DATA CONCERNS


Many of the benefits that AI promises for education involve assessing the performance of both students and educators, and providing unique, personalized feedback designed to increase educational impact and close skill gaps. Accomplishing those tasks, however, requires that AI platforms have access to personal data. For some, that raises concerns about privacy and data security.

To address those concerns, it is important to ensure transparency. This includes establishing a clear understanding of how AI is being used in the classroom, what data it is collecting and from whom, and how that data will be used. A further concern is how well AI platforms protect that data against breaches.

Finding the right gatekeepers for the development, deployment and management of AI in educational institutions is key to maximizing its potential benefits while minimizing risks. Regulators can set standards and guidelines for ethical AI use in education, administrators can ensure that AI aligns with institutional goals and values, and the marketplace can drive the development and use of the most effective AI tools. A combination of these actors is needed to ensure that AI is used in a responsible and effective manner in education.

For instance, AI-powered virtual tutors and platforms that provide personalized educational experiences can greatly enhance student learning and engagement, as well as improve access to education. As a result, many companies are already working on these types of solutions. Educational institutions must play a role in monitoring the impact of such applications, ensuring that they handle student data responsibly and guard against both privacy lapses and the potential loss of human connection.

It is also critical for educators and administrators to ensure that AI-driven tools do not introduce biases into the educational experience. After all, AI systems are only as good as the data on which they are trained. When that data contains biases, the AI system may perpetuate them.

Some educational groups are already working on guidelines to help developers avoid bias and discrimination in AI software for education. These guidelines generally seek to identify developers' own biases, ensure those biases are not built into AI tools, and expand opportunities for educators to understand the risks such tools pose.

COMMITTING TO A MEASURED APPROACH


Overall, a measured approach will be the most beneficial for realizing AI’s full potential in education. AI promises to revolutionize education by making it more personalized, efficient and effective. Case studies have already shown that it can reduce study time, improve engagement with specific topics and deliver better-tailored learning experiences.

Nevertheless, there are valid concerns regarding the ethical implications of AI in education, particularly around issues such as privacy and accountability. This is why it’s crucial to approach AI in education with caution, committing to extensive R&D and due diligence. The ultimate goal should be ensuring systems are fair, transparent and accountable.

Aaron Rafferty is the CEO of StandardDAO and co-founder of BattlePACs, a subsidiary of the blockchain investment company StandardDAO. He aims to help individuals, institutions and companies leverage technologies like blockchain, AI, cloud and social media, and to build products that enhance engagement and productivity for students.