Principles for AI in Education

In an era when “many priorities for improvements to teaching and learning are unmet,”[1] artificial intelligence (AI) technologies build on past successes and hold vast promise to enhance a learner’s educational experience, an educator’s success, and family involvement.[2] For years, these technologies have helped enable critical tools for personalized learning experiences and assistive technologies.

SIIA believes that the successful deployment of AI technologies in education must be done in a way that supports those who use them, protects innovation in the field, and addresses the risks associated with developing and using these new tools. AI should replace neither the educator nor the learning experience. The Education Technology Industry’s Principles for the Future of AI in Education build on experiences with and successes in using these technologies to advance educational objectives. These principles provide a framework for implementing AI technologies in a purpose-driven, transparent, and equitable manner.

From tutoring and test preparation to assessing learner performance to relatively simple tasks like checking the spelling and grammar of a document, AI technologies have, and can continue to have, a great impact on teaching and learning.[3] Because of this, and in order to realize AI’s promise, stakeholders must address and mitigate the risks attendant to these technologies. These principles are intended to guide the ed tech industry as the broader education community continues toward deploying these impactful technologies.

Principle 1: AI technologies in education should address the needs of learners, educators and families.

The use of AI in education should be purpose-driven for the community it serves. Whether these tools are used to provide quick feedback to learners on their work so they can continue learning, to assist teachers in honing their craft, or to help families engage with the school community, AI technologies should be implemented to address a need of a learner, an educator,[4] a family,[5] or the greater community. This includes incorporating established, modern learning principles and design, where appropriate.

Principle 2: AI technologies used in education should account for educational equity, inclusion and civil rights as key elements of successful learning environments.

AI tools in education should treat each person fairly and actively work to mitigate bias and unjust impacts. AI tools should align with existing frameworks established by state and federal laws, including accessibility and civil rights laws, to properly protect and enhance the educational experience for each learner.

Principle 3: AI technologies used in education must protect student privacy and data.

AI technologies used in education should adhere to state and federal laws designed to ensure the security and privacy of student data.

Principle 4: AI technologies used in education should strive for transparency to enable the school community to effectively understand and engage with the AI tools.

Understanding what AI tools are and what they do will empower learners, educators, and families to engage with them effectively. Company procedures should include policies that incorporate transparency and responsible disclosure regarding AI systems used in a school community wherever possible.

Principle 5: Companies building AI tools for education should engage with education institutions and stakeholders to explain and demystify the opportunities and risks of new AI technologies.

Companies should continue to engage with educators on how these technologies can improve teaching and learning, address the risks and limitations of the technologies, and provide a forum for responding to concerns raised by stakeholders.

Principle 6: Education technology companies and AI developers should adopt best practices for accountability, assurance, and ethics, calibrated to mitigate risks and achieve the goals of these Principles.

Deploying these technologies will require companies to establish procedures that ensure ethical development and use as well as purpose-driven application. Whether those internal procedures include bias testing, review boards, or other mechanisms, such as requiring a human in the loop, these processes should be understood by all internal stakeholders and updated regularly.

Principle 7: The education technology industry should work with the greater education community to identify ways to support AI literacy[6] for students and educators.

The education technology industry should support efforts to equip learners across demographics and the socio-economic spectrum with diverse skills for engaging with an ever-changing world that uses AI technologies. The industry should also support efforts to prepare current and future educators, through courses and other professional development experiences, to feel comfortable navigating these technologies and empowered to use them effectively to enhance a learner’s experience. These efforts should focus on developing critical thinking skills for evaluating AI sources of information and consuming information in a safe and mindful manner.

[1] U.S. Department of Education, Office of Educational Technology, Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations, Washington, DC, 2023.

[2] Some areas where you may see AI in the classroom include personalized learning tools, augmented and virtual reality, automated grading, and assessment and analytics. For more examples, please visit our website at [AI PRINCIPLES ].

[3] AI in education refers to the tools used in teaching and learning inside and outside the four walls of a classroom. AI technologies may also be used in the development or deployment of software for an educational setting without being part of the teaching and learning experience itself.

[4] This document refers to “educators,” and that language is intended to be inclusive of all school personnel who may impact teaching and learning, such as classroom teachers, university professors, paraprofessionals, and administrators.

[5] This document refers to “families,” and that language is intended to be inclusive of legal guardians and caregivers of minors.

[6] Artificial intelligence (AI) literacy refers to the knowledge, skills, and attitudes associated with how artificial intelligence works, including its principles, concepts, and applications, as well as how to use artificial intelligence, including its implications, limitations, and ethical considerations. Foundational elements of AI include computer science, as well as data science, engineering, statistics, and psychology. AI education aims to equip individuals to engage productively and responsibly with AI technologies in society, the economy, and their personal lives. CoSN, Digital Promise, European EdTech Alliance, Larimore, J., and PACE (2023). AI Guidance for Schools Toolkit (Handout). Retrieved Oct. 24, 2023.
