(TNS) — School districts and vendors agree that the lack of clear standards for the use of artificial intelligence in education creates risks for both parties.
As it stands, education companies looking to bring AI products to market must rely on a hodgepodge of guidelines put forward by various organizations. At the same time, they must navigate concerns around data privacy, information accuracy, and transparency.
However, there is a collective movement toward clarity. Many edtech organizations have banded together to draft their own guidelines to help providers develop AI products responsibly, and school districts are increasingly speaking up about the standards they expect from vendors in meetings and product solicitations.
“Standards are just beginning to enter the conversation,” said Pete Just, a former school district technology administrator and a former president of the Consortium for School Networking, an organization representing K-12 technology stakeholders. Where standards do exist, “they’re very generalized,” he added.
“We are seeing the Wild West evolve into something a little more civilized, and that will benefit our students and staff as we move forward.”
For this story, EdWeek Market Brief spoke with edtech company leaders, school system officials, and advocates for stronger AI requirements about where current standards fall short, the legal requirements and guidelines companies should be aware of, and the need to create standards in a way that keeps pace with rapidly evolving technology.
Best practices and a moving target
Many organizations have announced their own sets of artificial intelligence guidelines in recent months, as groups work on what are considered best practices for AI development in education.
One coalition that has grown in recent years is the EdSafe AI Alliance, a group of education and technology companies working to define the AI landscape.
Since its founding, the group has published the SAFE Benchmarking Framework, which serves as a roadmap focused on AI safety, accountability, fairness, and effectiveness. It has also produced the AI+Education Policy Tracker, a comprehensive collection of state, federal, and international policies related to schools.
A coalition of seven edtech organizations (1EdTech, CAST, CoSN, Digital Promise, InnovateEDU, ISTE, and SETDA) also announced a list of five quality indicators for AI products at this year’s ISTE conference. The indicators call for products to be safe, evidence-based, inclusive, usable, and interoperable.
Other organizations are also drafting their own versions of AI guidelines.
The Consortium for School Networking has created an AI Maturity Model to help school districts determine their readiness to integrate AI technology. And the Software and Information Industry Association, a leading organization representing vendors, has released Principles for the Future of AI in Education, which aim to guide vendors’ AI implementations to be purpose-driven, transparent, and equitable.
In January, 1EdTech published a rubric that serves as a self-assessment for vendors, helping edtech companies identify what they need to look out for when responsibly incorporating generative AI into their tools. It is also designed to help school districts better understand what types of questions to ask ed-tech companies.
Beatriz Arnilas, vice president of product management at 1EdTech, said that when the assessment was developed, its focus was on privacy, security, and the safe use of AI applications in the education market. But as the technology advanced, her group realized the conversation needed to include more.
Are users within the district informed that the product incorporates AI? Are there options to opt out of the tool’s use of artificial intelligence, especially when it may be used by young children? Where is the data that feeds the model being collected? How does the AI platform or tool control for bias and hallucinations? Who owns the prompt data?
The organization will soon release a more comprehensive version of the rubric that addresses these updated questions and other considerations relevant to the wide variety of artificial intelligence being weighed in schools. Unlike 1EdTech’s previous guides, the updated rubric is built in smaller sections, so parts of it can be revised quickly as AI evolves without reworking the entire document.
“This shows how fast AI is developing. We realize there is a need for more out there,” Arnilas said.
1EdTech also compiles a list of groups that have issued AI guidelines, including advocacy groups, university systems, and state departments of education, and identifies the intended audience for each document.
The goal, Arnilas said, is to establish a “systematic effort” to promote the responsible use of AI, one that saves teachers’ time and provides access to high-quality education for students who typically do not have it.
Federal policy in action
Some of the AI standards that edtech companies end up following will likely be set by federal mandates rather than by school districts or advocacy groups.
Erin Mote, CEO and founder of the innovation-focused nonprofit InnovateEDU, says there are several initiatives vendors should take note of. One is the potential enactment of the Kids Online Safety Act and the Children and Teens’ Online Privacy Protection Act (known as COPPA 2.0), federal legislation that would mark a major change in how students are protected online, including how data collected by AI is handled.
Vendors should also be aware of the Federal Trade Commission’s recent crackdown on children’s privacy, which will impact how artificial intelligence handles sensitive data. The FTC has also issued a number of guidance documents specifically regarding AI and its use.
“If AI doesn’t meet the evidence for claims about whether it works in a certain way or whether it’s free of bias, products cannot actually claim to have AI in them,” Ben Wiseman, associate director of the FTC’s Division of Privacy and Identity Protection, said in an interview with EdWeek Market Brief last year.
In addition, providers should be familiar with recent regulations on web accessibility released by the U.S. Department of Justice this summer, which mean AI developers must ensure their technology complies with guidelines for making content fully available to people with disabilities.
The U.S. Department of Education also released non-regulatory guidance on AI this summer, but more specific regulations are still in the early stages, Mote said.
States are also beginning to take more initiative in issuing guidelines. According to SETDA’s annual report released this month, 23 states have issued AI guidance to date, making artificial intelligence standards the second-highest priority for state leaders after cybersecurity.
Holding vendors accountable through RFPs
Meanwhile, school districts are tightening expectations for AI best practices through the requests for proposals they issue for ed-tech products.
“They no longer ask, ‘Are you documenting all your security processes? Are you protecting your data?’” Mote says. “They’re saying, ‘Explain it.’ It’s a deeper level of sophistication than I’ve ever seen in terms of asking questions about how the data moves.”
Mote said she has seen such changes in RFPs put out by the Education Technology Joint Powers Authority, which represents more than 2 million students across California.
The language asks vendors to “describe their proposed solution to support full access for participants to extract their own user-generated system and usage data.”
The RFP also includes provisions specifically addressing artificial intelligence. If an ed-tech provider uses AI as part of its work with a school system, the RFP states that the provider “does not have the right to reproduce and/or use the provided (student data) in any way for the purpose of training artificial intelligence” or generating content without first obtaining permission from the school district.
Mote said the RFP is an example of a district “trying to stay ahead of the curve rather than just cleaning up” after the fact. “Edtech solution providers will be asked to give more specific and direct answers. It’s no longer just a yes-or-no checkbox; it’s ‘Give me an example.’”
Jeremy Davis, deputy director of the Education Technology Joint Powers Authority, agreed with Mote, saying districts are moving toward conducting their own detailed reviews of AI in procurement.
“We always need to know exactly what they’re doing with our data,” he says. “Not one ounce of data should be used in any way that the district has not consented to.”
Back to basics
Despite the lack of industry-wide standards, education companies looking to develop AI responsibly would be wise to follow basic best practices for building solid education technology, officials said. These principles include planning for implementation, professional learning, inclusivity, cybersecurity, and more.
“Currently, there is no certification body for AI, and we don’t know if that will ever happen,” said Julia Fallon, executive director of the State Educational Technology Directors Association. “But it comes back to good technology. Is it accessible? Is it interoperable? Is it safe? Is it secure? Is it age-appropriate?”
Jeff Streber, vice president of software product management at education company Savvas Learning, said the ultimate goal of all of his company’s AI tools and features, like any product, is effectiveness.
“You have to be able to prove that your product makes a tangible difference in the classroom,” he says. “We remain focused on the goal of improving teaching and learning, even if (the district’s) AI policy is not yet very advanced.”
Savvas’ internal guidelines on how to approach AI draw on guides from various other organizations. The company’s AI policy focuses on transparency about implementation, a Socratic style that encourages responses from students, and moving beyond overarching concerns like guardrails, privacy, and avoiding bias to the specific questions a district needs answered, Streber said.
“The state guidelines and the federal Department of Education guidelines are helpful in the grand scheme of things,” Streber said. “But you have to identify for yourself the more specific questions that generalized documents can’t answer.”
As AI develops, “standards need to keep up with the pace of change, or they will become irrelevant,” he said.
Ian Zhu, co-founder and CEO of SchoolJoy, an AI-powered education management platform, says a detailed understanding of how school districts work will also be important as AI standards evolve.
Common AI frameworks for curriculum and safety aren’t enough, he says. AI standards should be developed with different district contexts in mind, including how AI technology is used for functions like strategic planning and finance.
“The conversation around AI is too open-ended, and we need more constraints at this point,” Zhu said. “But to keep students safe and use AI in an ethical way, we need to consider both the guidelines and the consequences, and the standards we hold ourselves to.”
©2024 Education Week (Bethesda, Maryland). Distributed by Tribune Content Agency, LLC.