As students across Texas take their annual state-mandated exams, a significant change is underway: an artificial intelligence-powered scoring system is poised to replace a substantial portion of the state's human graders.
As reported by The Texas Tribune, the Texas Education Agency (TEA) is adopting an "automated scoring engine" that uses natural language processing, the technology behind chatbots like OpenAI's ChatGPT. The system will assess open-ended questions on the State of Texas Assessments of Academic Readiness (STAAR) exams, with projected annual savings of $15–20 million from reduced reliance on temporary human scorers.
The redesigned STAAR exams feature fewer multiple-choice questions and many more open-ended items, which are time-intensive to score and pose logistical challenges, as TEA director of student assessment Jose Rios has noted.
The AI scoring system was trained on a dataset of 3,000 exam responses, each previously evaluated by human graders. To promote accuracy and fairness, TEA has implemented safeguards, including a rescore process for a portion of computer-graded results and for responses that confound the algorithm.
Despite TEA's optimism about the cost savings, educators remain wary. During a limited trial in December 2023, some constructed responses received zero scores, a result flagged by Lewisville Independent School District superintendent Lori Rapp.
While AI essay-scoring engines have been used in other states since 2019, TEA emphasizes that its closed system differs from general-purpose AI, aiming to assuage fears of unfair assessment. Skepticism persists nonetheless, fueled by broader concerns about AI's role in education and its potential effects on student learning.
More: https://www.theverge.com/2024/4/10/24126206/texas-staar-exam-graders-ai-automated-scoring-engine
