Time for Schools to Stop Testing Kids Like It’s the 1990s. AI Can Show the Way

Swift: New federal waivers are an opportunity for states to update antiquated assessments and reimagine how to effectively measure student learning.


When the U.S. Department of Education granted Iowa a waiver from key federal education mandates this year, it did more than signal a shift toward greater state flexibility. It marked a watershed moment for state chiefs to seize the opportunity to update antiquated assessments and reimagine how to effectively measure student learning.

Traditional end-of-year exams were intended primarily for accountability purposes. While useful, they were never built to provide a real-time understanding of student growth or inform instruction. But despite technological advances, they’ve largely defined state assessment systems for more than two decades.

First enshrined in federal policy through the No Child Left Behind Act, these rear-facing tests offer a snapshot of performance after learning has occurred, giving educators little opportunity to adjust teaching or address gaps in real time. Preparation for these exams eats into valuable classroom learning time, and the cost of purchasing and administering them represents a major line item on state budgets. At times, their shortcomings have even undermined public support for high standards.

Back when these exams were developed, schools had few options for addressing the staggering and unacceptable achievement gaps facing students. And, in places like Massachusetts, where I helped craft and implement education reforms three decades ago, the balanced carrot-and-stick approach did lead to improvements for some time. 

The Massachusetts Comprehensive Assessment System, which I helped steward, was one of the nation’s first statewide testing programs to quantify academic performance. The MCAS was critical in driving achievement across the state, and Massachusetts students scored at the highest levels in math and English on the National Assessment of Educational Progress. Our state was named a national model by the U.S. Department of Education, and the MCAS served as a blueprint for other states.

This was in the mid-1990s, and statewide assessments — albeit clunky, expensive and time-consuming — were as revolutionary as the car phones being installed in center consoles. For the first time, states could provide parents, educators and policymakers with objective data for identifying failing schools and specific student needs.

But while mobile phones and BlackBerrys have long since replaced car phones, the world of education has clung to the same outdated approach to testing. What’s worse, rather than replace the exams, states and districts — faced with both an appetite for data and the limitations of end-of-year testing — layered in more assessments in the form of interim exams, diagnostic tests and progress-monitoring tools. The goal was to generate more information that could be used immediately to inform instruction. The result, too often, has simply been more testing. At the risk of extending my car phone analogy too far, they bought and used fancy new iPhones while still paying for and insisting on using that old car phone when making calls on the road.

Today, states and schools have an opportunity to do this differently. AI-enabled assessments, for example, can listen as students read aloud during normal classroom practice, continuously gauging fluency, accuracy and comprehension in real time — so the assessment is happening while learning is happening, not taking its place. This, in turn, can equip educators with immediate feedback to make lessons more effective and ensure that what’s being tested aligns with state standards.

In the past, efforts to adopt new measures have been thwarted by federal regulatory constraints and limited technological capabilities. But many of those longstanding barriers to innovation are disappearing. Today, artificial intelligence makes static repositories of test questions a vestige of the past by automatically adjusting difficulty based on a student’s answers. And speech recognition allows a once unimaginable capture of early literacy progress — even for children who cannot yet take a test.

Perhaps most importantly, the current administration has made clear that states now have the flexibility to explore new options. The Every Student Succeeds Act still requires annual testing, but it also permits states to choose between administering a large-scale, end-of-year exam or multiple interim tests that combine their results into a single score.

Some states are already using this flexibility. One recently adopted a model that assesses students at the beginning, middle and end of the school year to measure progress. Another recently received federal approval to pilot a similar model that gives students multiple opportunities to demonstrate mastery and provides educators with frequent insights about where their students might need support. Others are exploring similar approaches.

With modern technology, these models can go even further. Computer-based assessments can now capture the voice of a student reading aloud, solving a math problem or completing a writing task to generate meaningful data without pausing learning for a separate exam. The result is not simply more information, but more useful information — timely, aligned to core math and reading standards, and specific about where students are succeeding and where they are falling behind.

These assessment models are already operating in classrooms across the country, and they can produce reliable measures of growth while correlating strongly with traditional benchmarks. More importantly, they give teachers insight when it can still change outcomes for students. That distinction matters — especially now.

National data show troubling declines in math and reading performance. Achievement gaps are widening. High school students are performing below levels seen two decades ago. In this environment, diagnosing learning loss in June is insufficient. Educators and students need systems that accelerate learning in October.

An assessment should not be an autopsy. It should be a compass.
