Artificial Intelligence – The 74, America's Education News Source

California Students Author New 'Digital Wellness' Bill, Say Phone Bans Fall Short

Mon, 20 Apr 2026

This article was originally published in

After taking a break from social media, Orange County student Elise Choi helped write a bill that would mandate California schools teach digital wellness – a response to growing concerns about how technology is affecting students' mental health.

Assembly Bill 2071 would require California schools to include digital wellness in health classes, teaching students how social media and AI affect their mental health and behavior. Supporters say the bill focuses not on limiting access, but on teaching students how to use technology responsibly. 




Elise, a junior at the Orange County School of the Arts and a member of the student coalition GenUp, said a bill that serves students – not one that simply alleviates parent anxieties – is long overdue.

"It's powerful to have students at the center of policy change when it comes to education legislation," Elise said. "It's important because we are the ultimate stakeholders, and these issues affect us and our future."

The bill follows landmark court verdicts that found social media companies Meta and Google liable for designing "addictive" features and endangering children online. Elise said it also responds to what experts describe as a growing , fueled in part by  about social media use.

If the bill passes, the California Department of Education must develop a plan by January 2028 to teach students about topics such as healthy screen habits, algorithms and AI, and safe interactions on social media. The proposal passed a committee hearing last week and is expected to pass the Legislature with bipartisan support.

State Assemblymember Josh Hoover, R-Folsom, who introduced the bill in the Legislature, said the idea of digital wellness instruction was born out of student pushback against the Phone Free Schools Act, which would require all public school districts to create policies to ban or prohibit mobile phone use starting in July. 

"Now, students are realizing how much the screen time and the social media use really does impact their well-being," Hoover said. "And they're actually getting excited about making changes and helping their peers actually improve their health as well."

Where cellphone bans fall short

For many digital wellness advocates like Kelly Mendoza, a senior education leader at Media Education Lab who served as an expert consultant on the bill, digital wellness education picks up where California schools' cellphone bans fall short.

"Phone-free schools can reduce screen time or potentially reduce behavioral issues that can happen at school, but that doesn't teach students healthy media use, decision-making and self-regulation," Mendoza said. "Students are still not offered the opportunity to learn these skills in school in a structured and valuable way."

In her work at a phone-free high school, Mendoza said, she regularly sees students who are cyberbullied, experience depression and suicidal thoughts, are unhealthily attached to social media or struggle with loneliness. A digital wellness course, she said, would teach students that they have control over their relationship with their phones.

Students would learn practical skills such as adjusting account settings, disabling notifications and managing algorithms to limit harmful or addictive content. They would also work through scenarios such as cyberbullying, body image pressure and misinformation to develop healthier behaviors online.   

Elise said she would like the curriculum to include families, particularly those from low-income and under-resourced communities. She recently attended a digital wellness workshop at a private school in San Diego, where parents and students learned to create a screen time agreement.

"Digital wellness instruction is very inconsistent, and it depends a lot on the resources of the school," Elise said. "I also envision digital wellness to be an equitable subject that hopefully all students can have access to."

Social media can be 'good' but 'inescapable'

Elise said social media also served as an essential "tool" for building connections after she switched to a different high school. She met students online who had launched social impact clubs and helped her sister recruit volunteers to teach dance classes for people with disabilities.

"We're not anti-tech," Elise said. "We're for education, and we have to be balanced with technology, because it can be good and also inescapable."

Elise said she met with representatives from Google last week, who she said generally supported "the course of safety (for) children and youth online" and expressed support for the bill.

Hoover, however, emphasized that the bill is not meant to shield social media companies from regulation.  

"We cannot count on these companies to police themselves when it comes to child safety, so it's important that we're educating students, but also putting the right rules and regulations in place," he said.

Hoover has introduced additional bills to regulate children's use of social media, including one that would prohibit children under 16 from creating social media accounts – similar to Australia's blanket ban – and another that would establish an e-safety commission to enforce age compliance.

"Tech companies have a responsibility to be regulated to make sure that they're not entrapping kids into a very addictive technology," Hoover said.

Mendoza, the parent of a teenager, said her daughter uses social media to share her art and receive feedback, and has connected with a community of artists there. She said the course could also teach students how to reap the "rewards and opportunities" of social media.

The course would examine "What are the healthy communities that you connect to that are really fostering your growth and your development as a person? And how can you change your algorithm to connect more with those things?" Mendoza said.

Before she got her first phone, Elise said, she spent her time solving Rubik's cubes, baking and reading. She said she now spends time on those hobbies when she gets home from school.

"The cellphone ban only gets us halfway – it doesn't change our relationship with our devices," Elise said. "We need to teach kids and give us skills for what happens when we get our phones back at the end of the day."

Five Things to Know About the New Khan TED Institute

Tue, 14 Apr 2026

Three well-known but very different names in nonprofit education say they're coming together Tuesday to launch an improbable enterprise: a new, AI-focused college, designed for a world in which artificial intelligence is reshaping what employers want. It promises a bachelor's degree in applied AI, delivered almost entirely online in as little as two years – for less than the price of a used Toyota Corolla.

Applications are expected to open in 2027 for the Khan TED Institute, a joint project of Khan Academy, TED – purveyor of the popular TED Talks – and the Educational Testing Service.




鈥淚 think there’s always been, frankly, some need for a program like this,鈥 said Khan Academy founder Sal Khan. Many people, he said, can鈥檛 afford a college degree or can鈥檛 take the time out of their work lives to attend four years of classes. 鈥淚t could be that they have pursued a degree, but it’s not giving the signal that would give them the opportunities that they would want.鈥

Another founder, Amit Sevak, who leads ETS, acknowledged that they are still working out many of the details, but that the new institution could someday enroll "tens of thousands" of students, rivaling flagship state universities. Sevak said he's "100%" anticipating that its instructors will be humans, most likely a large network of adjuncts.

"We still believe in the value of a human teacher," he said. "We think that there's so much socialization and collaboration that takes place [in the classroom]. There's also the classic need for classroom management and some pedagogical oversight over the assessments."

Here are five things you need to know about the new enterprise:

1. It'll offer a bachelor's degree in applied AI in various fields such as business, marketing, human resources, healthcare and more.

The college will offer a full undergraduate bachelor's degree organized around three pillars: core academic knowledge (math, statistics, economics, computer science, science, history and writing); applied AI skills; and "durable" human skills such as communication, leadership, collaboration, peer tutoring and public speaking.

Early employer partners include Microsoft, Google and , an AI app development site.

2. It's expected to be competency-based, cost less than $10,000 and take as little as half the time of a traditional bachelor's degree.

The college's founding partners say its total cost will likely be under $10,000, a fraction of the cost of a four-year degree.


Rather than requiring four years of seat time, Sevak said, the institute is built around a competency-based model, offering students the opportunity to advance when they demonstrate mastery. That means students could potentially complete the degree in two to three years, he said, depending on how quickly they demonstrate required competencies.

That opens it up to many different kinds of students, he said, including motivated high schoolers who want to earn undergraduate credits quickly before graduation, working adults seeking advancement in their jobs and students already enrolled in traditional colleges who want to stack an AI credential on top of their existing undergraduate credits.

Khan said the new college "is something I've thought about doing in some way, shape or form, for many years, and the changes within the job market, because of AI, only accelerated that."

He said the idea came out of conversations with TED chairman about a year and a half ago. "We started saying, 'It feels like there's something powerful between Khan Academy and TED. We're both learning organizations. Khan Academy is known for academic learning from K-through-14. TED is known as [embodying] lifelong learning. And it's about human connection. And it feels like we both have fairly unique brands in the not-for-profit space and the education space.'"

Khan later spoke at an ETS trustees dinner and got to know Sevak.

鈥淭hey’ve been looking at the same things,鈥 he said, 鈥渁nd they’ve also come up with a framework on durable skills and thinking about ways to assess them. And we realized, 鈥楲ook, the world needs this. And if the three of us come together, this will be very credible and hopefully has a high chance of helping a lot of people.鈥欌

3. It's an "AI-first" institution, weaving artificial intelligence into how courses are designed, taught and assessed.

Sevak said courses will be shaped by AI and teaching will be supported by AI agents, software systems that can tutor students, answer questions and provide feedback. And students will be prepared for work in "AI-native" environments.

Instruction will likely be 100% online at the college's launch, with an emphasis on asynchronous coursework to accommodate students in different time zones and life circumstances. Over time, Sevak said, they'll likely explore a hybrid format.

4. Khan Academy will provide the college's learning platform and pedagogical infrastructure, despite its founder's tempered enthusiasm about AI and learning.

TED, the conference organization best known for its short talks, will incorporate its content into the curriculum, giving students access to live talks, Q&A sessions and community-based learning with TED speakers.

And ETS, the testing and measurement organization that produces the GRE and TOEFL tests, will contribute its assessment expertise, said Sevak.

Khan Academy, the popular free tutoring website, which has about and operates its own , will offer its technology to deliver the college's coursework, organizers said. Khan, who founded it in 2008, will hold the title of "TED Vision Steward" in the new partnership.


The announcement comes just a few days after Khan told Chalkbeat that the learning revolution he predicted in 2023, upon Khanmigo's release, has yet to materialize.

In September 2022, Khan and Kristen DiCerbo, the organization's chief learning officer, were among the first people outside of OpenAI to get access to GPT-4, the large language model that at the time powered ChatGPT. Their experiments gave rise to a revolution in Khan's thinking: In 2023, he delivered a TED Talk in which he predicted "the biggest positive transformation that education has ever seen," saying we'd soon be able to give "every student on the planet an artificially intelligent but amazing personal tutor."

In 2024, Khan's book, , bore the subtitle "How AI Will Revolutionize Education."

But more than three years after Khanmigo's launch, Khan admitted, "For a lot of students, it was a non-event. They just didn't use it much."

A few students, he said, have used the AI chatbot readily, while others haven't. AI tutoring, he concluded, doesn't necessarily motivate students to learn or fill in knowledge gaps they need to learn more. He's still optimistic about AI in education, but also sees its limits. "I just view it as part of the solution," he said. "I don't view it as the end-all and be-all."

On Monday, Khan told The 74 that AI is "just going to be part of our arsenal to help make more engaging tools. Maybe we'll be able to give more rich assessment practice. Instead of having multiple-choice questions, you can start to have 'explain your thinking' [questions]. So it starts to open up the aperture."

5. It's very much a work in progress.

Speaking four days before the launch, Sevak admitted that nearly everything about the venture "is still evolving," and that the team is "workshopping the pedagogical design" of the new college.

Sevak said the institute is in talks with regional and national organizations that can offer "the highest form of accreditation," a step that would set it apart from a growing number of online certificates, micro-credentials and boot camps.

鈥淲e’re really in the early days, and it’s just going to take some time for us to adapt,鈥 he said. 

The college's curriculum isn't yet finalized and applications are 12 to 18 months away. Likewise, the specific structure of its hybrid and asynchronous models, its faculty roster and the full range of majors are all still in development.

"Our intention is, over time, to have a whole range of specializations," said Sevak. But the program's core is designed to prepare students "to be really AI-centric" for a new reality. "We're seeing [AI] as ripping through the economy," creating a lot of uncertainty for young people.

More to the point, said Khan, "Work is changing very fast. AI is changing everything."

Gen Z Increasingly Skeptical of – And Angry About – Artificial Intelligence

Thu, 09 Apr 2026

While some might envision Gen Z welcoming artificial intelligence into their lives, a new Gallup survey finds people between the ages of 14 and 29 are becoming increasingly skeptical of – and downright mad at – AI.

Compared to a similar survey last year, they're less excited and hopeful about the change it could bring and more angry at its existence, citing concerns about AI's impact on their cognitive abilities and professional opportunities.

Respondents said they used AI at nearly the same rate they did before – they reported only a slight increase in daily and weekly exposure – but when asked how it makes them feel, the answers revealed growing misgivings.

Thirty-one percent said it made them angry, up 9 percentage points from 2025. And just 22% said it made them feel excited, down 14 percentage points from last year. Only 18% of respondents said it made them feel hopeful, marking a nine-point drop. Forty-two percent said it made them feel anxious, roughly the same as last year. 

Zach Hrynowski, senior education researcher at Gallup, said the switch was swift. 

"One of my working theories is that (it's) the high schoolers, who are in their senior year, or especially those college students, who are maybe thinking, 'AI is taking my job. I just went to college for four years: I spent all this money and now it's turning my industry upside down,'" he said.

Only 46% of respondents believed AI would help them learn faster, down from 53% the prior year, Gallup found. Fifty-six percent of respondents said it would help them to expedite their work compared to 66% last year. 

Hrynowski notes, too, that users' unease wasn't entirely tied to the amount of time they spend engaging with AI.

"Year over year, among that super user group, they're much less excited, they are much less hopeful – and they are more angry," he said. "So this is not a case of some people who are adopting it and loving it and some people who are just avoiding it and feel negatively about it."

Nearly half of respondents said the risk of the technology outweighs the benefits in the workforce. Just 37% believed it would help them find accurate information, down from 43% the prior year, and only 31% believed it would help them come up with new ideas, compared to 42% in 2025.

The survey also notes some disparities by age and race. For example, older Gen Zers are more likely than younger ones to voice concerns about AI鈥檚 impact on learning in general. 

Asked how likely it is that AI designed mainly to complete tasks faster will make learning more difficult in the future, 74% of K-12 respondents said it was "very likely" or "somewhat likely," compared to 83% of Gen Z adults who said the same. Men and Black respondents were also less concerned about learning impact than their peers overall.

Results are based on a survey of 1,572 people spread throughout every state and Washington, D.C., conducted between Feb. 24 and March 4, 2026. It was commissioned by the Walton Family Foundation and Global Silicon Valley. Together, the Walton Family Foundation and Gallup are conducting ongoing research into Gen Z's attitudes toward AI.

Hrynowski believes there might be a link between recent revelations about the harmful nature of social media and AI-related distrust: Many of the respondents came of age, he notes, just as former surgeon general Vivek H. Murthy called for a about its use. 

AI shapes the user experience in social media. Just last month, a California jury found social media company Meta – owner of Facebook, Instagram, WhatsApp, Messenger and Threads – and YouTube injured a young woman's mental health by design, in a verdict that could encourage untold others.

This was the second of two critical decisions: Just a day earlier, a New Mexico jury found Meta – and hid what it knew about child sexual exploitation on its platforms.

I’ve always been very impressed from the start of this work with Gen Z that across the board, not just with AI, they are keenly aware of the risks of technology, whether it’s social media, whether it’s AI or screen time,鈥 Hrynowski said. 

They are not the only generation to harbor these worries. A growing number of parents of K-12 students are pushing back on their screen time, not just , but  

Despite respondents' skepticism about AI, they're also readily aware that the technology won't be walked back: 52% acknowledge that they will need to know how to use AI if they go to college or take classes after high school, while 48% think they will need to know how to use AI in the workplace.

An earlier Gallup study, released just last week, shows 42% of bachelor’s degree students have reconsidered their major because of AI.

Gen Z, in its reluctant acceptance of the technology, wants help navigating it, both in academic settings and in the workplace. Schools are stepping up, the survey revealed: The share of K-12 students who say their school has AI rules moved from 51% in 2025 to 74% this year.

Disclosure: The Walton Family Foundation provides financial support to The 74.

Behind the Reinvention of Summit Public Schools With AI

Tue, 07 Apr 2026

Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic – and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In the latest episode exploring new school models powered by artificial intelligence, Summit Public Schools' Cady Ching and Dan Effland join Michael Horn and Diane Tavenner to discuss Summit's transformation into an AI-native school model. The conversation examines how clarity around school outcomes and model design enables the effective integration of new technology, followed by insights into the evolution of Summit's expeditions. Ching and Effland emphasize the importance of a holistic, purposeful education, as well as the need for a robust technology infrastructure to scale innovation.

Listen to the episode below. A full transcript follows.

Cady Ching: I think what has been really helpful for me is to list the ways that a model is not. It’s not a curriculum, it’s not an LMS, it’s not a schedule by itself, it’s not a set of beliefs or a graduate profile by itself. Those are parts of a model, but a lot of the building that we’re seeing right now is focused on building for parts versus building for an actual whole model. And so the AI-native model is how all of those model elements are working together. And it is not going to be replacing a school model. It’s going to expose whether or not you actually have a model. And I think AI is forcing a lot of school systems right now to get really honest, because if you don’t know what students are supposed to be learning and you’re not sure how they’re showing that or what adults are responsible for, AI just layers on complexity and, quite honestly, chaos. But if you do have the level of clarity of what Dan is speaking about, AI is actually making systems work a lot better, or it can make systems work a lot better.

I think the jury is out on the tools that we need and how we can create the tools that we need. But AI really isn’t replacing, it’s revealing whether or not your school model actually exists.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane, it is good to see you with some excitement for today’s episode.

Diane Tavenner: Yeah, we have a real treat today. We’ve got two of my favorite educators in the world joining us for what I’m sure is going to be just a really interesting conversation.

Michael Horn: Well, and for years, as obviously I’ve learned about Summit from you, direct from you, and yet it’s been nearly 3 years, I think, since you passed the baton, if math is still a thing. And I know from afar that the team continues to be among the most innovative schools in the country and so I know that they continue to think about reinvention, and frankly, you know, what does Summit need to look like? How can it get even better? All these questions for its learners. And so I’m incredibly excited to dig in and learn about what they’re calling Summit 3.0 on today’s show. I will say it’s also interesting to have this conversation because we’re sort of in our model geek out, if you will, at the moment, right? While we’re having this conversation, we’ve had the founders of Alpha School, Flourish on, both of which are designed as AI-native models. And for those who listened to those episodes we sort of created a little bit of a side-by-side, if you will, where we said, hey, Summit is here as this baseline for a pre-AI model trying to do personalization or optimization of each kid’s learning. And we explored what can you do in an AI-native world? How can you design differently? But today what’s exciting, I think, is we’re going to get to dig into what does it look like for an existing model with that orientation to become, quote unquote, AI-native.

And as you know, transformation and how organizations reinvent themselves, that’s something I get really passionate about and excited. So I cannot wait to learn from the real-life example in progress.

Diane Tavenner: Well, we’ve got the two perfect people for that conversation, Michael. And so let me introduce you to Cady Ching, who is the CEO of Summit Public Schools, where she was an extraordinary teacher and school and network leader for a decade before taking on that role. So she brings this full spectrum of experience to this next phase. And Dan Effland, who is the senior director of innovation at Summit, where he was also an extraordinary teacher and school leader before taking on this new role of leading for the second time in the history of Summit, the reinvention of the model. And so welcome, Dan and Cady. We’re so happy that you’re here with us and excited to talk to you about the work you’re doing.

Cady Ching: Thank you. Thank you so much. I’m excited too. It’s coming at this moment for Dan and I where we’ve been trying on a lot of language about where we’ve been, where we are today, and where we’re going. So selfishly, this is a milestone for us.

Michael Horn: Well, and I get to feel like I’m jumping in on a team huddle of y’all. Yeah, this will, this will, this will be fun.

Cady Ching: Welcome, Michael.

Michael Horn: Thank you.

What Is a School? 

Diane Tavenner: Dan and Cady, a few weeks ago we got together and you walked me through the thinking and planning you're doing. And honestly, I was captivated, you know, because I got stuck on it and I wanted to dissect every word of this simplest definition of school. It's honestly the simplest definition I've ever read of a school. And I wanted to start there today because I really think we always have talked about getting to the simplicity on the other side of complexity. And I think you've done it with this definition, and I think it's going to be really powerful in this next chapter. And so maybe, Dan, kick us off. And if you will share that definition and a little bit about how it came to you or how you all came to it in your process and what you think it unlocks.

Dan Effland: Yeah, happy to. And thanks for having me here. I’m so excited to talk to you all. Yeah, so, I mean, we’ve been working on this for years, right? What is simplicity on the other side of complexity? And I think as we’ve been digging into what does redesigning look like, it became really clear that you have to get down to some foundational elements to avoid designing within conventions and not even really realizing you’re doing it. And so the way we’re thinking about schools is simply, it’s a group of young people. It’s a set of outcomes or competencies. And then it’s a set of resources that help you support young people to achieve those outcomes or competencies. That’s it.

Kids, outcomes, resources. And stripping all the way back to that has allowed us then to engage with our community, because all this work is like with students, caregivers, and educators, and go like, OK, what do we really want? What do schools really need to be? With full freedom, we call them dreaming sessions, where we can really engage off the simplest foundational elements and not get hooked by any of the conventions that have existed, you know, for decades or longer than that in a lot of cases.

Summit 2.0: Evolution and Vision

Michael Horn: It’s really cool because you’ve sort of, like you said, you sort of have a conversation around what those end posts, and we can sort of figure out what’s inside the box to get there apart from what’s always been there. But before we go to that sort of Summit 3.0 vision and where you’re thinking currently is, because I’m imagining you’re going to have lots of trade-offs and changes as you go through the design process, but I think it would be helpful to do a quick turn on Summit 2.0. Both to ground, frankly, our audience, but also to set up a question of how things are changing and where and so forth so that we can understand that. And so I’d love, and maybe Cady, you dive in on this first, how would you describe the Summit 2.0 model, which was not only in your schools, but schools across the country? It’s one of the reasons I think it can be called a model,  it’s scaled beyond Summit itself, right? And as you think about that, the new model, what is it in the Summit 2.0 that you’d say, we really want to hold on to this? Or where are the things that you’re saying, hey, actually, that’s something we can leave behind or start to question whether we want to change that?

Cady Ching: Yeah, thanks for asking this question. I think it’s so important. The reason why I keep smiling when you all say Summit 2.0 and 3.0 is because Dan and I actually got into it a couple weeks ago about if we wanted to use that language or not. And my issue with it was I think it’s really, it serves a purpose because like to Diane’s point, it is simplicity at the other end of complexity. And there is a danger in the simplification of the 2.0 and 3.0 because at Summit, we really think about innovation in two ways. One just being innovation through refinement, which is the day-to-day tightening of the model elements that we’re building on for these larger moments of innovation, which we call innovation for redesign. And so those are sort of the sector-shifting, big model, what we call Big M changes. But I’m going to use Summit 2.0 and 3.0 language today in shorthand.

Michael Horn: Thanks for doing it for the listeners.

Cady Ching: Yeah, and so Summit 2.0 really speaks to our personalization era at Summit, where we showed personalization doesn't need to be a luxury. And we did that by designing cohesive student and teacher experiences, and it included model elements like mentoring and skills assessment and differentiation using real-time data, which we enabled through tech. And the tech that we co-built was called the Summit Learning Platform. For me, what I think was most remarkable about what we proved in Summit 2.0 is what you mentioned. It was scalable, and it did scale, and schools were able to implement and sustain the Summit model on public dollars. Which was remarkable. And so we reached 100,000 students, 6,000 educators, and 400 schools across 40 states.

And we did it with district, charter, private, rural, suburban, and urban. It was completely shifting the field. And then we normalized mastery-based learning, personalized playlists and skills and habits in a way that now is the foundation and the baseline in so many places that we're now talking about building these AI-native models on top of. And so to the second part of your question, which I'll kick off and then, Dan, I'm going to pass it to you to add on, we think about model elements and processes that we want to carry forward into Summit 3.0. On the process side, which is where I thrive, we were successful because we were leading from this intersection of the learning science, community engagement, and technology, and we centered teachers and students at every part of the design. And we've used those same design principles to continuously improve our model since Summit 2.0. For me, I feel like we're 4 years into Summit 3.0, and we've already gotten some really exciting data back about situating us as leaders in the field again around what we've built on top of the personalization.

In last year, this is our most recent data, we saw that our Summit alumni have some of the highest post-graduation incomes and lowest debt loads, as compared to other top-performing charters. And this is the type of longitudinal outcome evidence we’ve been really longing for. And when you think back about how Dan just defined the system, what that data does for us is it grounds us in that we do have a really strong set of outcomes and competencies that are timeless. Our young people are now achieving them, and we’re letting go of the old technology to create space for AI-reimagined infrastructure that’s going to help us to better allocate resources. And we think our biggest resource levers are people, technology, and time. So that’s really how we’re thinking about Summit 2.0 setting us up for Summit 3.0.

Michael Horn: Dan, did you want to jump in there and add some?

Dan Effland: Yeah, yeah, I think I’ll just like, you know, I think, you know, Cady and I were both teachers in Summit 2.0. We were both school leaders in this, and so we have a lot of really direct connection to it. And the thing that really makes me think about it is like, you know, the learning platform is no longer in existence, but the elements of the model really deeply took root. Mentoring, mastery, what we called habits of success, I think we’re calling durable skills in our world now. Like, I’m fine with it, whatever we want to call it. It’s become ubiquitous. And I think it really helps. I mean, I think it really gives us a sense of a strong foundation of like, we’ve done this before, we’ve built a model that’s scaled and really stuck.

And it doesn’t matter if the technology, you know, is stuck or not, because that technology is not the model. The tech model is these elements of how you support kids to master these outcomes with whatever available resources you have are. And so, yeah, I think there’s a point of pride when we think about, you know, what we’re begrudgingly calling Summit 2.0. And then I think there’s a sense of the strength of the foundation to then build what’s coming next.

Personalization & Durable Skills

Michael Horn: It’s interesting. And we’ll come back to the technology, I know, and we want to circle back to that. But hearing Cady, you described the model, used a few words that I think are really important for people to hear. One of them was cohesive, because I think a lot of the tech efforts right now around personalization in so much of the country are the opposite of cohesive. And that’s why we’re seeing a blowback sometimes against technology, because it’s sort of all over the place and hundreds of things going on at once for a young person with tons of distractions. And you talked about it being grounded in the learning sciences and personalization as a, as a means, not the ends, right? And, and then you have these longitudinal outcomes. And I’m just calling them out because I think people often lose sight of, this is the bedrock, right, of how we build from, and then go from there. And the other piece, and Dan, you just referenced this, the field is now calling it durable skills.

I still prefer habits of success. Let me just be on record on that one. But one of the things you all really did well around Summit 2.0 was have incredible clarity on the mission, what success looks like, such that you could measure in the way you just said, Cady. And I didn't know those stats. I mean, it's fascinating. And then you had these commencement-level outcomes, right? You were super clear on what does it look like from a, you know, for a Summit graduate as they go out in the wild. And it seems in some ways those commencement-level outcomes have been precursors to the movement across states that we've seen in the Portraits of a Graduate. And I do think that there's some key differences. I'll hold my editorial back on what those are more because I want your take on that.

Like, what, if anything, are the differences and, and between those commencement-level outcomes that you all have defined, the portraits of a graduate that we see states doing, and more broadly, like, what’s the importance of being super clear on what those outcomes are and, and how you’d know, on the other side, if you could speak to that. And I don’t know, I’ll make it a grab bag of which one of you wants to jump in on that.

Dan Effland: Dan, take it away. Awesome. Yeah, I mean, so our vision has been the same for 23 years. It’s preparing young people for a fulfilled life, really all people. We think of our staff as part of that too. And fulfilled life is in some ways, again, simple. It is purposeful work, financial independence, strong community, strong relationships, and health. And so that’s given us a holistic picture, a holistic point B that we’re always going for.

You know, I don’t, I don’t know how I compare it to Portrait of a Graduate or Portrait of a Learner. What I know is it gives us a lot of clarity in that you can’t design a coherent model without clarity of where you’re headed. And that it’s also really important that that clarity is holistic and is not simply a set of academic outcomes. It is much broader than that. And that gives us a huge advantage in this work right now because we’re not spending a lot of time. We certainly talk to our community and affirm, you know, on a regular basis, is this still what people want? Is this still what our communities are after? And it is. And so we can move right to like, okay, how do we get there?

Cady Ching: The thing that I would add on top of that is, I loved, Michael, what you called out around the language of a model. I think that at the operator level, and when I’m talking to, to other school leaders, this word is used in a lot of different ways. And I think what has been really helpful for me is to list the ways that a model is not. It’s not a curriculum. It’s not an LMS. It’s not a schedule by itself. It’s not a set of beliefs or a graduate profile by itself. Those are parts of a model.

But a lot of the building that we’re seeing right now is focused on building for parts versus building for an actual whole model. And so the AI-native model is how all of those model elements are working together, and it is not going to be replacing a school model, it’s going to expose whether or not you actually have a model. And it’s, I think AI is forcing a lot of school systems right now to get really honest, because if you don’t know what students are supposed to be learning, and you’re not sure how they’re showing that, or what adults are responsible for, AI just layers on complexity and quite honestly, chaos. But if you do have the level of clarity of what Dan is speaking about, AI is actually making systems work a lot better, or it can make systems work a lot better. I think the jury is out on the tools that we need and how we can create the tools that we need, um, but AI really isn’t replacing, it’s revealing whether or not your school model actually exists.

Diane Tavenner: I'd love it if we go back to your simple definition, Dan, that we started with, when we sat down. You use the word package of outcomes, and I was obsessed with that word package for this reason, because you know, maybe I will jump in here a little bit on the portrait of a graduate.

Michael Horn: The table’s been set for you, Diane. 

Diane Tavenner: Yeah. And one of our, you know, Summit’s longtime beloved board chair, board member, who honestly is one of the most forward-thinking, I think, philanthropists who launched a scholarship for Summit graduates going into Pathways years ago, like ahead of the curve, you know, sent us a note the other day with a real critique of portraits of a graduate. He was sort of reading about them and was just very, you know, like, what are these people thinking? And I think what he was responding to was a lot of the portraits of the graduate, like, feel very checkboxy and compliance-oriented. Versus this sort of holistic. And I know that’s not the way they were intended.

AI Evolution in Education Models

Diane Tavenner: They all have good intentions behind them, but the way they have been sort of brought to life and then communicated and then implemented are what Cady, I think, is speaking to, not as a model, but as like these individual components that don't have a coherence about how they're actually organized as an organized set of resources to achieve that package of outcomes, if you will. And so I think that what you all just described is at the core of your success going forward and what an advantage you have. And it really speaks honestly to the durability that you're carrying all of that forward in this next phase, that being, living a life of wellbeing – it actually hasn't changed, right? The elements of that haven't changed, and that's what you're equipping young people for. So, you know, in a recent episode, Michael and I had a conversation, just the two of us, which was super fun, and we were dissecting a way of thinking about school models in three buckets. And I know you are both familiar with this framework, which is essentially that, you know, Model 1 will use AI to make sort of the existing industrial model school more efficient and better. Model 2 will stretch the bounds of that industrial model school with integrated AI. And Model 3 will be AI native, you know, essentially built from the ground up with AI capabilities that are assumed to be at the core. And, you know, as you think about where you're now going with Summit 3.0, how do you view it in the context of this framework? And, you know, what does AI make possible that wasn't possible in 2.0 because it was designed pre-AI?

Dan Effland: Love this question. And I did listen to that episode. So I’ll start with the model part, and then I really want to get into what AI makes possible and kind of what it pushes us to do. So I love reading like Learner Studios’ 3 Horizons model. I love Bob Hughes’ paper on the 3 models. I find that stuff really, really important for evaluating what exists and really valuable for visioning and for getting into this place of what really is possible. And I think, and that’s really useful. I will say, when we start designing and working with our young people and working with our caregivers and our educators, I actually find it useful to kind of set those categories aside and to ask the more foundational questions around, like, we know where we want to go, we have this clear vision, we have this really simple, you know, conception of what a school is with kids’ outcomes and resources.

And now let’s go from here. And when you get into, like, as we’ve talked about, we have a lot of clarity about our outcomes already. We really believe deeply that this holistic model of a healthy, thriving, you know, young person, young adult, adult is going to be durable regardless of the transitions that are happening in our society. But when it comes to the resources part, now we have this whole huge different potential, one, AI being a resource, but also a way that I think we’re most really interested when it comes to AI is how we can use it if we integrate it into our tech stack. Really how, like, with a really robust knowledge graph and really strong data layer, you could be dynamically reallocating resources in a way that just would be impossible for people. You know, like when I used to build an annual schedule, like the primary schedule with our Dean of Operations, she and I would sit in an office for a week with a spreadsheet to make a schedule for the year that never changed, right? Like, it’s just so labor-intensive. But now I think when we think about AI as part of our infrastructure, and it’s kind of a layer in our tech stack interacting with a really robust knowledge graph and data layer, we can start to ask ourselves, like, how do we get the right resources to the right kids at the right time for the right outcome? And really get very, very precise, and also do that dynamically. And I think that then allows us to think about personalization, just-in-time instruction, integrating real-world experiences, ensuring that personalized learning still happens in community and there’s deep human connection that is part of personalized learning journey in a way that was, was not possible when, you know, 12 years ago when we were thinking about Summit 2.0, the technology just didn’t exist.

And so, I mean, it’s exciting. I mean, I really think there’s incredible possibility there. And while there’s definitely lots of really cool tools being built, we’re much more focused on the, like, where does this fit as part of our technology infrastructure or our tech stack, because we think that’s, like, potentially a huge lever for transforming learning for young people.

Current Applications of AI in Schools

Michael Horn: It’s fascinating to me, ’cause you just named a number of things that AI could do that I had never thought about in terms of, like, dynamically changing the schedule for, you know, the school and students and, like, there’s some pretty cool things you can start to imagine that ripple out of that. One of the things in that conversation that Diane referenced that she and I agreed to hold ourselves accountable for was to get really specific when we talk to school leaders about, so what’s happening today in your schools that’s actually leveraging AI or is quote, unquote AI native, if you will? And so you all are obviously still in the design phase for 3.0. I use that with trepidation now, but put that aside for a second. Like, today, if I were to, you know, get to be in California again and I was hanging out in your schools, what would I see that’s powered today by something that’s AI native? What is it? What are the tools? What does it look like? What does it do? What are you building versus partnering with? Give, give us a sense of some concrete applications. Anywhere in the tech stack or during the day, that is AI-powered?

Cady Ching: I think this would be a good opportunity to talk about a specific tool that we're using, which maybe not ironically is Futre as one model example of what it can look like. And Dan can speak to specifically what it's looking like in the student and teacher experience. But one of the reasons why I start with speaking about a specific tool is because I think that largely edtech has not – has been really unsuccessful in solving for what we need to operationalize innovative school models. And Futre has been a nice shift of pace for us because it is truly a tool that is building for the child versus fitting a child into a tool or larger system. And I think that the way in which we're using it with our young people can work in many H2 and H3 model contexts because it's able to give us real-time data about our young people and then allows us to build their student experience based on the data that we have about them. Dan, can you introduce Michael a little bit more to Futre and how we're using it at Summit?

Dan Effland: Yeah, absolutely. So Futre right now we're using with our juniors and seniors, although we anticipate starting younger, in the coming year. And right now, our juniors are really using it to do a lot of career exploration, which the tool excels at, and really like exploring very deeply different possibilities. And then what those possibilities mean as far as what they need to be working on now or experiences they have between kind of their current point A and their future point B. And then our seniors are using it to get more concrete about what really, what is my next step? What does that mean? What is the thing I'm doing immediately after high school? – I think we deeply believe this and will proudly say it is best-in-class career-connected learning. It is. Absolutely. It is the thing when we do – when I do focus groups, when we do alumni data, kind of research, it just comes up over and over again because our young people actually get out in the community or within the school building and really doing what we now are calling real-world experiences. We've called them lots of different things over the decades, but we are – one of the things about that though is that kind of like we were talking about, how do we really curate the journey with this resource allocation stuff? Just tracking all of those different experiences, often there's 50 or 60 choices for students at one school when we had those expedition cycles. We're now pulling those experiences onto the Futre platform so we can really start to map what students have been doing, what they haven't been doing, maybe what they should be doing. And then their mentor can take an even more engaged kind of role in coaching them through that pathway. We're really excited about that.

We’re kind of just starting, you know, to pull those on. But I think in the future it’s one of the things that we see that the Futre tool will be really, really helpful with because, you know, young people need coaching as they’re figuring out that concrete next step.

Michael Horn: So super interesting. I actually have two questions, but let me go to you, Dan and Cady, first. And then I have a question for you, Diane. I’m going to put you on the hot seat. But I think we’re allowed to do that. But it’s interesting. You just said something there in your answer, Dan, which was then the mentor or coaching.

And so just like to put a fine point on it, the, like, this works really well because you have a model where there is that function that is meeting on a regular weekly basis, right? And like, so therefore that touchpoint, like it's coherent again to use that word, but I, I would love a quick update on how Expeditions has evolved because when I think when Diane was exiting Summit, like, y'all were in the middle of redesigning it and I'll be super honest, like even though she and I talk basically weekly, I don't actually know the new version of Expeditions. And so, I still have a slide in my talk about Summit that says, you know, like every 8 weeks or whatever, you go off for 2 weeks. And y'all should update us on what's the current state of Expeditions at Summit.

Cady Ching: Yeah, I’ll respond to 2 pieces. One, with the mentoring piece, that model element does exist. One of the reasons why I personally love Futre is because it takes some of the lift of mentors needing to be the vessel of all career pathways off the human. So when we think about that resource allocation of, you know, people, talent, it’s creating a better, more coherent system for the adult as well, which has been so important because we love to center our teachers as well in the design. And then the Expeditions redesign, it’s been really cool. We’ve been, you know, continuously shifting that program based on what our alumni are sharing back with us, based on how the world is shifting. And of course, AI, as so much a part of our students’ experience today and in the future, has shifted it again. It is non-graded鈥 so this is actually surprisingly one of the most controversial things when we rolled it out to parents鈥 they are not receiving grades on the different career exposure pieces that they try out as they’re with us at either the high school levels or as early as 6th grade in Seattle.

And it’s really about ensuring our students get about 9 career exposures between the time they start with us to the moment they leave, because we know it’s really important for them as they develop their identity to see themselves in different career pathways that are all mapping towards high opportunity where they can build their generational wealth for their family. So it’s probably pretty similar in terms of the time allocation. They’re in sort of what we call their core classes for 6 weeks, and then they’re pausing for 2 weeks to go out, usually in the upper grades, off campus. You don’t see 鈥 when people come to observe this on our site, they’re not actually a lot of kids in the building because learning happens without walls. Dan, what else would you add as you’re going? Dan is quite literally on an expedition tour currently. He’s at one of our school sites right now, and right after this recording, he is going to go in and speak to our teachers. So what else would you add?

Dan Effland: Yeah, I mean, I think that's an important side of it is so that, I mean, one, it's just, I was still in a school leadership position when we transitioned to this kind of redesigned Expeditions, and I just can't tell you how powerful the experiences are. I can think of so many stories, so many young people, but like one in particular that a young, he's – well, he's probably not even that young now, but he's 25, but he was a young, young man at the time who was really, really struggling. And this kid was having discipline issues, attendance issues, struggling, like, not necessarily living at home on a regular basis. And we really, we thought we were gonna really lose this kid. And he started doing an expedition experience related to culinary arts. After he did that first one, he did a second one, and then there was kind of a sequence of them where he had, you know, like the first one was kind of like a survey course. It was the community college. It was about 25 kids.

Finding Passion and Purpose

Dan Effland: Then he was able to do one where he was actually kind of shadowing one of the actual culinary arts program college students and learning in a second wave. So I’m having a hard time not using his name, but I’m going to keep it out. But I just loved this kid. And he found his pathway. And not only did he find his pathway and ended up going to a culinary arts program and graduating and now works, you know, like in the culinary arts, you know, scene in Seattle, his attendance improved, his grades went up, his connections with his mentor, with his teachers, with his peers, which were, you know, fraught, got better and better. And he became a healthier human because purpose and passion and having a pathway is essential for all of us. And we’re at a time when, you know, you can read about this everywhere, there’s studies, our young people are really searching for that clarity about purpose and pathway. And when you see it, I mean, it’s just like Cady said, it’s kind of hard, like it’s not a good thing to tour because the kids are mostly out in the community.

Dan Effland: But when you have the privilege of being a school leader and you see these kids over the years and they do their cycles, you just, the impact is unbelievable. So yeah, I just wanted to, yeah –

Designing Education for the Child

Michael Horn: No, the anecdotes make these things always so much more powerful. And I mean, you can, through your story, hear him building a positive identity of himself, right? And that's incredible. Diane, something Cady said made me think of it, which is obviously, you know, folks who listen to us know that you're the entrepreneur behind Futre. I now understand why it was originally called Point B based on Dan's language and I guess, but she said something interesting, which was like a lot of edtech has not helped the launch of new model design, right? Because it's been, and that, that's sort of been obvious to me for why, right? Because the market is schools as they are, and venture capital wants big markets, and right, like, it's – so it's, it's this sort of reductivist thing that happens. But she said you've been designing for the child, and so you've been able to escape that and I wondered if you just might want to reflect on that, because I imagine it is still hard though, um, because you're still like – schools are the conduit to the kids. So just sort of like, what's the advice, or what have you learned, right, through, through navigating that?

Diane Tavenner: Well, I think that I mean, so much of what Dan and Cady have just said is so important. And I think that what, what was one key thing is, you know, I sort of set out to build Futre as an edtech partner that did things differently than what I experienced when I was sitting in, you know, the seat that Dan and Cady are in. And you know, that core value of our company is how we do the work is as important as the work that we do. And so how we do the work is very much co-building with schools and leaders and students. And so, you know, we are out in the field working with students and teachers and people like Dan and Cady literally every other week. So we are literally co-designing and co-building what happens. And so what you just heard, that Futre is being designed to help young people build this identity over a 10-year journey. I mean, that's unheard of, I think, in any sort of tech market.

People don’t think about that. We have real outcomes that people are aiming towards, and most tech products just look at what’s something that exists and try to make it more efficient or slightly better. They don’t think about the integration of it, the flexibility of it, how it will be used by the adults. I mean, as an example, they just told you Futre can be used both in individual coaching, mentoring, advising, counseling. It can also be used with groups of students in a classroom, and it’s actually literally designed to support both of those. And I will say the, the inclusion of really supporting real-world experiences came directly from our engagement with our school partners and our students. That emerged as this real need. And we were watching people literally running around schools with laptops on their arm and all these spreadsheets and trying to organize. And so we have co-built these elements together.

But you’re right, the incentives in the business side of things are not to build this way. And so, you know, like always, we’re going to see if we can prove that wrong and say, no, when you do build this way, you not only get better outcomes for young people, schools and teachers and educators, but you also can be a successful, scalable product.

Michael Horn: So certainly a more enduring product if you, if you thread that needle, right? So for sure.

Cady Ching: Yeah, exactly. So I think it also speaks to why it’s so important for Dan and I to sort of pull together a coalition of the willing with other operators. One thing we haven’t spent – I know we’re almost at time – that much time talking about is how hard this work is. It is challenging, and we have so much to learn. We are not perfect. We are learning every single day. We are constantly seeking out other school systems that have similar visions for education, and we’re trying to learn from them. We’re trying to get out onto their campuses and be in community with them because we know that if we want to build something that’s enduring and lasting and maximizing impact on the number of students in our country, or even globally, we have to build for the students of Summit as well as all students.

And I think that, that’s what’s most important for me as I set out to lead some of this work is if it only works at Summit, it’s not good enough. And what we’ve learned about leading change at scale is that we need a shared purpose for what school is actually for, and that belief that it’s possible to build a system for that purpose, which is actually no small feat. And it’s why we’re spending so much time building what I would call a coalition of the willing, which is educators and systems who agree on our common destination before we start building the actual tools. I think my core idea is that beliefs come first, model comes next, and then the tools come last. And when we get that order right, that’s when the scale can become possible.

Summit Learning: Model vs. Technology

Diane Tavenner: Cady, I want to double-click on what you’re saying because, you know, you talked at the top of this about how Summit Learning had really scaled across the country to 40 states and, you know, 100,000 students, etc. But Dan, you also said the technology, the Summit Learning platform was not the model. It is not the model. And the model has really taken root even as that particular piece of technology has gone away. That said, I do know that you both believe deeply that having an aligned core technology that is the infrastructure that sort of I think, Dan, you used the word guardrails, like puts up the guardrails and the support for the model is profound. And I know that you’re in conversation with other folks who’ve done Summit Learning, for whom it’s taken root as well, but who are having a hard time really keeping that model intact. And so talk about sort of the need for that infrastructure, the role that it plays and what you think it might look like in 3.0. And Cady, you just said it, no one’s going to build technological infrastructure for a single school or a single school system.

And so there has to be this coalition.

Cady Ching: We have to create the market.

Diane Tavenner: Yeah. And so talk about that because the market generally is not very coherent. And as I sit on the other side, it can be really confusing and hard so talk about how you guys are thinking about that.

Enabling Learning Through AI

Dan Effland: Yeah, I think this is something we’ve started spending more and more of our time on as we’ve gotten clearer in the work with our students and caregivers and educators this fall. We’ve gotten clearer about where we’re going. There is this need, which is that technology is not the model, but it is, you know, there’s a reason we talk about time, talent, and technology as the big levers with resources. It is a huge enabler. And I think the possibilities with AI as part of that technology infrastructure make it an even stronger enabler. So I’ve already talked about the idea of dynamically reallocating resources, which I love raising in conversations with educators, because I think sometimes it’s not, like, the shiniest thing to talk about, but we know that getting kids the right thing at the right time in the right sequence is often the difference between learning and not learning, between progress and not progress, and between finding that pathway and not finding it. And so, at a high level, when we’re thinking about that infrastructure, we need to make sure that, like, we have a really rich, you know, amount of data.

And there’s a lot of work to be done there. Our school systems historically have not put data together in ways where you can create what a technology person would call a data lake, in a way where you can really access it as you need it. And then the next element is going to be a really robust knowledge graph that is not just academic standards. It’s got to be much broader than that. And then, of course, the way that AI would then interact with that to allocate and think about your resources. And I’ll share too, like when we think about resources, I generally think of everything as a resource. My time is a resource, Cady’s time is a resource, our educators’ time is a resource, curriculum is a resource, YouTube is a resource. Anything that can help a young person move towards those outcomes, we think of as a resource, and how can we constantly repackage those and get them in the right order while holding onto the vision? Because I think there’s a version of personalized learning that I would call individualized learning.

That’s not what we’re talking about. I believe this has to happen deeply in community and with really strong relationships and human connection. And so personalized learning is actually more complex when you’re committed to maintaining community and relationships, because you’ve got to figure out configurations of young people and not just put everybody separately on a computer with their own particular pathway.

Cady Ching: And that’s what we’re seeing, we’re seeing people just run, sprint towards an outcome without doing the diligence. And I think it’s resulting in a lot of binaries: either you’re tech-forward or you’re human-centered. But there is a way to bring those together and build a model that does both, and that’s what we’re setting out to do.

Dan Effland: Yeah. There’s another binary too, that we haven’t talked about, but we should stamp here, which is this binary of real-world readiness or academic foundations. We now have these camps of, like, we’re all about academics and we’re all about the real world. And when you talk to students and caregivers and educators, no one thinks it should be an either-or. That’s the scarcity mindset we’re often in when we engage educators. And we’re deeply committed that our young people will be prepared with college-ready academic foundations and real-world readiness, which means for us habits of success, communication, collaboration, all of executive functioning. That all has a purpose.

Diane Tavenner: Yeah. One is, as Dan, your story of that student showed, the sense of purpose, which is connected to what my life will look like in the future, really is what drives everything for a young person, right? It’s how they’re forming their identity as they build that vision. It’s what motivates them to stick to the hard work every single day on this journey to get where, where they’re going, and so yeah, I think what you’re up to is really critical. I hope that a lot of schools and systems engage with you to create this demand in the market for this type of infrastructure, dare we say, you know, Summit Learning Platform 3.0 as well. Because I think that it’s really, it’s hard to conceive of a post-AI model that doesn’t have that. That real infrastructure.

And I know you all haven’t seen it or found it yet, but continue to make strides in bringing it to life.

Michael Horn: This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: What will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers and for civic thriving? Learner Studio is sponsoring this season on AI and education because in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.com.

So a good place maybe, Diane, to wrap up.

Should we pivot to our “before we let you off the hook” section? Cady, Dan, we have a tradition here where we, where we talk about something we’ve been reading, writing, watching, listening, whatever it is, not writing, listening to, and eventually I’ll get my verbs correct. But then, so just often we try to keep it outside work, but we often fail. So, Cady, you want to go first, and then Dan, we want to hear what’s been on your playlist or bedside table, and then Diane and I will wrap it up.

Cady Ching: Yeah, sounds great. I have been… I taught my 7-year-old what it means to brain rot. I don’t know if you’ve heard that term, but where you just sit on the couch and just kind of watch nothing for hours and hours. And we did do a Spider-Man and Avengers binge this past weekend. So that is something I have been watching a lot of. Reading is going to be hard for me to separate from the professional. I’ve just been really deep in leader succession. I think to do this work, you need really strong talent and a leadership pipeline.

And so I’ve been in HBR. I check the Marshall Memo every week to see what, what they’re pulling out, to really think about how I’m leading personally, locally, individually, but then also what the sector needs. Dan, I’ll pass it to you.

Dan Effland: Similarly, like the kind of first answer on my mind is just this fire hose of like white papers and podcasts about education and AI.

Cady Ching: And then he screenshots them and sends them to the whole team.

Dan Effland: Yeah, drive everyone nuts with them. But I do have a more, maybe a more fun one on the personal side. Kind of finally reading the Foundation series, the Isaac Asimov kind of classic sci-fi. It’s honestly about connection for me. My siblings are sci-fi readers and I’m very late to the party. And then my father is retired now, and one of his, it seems like, main activities as a retiree is to reread everything Asimov ever wrote multiple times. And so for Christmas this year, I got a stack of these really great, Half Price Books paperbacks of all the Foundation novels, and I’m starting to work through them.

And we have a text thread about them, and they are, it’s a wonderful story, it’s very complex, and it certainly does also make me think a little bit about the future of our world and AI and, and what, you know, where, where young people fit in that, but it’s also just been a really fun way to connect to the family.

Michael Horn: That’s cool. Wow. What about you, Diane?

Diane Tavenner: Well, picking up on that. So first of all, apparently this is not going to be a novel recommendation because this Apple TV series, I guess, is the most watched at this point. But we watched Pluribus, which was created by Vince Gilligan, who – yes, Breaking Bad. Yes, Better Call Saul. I didn’t watch either of those, but I was a huge X-Files fan…

Michael Horn: Back in the day.

Diane Tavenner: OK. And so there is very much some X-Files feel here in Pluribus. But to what Dan said, and I think Foundation is related, I just find this series to be so provocative in the questions that it’s bringing up and sort of the contemplation of where we’re going as a society and how the choices we’re making each day might affect that and what we actually want. And I will… I told you I would report back on my goal. I did finish Ian McEwan’s novel that I pre-promoted. Yeah, yeah, yeah. But it was everything I expected and more.

It was just extraordinary. And I did both of those over the holiday. And I will tell you, I feel like I’m sort of in surround sound right now of asking these big existential questions along with everything from what’s happening in the news on a day-to-day basis to all the work in AI. So, but I would highly recommend it. Super provocative and interesting.

Michael Horn: Perfect

Diane Tavenner: Perfect. Crazy. Like, you never know what’s gonna happen next.

Michael Horn: That’s fun when you can’t predict it coming.

Diane Tavenner: Yeah.

Michael Horn: Yeah. Yeah. I was gonna say, so the brain rot theme that you brought up, Cady, I mean, we talk about it all the time with our 11-year-olds here at home. But I was – this is not where I was going to go at all with this, but something one of my kids said made me think of the Animaniacs theme song, if you all remember that cartoon from back in the day, and I pulled it up and showed it, and my wife just dismissively said, this was brain rot when we were growing up. So, there you go. The one I’ll say is, we all went with another family and saw Wonder at the American Repertory Theater. Many people may know the book, Wonder, which follows the story of Auggie Pullman, a 10-year-old who has Treacher Collins syndrome, which presents as disfiguration of the face, and what happens when he goes into a school environment for the first time and all the things that it does. And there’s a movie about it as well, but now there is a musical too.

And Diane, you will not be surprised, I was crying from the opening number and I kept it up through the whole thing. So it was, I was true to form. That’s a good one to cry over. It was good. I represented well, but it was fantastic. We’ll see if it makes the jump from sort of off-off-Broadway to something bigger, but until then, if you’re in the Cambridge area, definitely check it out. And for all of you, just huge thanks, Cady, Dan, for joining us, getting us to have a peek under the cover of what’s coming next at Summit and the broader – as usual, you all are thinking about the broader ecosystem as well, which I admire so much about the work you all do at Summit. It’s not just our model, but how does our model spur this greater change across education?

So huge thanks for joining us. And for all of you listening, keep the questions, comments coming. Diane and I feed off them, and we really appreciate all of you. We’ll see you next time on Class Disrupted.

Disclosure: Diane Tavenner founded Summit Public Schools and served as its CEO from 2003 to 2023.

This episode is sponsored by LearnerStudio.

]]>
Opinion: When It Comes to Developing AI Rules, Who Asked the Students? /article/when-it-comes-to-developing-ai-rules-who-asked-the-students/ Fri, 03 Apr 2026 10:30:00 +0000 /?post_type=article&p=1030620 Three years ago, schools took a side.

Within weeks of ChatGPT’s release, hard rules appeared almost overnight. AI tools were banned throughout departments. Teachers watched what seemed like an existential threat materialize in real time, and they responded the way institutions usually do under pressure: They drew a line and told everyone not to cross it.

Three years later, that line is still there. And at many places, nobody ever asked whether it should be, at least not the people most affected by it.

When I looked into how my Austin, Texas, high school’s AI policy was developed, I found that my administrators made the decision internally. There was no student committee, no open forum, no campuswide survey. The rulebook was simply handed down. In K-12 education, some states require districts to develop and publish AI policies; when they are published, they’re often developed without proper consideration of all stakeholders, including students themselves.

It’s reasonable to counter that students are minors, that institutions need coherent governance and that not all decisions can go to a committee. But AI policy isn’t a routine curriculum adjustment. It governs what tools students are allowed to use to think, draft, research and communicate – tools that increasingly shape how knowledge is produced and evaluated outside school. Getting those rules wrong produces consequences for students.

Brittany Carr’s situation is a well-known example. In early 2023, the university student had three assignments flagged by an AI detector. She provided her revision history and explained her process of writing deeply personal essays about her cancer diagnosis, her depression and her personal recovery. It wasn’t enough. Fearing that a second accusation could cost her financial aid, she began running every essay through an AI detector herself, rewriting any sentence it marked until her writing voice felt flattened and unfamiliar. By the end of the semester, she left the university.

Carr is not alone. The same NBC News investigation found that students across the country deliberately simplified their vocabulary and avoided complex sentence patterns – not to write better, but to write less like themselves. Creative writing assignments exist to help students find their voice, which they can’t do in fear of an algorithm. Carr’s case shows a student reshaping her writing, and ultimately her education, around a software system she had no role in approving, in a policy she had no voice in developing.

Student involvement would not necessarily have guaranteed a different outcome in Carr鈥檚 case. But it might have changed the structure that enabled it. Students could have brought up concerns about relying on automated detectors without corroborating evidence. They could have described how fear of false accusations pushes students toward simpler vocabulary, safer syntax and less intellectual risk. They could have asked what procedural protections exist before a software flag becomes an academic charge.

Instead, at many institutions, enforcement architecture was built first. Conversation came later, if at all.

It doesn’t have to work this way. In Los Altos, California, students did more than sit in on policy meetings – they designed and ran community workshops, facilitated discussions between sixth graders and administrators, and built an AI chatbot to help other districts draft policies.

A survey found that students overwhelmingly want to be part of decisions about how AI is used in their education – and that many already hold sophisticated views on its risks and potential. The fact that Los Altos made national news tells you how rarely that invitation is extended.

But there is a deeper reason students belong in these conversations: We know something policymakers don’t.

At my high school, I’ve witnessed – and experienced – a secret loop in the learning process: we use large language model tools like ChatGPT and Claude to genuinely improve learning by unraveling concepts, studying for tests and brainstorming ideas.

A few days ago, a student asked a question about a formula in my AP Physics C class – and nobody knew the answer. Another student opened his laptop and asked Claude, and after a few minutes of back-and-forth, we had completely straightened out our question, improving everyone’s understanding of how circuits worked. I used an LLM to compile notes from my Multivariable Calculus class, which helped me study and earn a near-perfect score on my test. My friend used ChatGPT to learn Java syntax for a project – not to write code, but to understand the language.

A recent survey found that 54% of U.S. teens now use AI chatbots for schoolwork, with the most common uses being research and brainstorming – not copying and pasting answers. But that message hasn’t reached the people writing the rules. This secret loop goes completely disregarded by schools, simply because it’s easier to blanket-ban the technology altogether. The generation that grew up with these tools understands their texture in a way no outside committee can replicate.

These AI policies directly affect students’ outcomes and futures. To exclude them from the conversation is simply undemocratic.

If educational institutions are serious about preparing students for democratic citizenship, that commitment must go beyond coursework and into policy-making. The time to invite students into these critical conversations is now. Will schools treat students as subjects of policy, or as participants in it?

]]>
Opinion: We Don’t Let Babies Play With Electricity – Why Are We Letting Them Play With AI? /zero2eight/we-dont-let-babies-play-with-electricity-why-are-we-letting-them-play-with-ai/ Mon, 30 Mar 2026 14:30:00 +0000 /?post_type=zero2eight&p=1030476 AI is newly electrifying every corner of our lives, charging ahead faster than most of us can follow. If adults are barely keeping up with tools like ChatGPT and Claude, how are babies and young children supposed to make sense of a stuffed dinosaur that sings them songs or a plush bear that draws them into conversation?

We are developmental cognitive neuroscientists who study how children’s daily interactions with parents, caregivers, teachers and peers shape their learning and development. We are not anti-AI, but we are extremely concerned about corporate efforts to market AI toys to parents and educators of young children. We do not yet know how many young children are already engaging with generative AI bots, but if early sales are any indicator, this is a rapidly growing market.

Some companies say their toys and devices are “age-appropriate” and will support children’s learning and development, but that’s not always the case. For instance, the makers of Kumma, a plush teddy bear, promised to build conversational skills for children from ages 3 to 5. But the toy was pulled from the market last year after researchers testing it caught it encouraging unsafe and inappropriate behavior.

Beyond these physical safety risks, we have essentially no data on how interacting with generative AI “friends” will shape very young children’s foundational brain, socioemotional and language development. Rather, the preponderance of evidence about how brain development works in the earliest years of life suggests that families should proceed with caution before letting their littlest children play with these new technologies in the form of toys.

We are not alone in this concern. Together with scientists around the world who study the exquisite, human-to-human interactions that shape early brain and cognitive development, we recently released a statement about the risks of direct infant-AI interaction.

Decades of scientific studies paint a clear picture of optimal development in the first few years of life. Babies and toddlers grow and learn through daily, moment-to-moment interactions with their close caregivers. Indeed, humans cannot develop fully without these foundational interactions. Present, responsive, real-time interactions shape children’s language, sculpting their growing understanding of new words, grammar, pronunciation and social intentions.

These real-time interactions shape children emotionally, helping them map their inner experiences to their outer perceptions. There is evidence that when a caregiver and a young child interact, they synchronize – from eye contact to heart rates and oxytocin levels.

Unlike AI models, which can parrot human-to-human interactions, caregivers pair their words with touch, eye contact and facial expressions that signal their love and attention. Real conversations include inside jokes, local dialects, family lore, and the distinct conversational patterns that make a family a family and a community a community. 

Development is about real-time rhythm, and every unique caregiver-child dyad develops their own. It’s not about perfection. It’s about presence, something an AI model can never and will never be able to provide.

In fact, toys that imitate social responsiveness may interfere with an infant’s developing sense of how people relate to one another. The better these toys get at mimicking a parent, a child care provider, a grandparent or other adult caregiver, the more concerned we should be, particularly in the earliest years when infants and toddlers are developing a distinction between self and other – a growing awareness that the other humans who surround them each have inner worlds of their own.

From a policy perspective, there is much more to learn about these new technologies before parents let their babies play with them.

Without these policy protections, parents and educators must take the lead, avoiding toys that simulate social reciprocity, replace face-to-face caregiving, or are designed to replace soothing behaviors that infants and toddlers need from caregivers in order to build attachment, trust and human connection.

The earliest recorded scientific experiments with electricity happened 3,000 years ago. Today, access to electricity has raised the standard of living for nearly the entire world. Still – after more than a hundred years of widespread use, safety standards and engineering to wield electricity for the common good – no responsible adult would let a child anywhere near it in raw form.

AI has the power to improve human lives, but these are early days. We take for granted that we cover our light sockets to protect all our community’s children. We must take the same protective stance with AI.

]]>
NYC Releases Guidelines for AI in Schools. Some Say it Raises More Questions Than it Answers /article/nyc-releases-guidelines-for-ai-in-schools-some-say-it-raises-more-questions-than-it-answers/ Fri, 27 Mar 2026 14:30:00 +0000 /?post_type=article&p=1030416 This article was originally published in Chalkbeat.

New York City’s Education Department unveiled its guidance for artificial intelligence use, offering a rough road map for if and when to incorporate AI tools in school.

The guidance, released Tuesday, arrives nearly three years after a short-lived ban on ChatGPT. It also comes in the midst of ongoing debates about student privacy, AI’s effect on student learning and development, and the role of private companies in schools. Some schools had developed their own policies as they awaited citywide guidance.

Hot button issues, like how and if students can use AI for homework assignments, or whether students can use personal AI chatbot accounts in addition to tools approved and supervised by the Education Department, are still being hashed out.

City officials are asking families and educators for feedback, which will inform future versions of the guidance. The Education Department released a survey and will also host webinars and events to answer questions and gather feedback through May 8.

“AI is here, and our responsibility is to put strong systemwide safeguards in place,” schools Chancellor Kamar Samuels wrote in an email to parents.

The early framework is structured in a “traffic light” approach: green light for approved uses, red light for prohibited cases, and yellow light for gray areas, which require significant oversight.

For example, brainstorming lesson plans and drafting non-critical communications fall under “green light” cases.

In “yellow light” cases, schools can use AI to find trends in student data, to generate translations for bilingual learners, or to adapt materials for students with disabilities – but a trained professional must review the outputs before they are used with students.

Using AI to make decisions about students – including grading, development of special education and 504 plans, discipline, counseling and crisis intervention, and other academic placement decisions – is strictly forbidden. These “red light” cases are not expected to change in the final playbook the city aims to release in June.
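
To make the three tiers concrete, here is a minimal, purely hypothetical sketch in Python of how a district tool might encode the traffic-light rules. The category examples come from the guidance summarized above, but the use-case labels, the function and the default for unknown uses are illustrative assumptions, not anything the Education Department has published.

# Hypothetical sketch only; labels and defaults are illustrative assumptions.
GREEN = "approved"                    # e.g., brainstorming lesson plans, drafting non-critical communications
YELLOW = "human review required"      # e.g., trend analysis, translations, adapted materials
RED = "prohibited"                    # e.g., grading, IEP/504 development, discipline, placement decisions

USE_CASES = {
    "brainstorm_lesson_plan": GREEN,
    "draft_noncritical_email": GREEN,
    "analyze_student_data_trends": YELLOW,
    "translate_family_materials": YELLOW,
    "adapt_materials_for_disabilities": YELLOW,
    "grade_student_work": RED,
    "write_iep_or_504_plan": RED,
    "make_discipline_decision": RED,
}

def check_use(use_case, reviewed_by_professional=False):
    """Return True if the use may proceed under the traffic-light rules."""
    category = USE_CASES.get(use_case, YELLOW)  # unknown uses default to the cautious middle tier
    if category == RED:
        return False                      # red-light uses are never allowed
    if category == YELLOW:
        return reviewed_by_professional   # yellow-light output needs a trained professional's review
    return True                           # green-light uses proceed

print(check_use("translate_family_materials", reviewed_by_professional=True))  # True
print(check_use("grade_student_work"))                                         # False

The point of the sketch is simply that the yellow tier is not a softer green: in this reading, the human review step is the gate, not an afterthought.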

Pushback has already been fierce among parents and education advocacy groups: A petition asking the city to put a two-year pause on AI use in schools has garnered about 1,500 signatures since October. Several Community Education Councils have also passed resolutions calling for a moratorium on AI in schools.

The guidance was written by the Education Department’s AI Task Force, and informed by the city’s external AI Advisory Council, which includes education technology partners from Google, OpenAI and other companies hoping to contract with the school system serving the city’s roughly 800,000 K-12 students.

Questions remain about student privacy and third-party AI contracts

Before schools can use AI tools in the classroom, each product must go through a data privacy and security vetting process called the Enterprise Request Management Application. The process, created in 2023, applies to all third-party technology vendors.

But AI has become ubiquitous. The Education Department’s contract for Microsoft 365 programs did not originally include AI chatbots, but now does, said Naveed Hasan, a member of the Education Department’s Data Privacy Working Group.

“Just like TikTok was unregulated until school networks blocked it, so are these free AI products,” said Hasan, whose group advised on data privacy policies prior to the AI guidance.

Schools can visit the department’s website to see if a tool has already been approved; otherwise, schools must submit an application for new use.

The process, however, doesn’t yet include guidelines on how to review certain aspects of AI products, such as algorithmic bias or instructional effectiveness. Those are expected to be included in the final June version of the playbook.

The guidelines, which were shaped by federal and local laws, say personal student information can never be entered into unapproved AI tools, and under no circumstances can student information be used to make money or train AI models.

Although the general sentiment about privacy protection is clear, how to ensure it remains protected in every use is a key question that some close to the policy development say remains unresolved.

Hasan said the guidance alone can’t guarantee privacy, and that relying on third-party products, even approved ones, makes it difficult to know what’s secure and what’s not.

He has called on the Education Department to consider maintaining its own hardware and training its own group of AI experts instead of relying on outside companies.

AI moratorium advocates push back

The Parent Coalition for Student Privacy, one of the groups on the AI moratorium committee, said in a statement Tuesday that the guidance does not address the potential long-term effects of AI use on learning and thinking.

The city has already accepted that AI will be a part of school learning before proving its value and safety for students, said Kelly Clancy, founder of Parents for AI Caution, another group on the committee.

“The city needs to have a burden of proof about why this is good,” Clancy said. “It shouldn’t just be about harm reduction, but rather why AI is better for my kids than a human-centered, traditional classroom.”

Education Department officials said proposals for new, AI-focused schools and programs – like Next Generation Technology, an “AI-focused” high school – must demonstrate how they align with the guidance’s principles.

The full preliminary guidance can be accessed online.

Chalkbeat is a nonprofit news site covering educational change in public schools.

]]>
AI in Student Assessments: Promise, Potential and Risks /article/ai-in-student-assessments-promise-potential-and-risks/ Wed, 18 Mar 2026 16:46:17 +0000 /?post_type=article&p=1030004 Artificial intelligence is rapidly reshaping how student learning can be measured, moving beyond traditional tests toward more dynamic forms of assessment. From students conversing with virtual characters to demonstrate problem-solving and reasoning, to AI tools that analyze collaboration and learning processes in real time, these approaches promise insight into what students know and can do. At the same time, these innovations raise critical questions for educators, researchers, and policymakers: Can AI-powered assessments adapt to individual learners in ways that are both valid and fair? Will they help close opportunity gaps or risk reinforcing existing inequities through bias, access barriers, or opaque algorithms? And as AI systems grow more sophisticated, what guardrails are needed to ensure transparency, trust, and responsible use?

In this one-hour webinar, hosted by AERA and 社区黑料, leading education researchers will explore how AI is being used in assessment today, what evidence we have about its effectiveness and what risks demand careful attention. The conversation will balance promise with caution, highlighting both cutting-edge research and the policy and ethical considerations shaping the future of student assessment.

RSVP to watch, or refresh after the webinar to stream.

Related coverage on 社区黑料: 

]]>
AI ‘Slop’ Is Flooding Children’s Media. Parents Should Be Very Alarmed. /zero2eight/ai-slop-is-flooding-childrens-media-parents-should-be-very-alarmed/ Wed, 18 Mar 2026 10:25:00 +0000 /?post_type=zero2eight&p=1029803 This story was co-published with .

Updated March 27, 2026: In response to this story, YouTube terminated six channels for violating the platform’s terms of service and one channel for violating its spam policy.

In a video that has been played almost 50,000 times since it was posted five months ago, two cartoon children sing along as they guide viewers through the experience of riding in a car amid a vividly colored, utopian backdrop. 

At first, the video seems harmless. The song is upbeat and informative. The animation aligns with the promised subject.

Except, hold on a second, did those lyrics just say, “Red means stop, and green means right”? And why are the characters changing in every frame – different hairstyles and colors, slightly different outfits for the girl and boy?

Worst of all, for a video that purports to be “educational,” the visuals are sending precisely the wrong message about riding in a car.

The video opens with the children riding, without seatbelts, in the front row of a moving vehicle. The next scene shows the girl defying physics, floating alongside a moving car, while the boy is seated in what appears to be the hood of the vehicle as it travels backward down a busy street. The third and fourth scenes show the children walking in the middle of the road with moving cars behind them. 

In a video called “Vroom Vroom! Car Ride Song,” the cartoon children sing, “Red means stop, and green means right.” (Screenshot from YouTube)

It’s not hard to imagine how the video could have gotten so many views.

Maybe a parent needs to complete a task – fold some laundry, get dinner ready, hop in the shower – and is searching for an age-appropriate video on YouTube to entertain their toddler during that short time. Perhaps that toddler, increasingly independent and prone to running off, needs a better grasp of road safety. “Vroom Vroom! Car Ride Song | Educational Nursery Rhyme for Kids” presents itself as a win-win solution.

But children’s media experts say this is AI-generated “slop,” and that it has infiltrated the internet, preying on young children and their unsuspecting caregivers.

“We’re at the beginning of a monster problem, and we have to get hold of it quickly,” said Kathy Hirsh-Pasek, a professor of psychology and neuroscience at Temple University and senior fellow at Brookings Institution who studies child development.

She and other researchers, including Dr. Dana Suskind, a professor of surgery and pediatrics at the University of Chicago, have argued that AI-derived products for babies and children need to be reined in.

“This is not neutral content,” said Suskind, author of a forthcoming book. “I think of this as toddler AI misinformation at an industrial scale. It’s very risky for the developing brain.”

It’s hard to say just how pervasive this type of content is, but it’s clear the problem is widespread and getting worse. One analysis published by video-editing company Kapwing in November 2025 found that about 21% of YouTube’s feed consists of low-quality, AI-generated videos.

Jo Jo Funland, the creator of the “Vroom Vroom! Car Ride Song,” has posted more than 10,000 videos since its first release just seven months ago, in August 2025. That’s an average of about 50 new videos each day. One long-established children’s channel, meanwhile, has published about 3,900 videos to YouTube in its entire 20 years on the platform.

YouTube creators who publish AI-generated videos are producing content for children at a breathtaking speed, as seen on the time stamps from Jo Jo Funland’s account. (Screenshot/YouTube)

The cognitive decline associated with the consumption of AI slop – such as a shortened attention span, decreased focus and mental fog – is sometimes referred to as “brainrot.” But when the audience is children, there’s not much to rot, Suskind said. Because a child’s brain is still in its early development, still being built, what you get instead, she said, is “brain stunt.”

“Every experience is building a million new neural connections,” Suskind said of children who are still in their early years. “You will be unintentionally wiring the brain in incorrect ways.”

This is not neutral content. . . I think of this as toddler AI misinformation at an industrial scale. It鈥檚 very risky for the developing brain.

Dr. Dana Suskind, Professor of surgery and pediatrics at the University of Chicago

That comes at a cost. A child may absorb the implicit messages of something like the Vroom Vroom video and end up mimicking the “downright dangerous” behaviors they saw depicted there, said Carla Engelbrecht, who has created digital experiences for children’s media brands such as Sesame Street, PBS Kids and Highlights for Children and considers herself an AI educator and creator.

Engelbrecht is also sounding the alarm when it comes to child-targeted AI slop. She has found countless examples of AI-generated videos that could cause real physical harm.

“The more content I find,” she said, “the more horrified I get.”

They include videos of a child being chased by a T-Rex; a crawling baby biting into an apple that appears bloody, swallowing whole grapes (a major choking hazard) and eating honey (which carries the potentially fatal risk of infant botulism); and a child eating raw elderberries (which are toxic when uncooked).

In a video called “Dinosaur at the Window,” a T-Rex scares a small child. (Screenshot from YouTube)

But there’s another category of AI slop in kids’ media, she said, with consequences that are more difficult to capture. These videos claim to pertain to learning and development, focusing on topics like literacy and numeracy, but due to the speed with which they are produced and the lack of quality checks, they end up introducing or reinforcing the wrong lessons. And sometimes, the errors don’t come until midway through the content. That means if a parent previews the first few seconds of a video, they may miss the unreliable information that appears later in the clip.

A video about vowels includes visuals of consonants. It also depicts letters on screen that don’t align with the audio overlay. A video promising to teach about the 50 U.S. states sings along as butchered state names appear in text at the bottom of the screen – Ribio Island, Conmecticut, Oklolodia, Louggisslia. A video about the seven continents frequently shows a compass with more than four points and indecipherable symbols where the “N,” “S,” “E” and “W” should be.

In a video called “50 States Song for Kids,” the voiceover sings, “Alabama warm, Louisiana jazz,” while the subtitles read, “Alaboama warm, Louggisslia jazz.” (Screenshot from YouTube)

These may seem like silly slips from a machine, but for a child, every “input” is part of their learning process, Engelbrecht explained. “Mixed signals means you are delaying them learning the cause and effect of a thing,” she said. “If you learn that red is blue and blue is red, that’s a delay.”

“If you’re inconsistent, it takes that much longer to learn,” she added. “Every delay they have means everything else gets pushed back. That’s taking their executive function offline to go learn nonsense.”

Amid all of this internet muck, the question of responsibility is a tricky one.

“Fundamentally, everybody has a responsibility,” Engelbrecht said, including platforms like YouTube; companies that operate large-language models, like OpenAI, Google and Anthropic; the people creating and publishing these poor-quality videos intended to reach kids; and parents.

YouTube’s current policy requires creators to disclose videos that have been generated by or altered with AI when that content “seems realistic.” This does not apply to cartoons and animation – which seem to be the majority of what’s reaching children – because such content has long been assumed to be fictional, Engelbrecht explained.

The platform does have stricter standards for content targeting children than it does for its general viewership, said Boot Bullwinkle, a YouTube spokesperson, in a statement. (The relevant web pages, however, do not specifically address the use of AI.)

Due to the volume of content on the platform, YouTube does not catch every video that violates its policies. (It did take action against at least seven channels on the platform in response to 社区黑料’s reporting, including terminating two.)

“The trust that parents and families put in YouTube is a responsibility we take very seriously, and we’ve invested deeply in age-appropriate environments that empower parents,” Bullwinkle wrote in the statement. “YouTube Kids, for instance, offers industry-leading parental controls and rigorous designed to provide a safer experience for families.”

YouTube Kids is a distinct version of the platform with content that has been curated for children from birth to 12. Many families continue to use the main YouTube platform to view children’s content, though, which means many creators still have an audience and earning opportunities there. None of the AI-generated videos reviewed for this story were found on YouTube Kids, although recent reporting in The New York Times found AI videos had penetrated that space as well.

Sierra Boone, executive producer of Boone Productions, a children’s media production company that makes original content for children ages 2 to 6, noted that kid-friendly competitors to YouTube, such as offerings from Common Sense Media and others, do exist. But they have struggled to break through to families.

“Overcoming that juggernaut is extremely difficult,” Engelbrecht said of YouTube. “There’s a graveyard full of failed attempts to create a safe YouTube alternative.”

Boone suggested that some effective labeling would go a long way, not unlike the labels LinkedIn is phasing in, which aim to disclose when media has been created or edited by AI, in part or in whole.

Engelbrecht thinks labels are a good idea, not least because they would be important for AI literacy, but she also believes they would penalize creators like her who use AI “thoughtfully” in their work. (She is building, among other projects, an AI tool that detects AI slop in children’s videos on YouTube.)

As for who’s behind the videos, some of it originates overseas, but plenty is home-grown, created by Americans with access to phones or computers who are just trying to “make a quick buck,” as Boone put it.

These people are often using AI at every step of the process – to develop themes and scripts for children’s videos, to generate the videos, and to automate the process of publishing the content regularly on anonymous channels, in which the creator has no on-camera presence, Engelbrecht explained.

A little over a year ago, a popular content creator posted a video to YouTube in which she raves about a “huge opportunity” that would lead to “many millionaires.” The opportunity? AI-generated animated videos that inexperienced users could create with a simple prompt in just minutes. The target audience? Young children.

That video has been viewed more than 335,000 times. 

“AI in general isn’t inherently good or bad, but it exposes people’s intentions,” said Boone, whose production studio is responsible for The Naptime Show.

The flood of AI-generated content, she added, reveals how many people have “no regard for children or how they’re impacted,” as long as it benefits them.

In a video called “Learn ABCs at Breakfast,” a small baby eats a fistful of whole grapes, which are a major choking hazard for infants. (Screenshot from YouTube)

For Boone, who works painstakingly with her team on every episode of The Naptime Show – researching, writing the script, editing the script, placing props, doing table reads, going to set, filming, editing the video, publishing and promoting the final product – creating children’s media is an “honor” that should be taken seriously.

“The very foundation of creating children’s media is you are creating something that a child, in their core developmental years, is going to be consuming,” Boone said. “So what is the level of intention that you’re bringing to that? I think we need to be holding the people who are uploading this content more accountable.”

Ultimately, though, in the absence of more regulation or content moderation, the burden falls on parents. 

Parents are likely putting YouTube videos in front of their children in the first place because “they are already so stretched,” said Suskind, who still sees patients in her pediatric practice and interacts with families often. So it’s inherently challenging to ask them to more closely monitor the content that is coming through their children’s screens.

Yet that is what must be done, Hirsh-Pasek said. Until a better solution emerges, the onus is on parents to separate the slop from “the good stuff.”

“We owe it to our kids to protect them,” said Hirsh-Pasek. “That’s what they look to parents for, to keep them in safe spaces. If we don’t deal with that or do anything about that, we’ve absconded [from] our responsibility.”

]]>
Opinion: Precision Learning Has the Potential to Do What Personalized Learning Could Not /article/precision-learning-has-the-potential-to-do-what-personalized-learning-could-not/ Tue, 10 Mar 2026 10:30:00 +0000 /?post_type=article&p=1029582 Driving past Fred Hutchinson Cancer Research Center in Seattle, I noticed a billboard that reads something like, “We treat your cancer like it’s YOUR cancer.” The message is more than a slogan. It captures a growing conviction that generic approaches are no match for serious threats to human health.

What distinguishes places like Fred Hutch is not just advanced science, but disciplined systems: shared clinical protocols, team-based decision-making and constant feedback between research and practice. These are the hallmarks of precision medicine, fueled by advanced diagnostics, data and generative artificial intelligence, and they are delivering transformative results in treating diabetes, heart disease and cancer. AI-assisted screenings are catching aggressive cancers earlier, as new models can analyze previously unexplained genetic mutations to forecast health risks.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


Of course, education is not medicine. Learning is not governed by biology alone, student outcomes are harder to define and schools have nothing like the professional norms or accountability structures of clinical care. But that is the point: Precision medicine began with a refusal to accept broad variations in care when better evidence and tools were available.

AI is already in classrooms across the country, but mostly to help teachers save time or give extra support to children with disabilities or language barriers. What if all students could attend schools that said, “We treat learning like it’s your learning,” offering precision education: a supportive environment harnessing human expertise and technology to deliver truly customized solutions for every child?

That reality is closer than we think. AI gives educators the potential to understand, diagnose and respond to students’ learning needs with a specificity that was previously impractical at scale. It can rapidly surface a child’s learning gaps and strengths in math and recommend targeted interventions. But that information alone does little; AI’s power lies in being embedded in professional workflows, guiding adults toward specific, evidence-based actions and tracking whether those measures improve learning over time. To effect genuine change, AI must be accompanied by a reevaluation of the systems that contain it.
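
To make that workflow concrete, here is a minimal, purely hypothetical sketch in Python of the diagnose, recommend and track loop described above. Every name, threshold and intervention in it is an illustrative assumption; it is not a description of any actual product, district system or research-validated protocol.

# Hypothetical sketch of the diagnose -> recommend -> track loop; all details are illustrative.
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    name: str
    skill_scores: dict                            # skill -> diagnostic score between 0 and 1
    history: list = field(default_factory=list)   # (skill, intervention, score before, score after)

# Toy "evidence base": for each skill gap, an intervention with a strong track record.
EVIDENCE_BASE = {
    "fractions": "small-group re-teach with manipulatives",
    "multi_digit_multiplication": "targeted practice set with worked examples",
    "word_problems": "one-on-one think-aloud session",
}

def recommend(student, threshold=0.6):
    """Surface skills below the threshold and pair each with an evidence-based intervention."""
    gaps = [skill for skill, score in student.skill_scores.items() if score < threshold]
    return [(skill, EVIDENCE_BASE.get(skill, "teacher-designed support")) for skill in gaps]

def record_outcome(student, skill, intervention, new_score):
    """Log whether the intervention moved the skill, so the evidence base can be refined."""
    student.history.append((skill, intervention, student.skill_scores[skill], new_score))
    student.skill_scores[skill] = new_score

student = StudentRecord("Student A", {"fractions": 0.45, "word_problems": 0.80})
for skill, plan in recommend(student):
    print(f"{student.name}: {skill} -> {plan}")
record_outcome(student, "fractions", EVIDENCE_BASE["fractions"], 0.70)

The sketch is deliberately trivial; the argument of this piece is that the hard part is not the loop itself but embedding it in professional workflows and accountability structures, as the next paragraphs describe.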

AI could improve instructional quality, for example, but only alongside a broader rethinking of the teacher’s role. Rather than incrementally improving one educator’s ability to reach every student, AI could serve a team of instructional professionals, each with specialized expertise. AI tutoring tools can help students fill learning gaps, but even the best have low persistence rates. Their effectiveness depends on student motivation or intensive adult oversight. Without structural change, AI risks exacerbating achievement gaps rather than closing them.

And perhaps most importantly: Far too many students simply don’t like school. They find it boring and irrelevant, struggle with mental health and lack strong adult mentors. Families, educators and policymakers are calling for something more joyful. All of this points to a fundamental design choice: whether AI will reinforce the existing classroom model or become the backbone of a genuinely different support system for young people.

Personalized learning was intended to accomplish this. But despite its popularity, it too often amounts to little more than self-paced software or playlists of digital content, mired in low expectations and disconnected from evidence-based teaching. CRPE's studies of personalized learning schools show how easily these efforts become convoluted, mushy and unmoored from rigor.

Precision learning is fundamentally different. It would enable educators to use technology, data and evidence to identify exactly where a student is struggling, which interventions are most likely to work and how to deliver them effectively and equitably. This is a commitment to evidence over intuition, to shared professional standards over individual preference, to accountability for results rather than good intentions. Personalization asks educators to adapt and give students more choices. Precision demands that state, district and school leaders change how decisions are made, implemented and evaluated. 

Rather than ed tech and personalized learning initiatives that fail because they aren’t grounded in evidence and continuous improvement, education needs an accountability infrastructure that looks more like medicine’s standard of care: a shared professional and ethical baseline for which treatments must be offered. In medicine, deviating from those standards can mean malpractice. Education has no comparable expectation, and introducing one would be uncomfortable. It would force hard conversations about professional autonomy, preparation and responsibility when students fail to learn. But avoiding those discussions carries costs: persistent inequity, uneven instructional quality and the normalization of low achievement.

The effort must start with defining what precision learning means and holding educators and developers accountable for its implementation. Ed tech developers should embed decades of learning science into their designs, just as medical software embeds clinical guidelines. Schools of education should lead the field in conducting and disseminating state-of-the-art research and training educators to use it, much as medical schools run clinical trials and keep practitioners current. And just as the federal government once seeded the Human Genome Project, a reimagined Institute for Education Sciences could lead a national effort to map the "learning genome" – a shared, continuously updated knowledge base of what works, for whom and under what conditions.

States have a unique role in creating the conditions for precision learning at scale. Specifically, they can:

Build precision learning consortia that bring together educators, researchers and ed tech companies to develop and test solutions and share results publicly. These consortia should make targeted investments in organizations with a proven track record of designing and implementing these approaches.

Align incentives and accountability systems so precision learning becomes a professional expectation, not an option. Just as medical boards define best practices for care, states could convene researchers, practitioners and technologists to establish precision learning protocols, perhaps starting with reading and math, where the evidence base is strongest.

Rethink the role of the teacher. In a precision learning model, “the teacher” would no longer be a single role expected to diagnose, design, deliver, remediate, counsel and motivate simultaneously. Schools would instead deploy differentiated teams, with some adults specializing in diagnostics and data interpretation and others in instruction, mentorship or intervention, all supported by AI systems that surface evidence and guide decisions. This is more a labor redesign than a technological shift, requiring that states fundamentally rethink the role of the teacher, including certification requirements and salary schedules. Precision learning would replace the one-teacher-does-it-all model with specialized teams, backed by AI that surfaces insights and supports better decisions. 

Ensure all schools have the resources, devices and staff training needed for participation in precision learning. The greatest risk of AI-driven precision learning is that it deepens divides if access is limited to affluent schools. In medicine, precision treatments began as elite offerings before standards and insurance systems made them broadly available. Education must skip that inequitable phase entirely.

If a patient were dying and a proven treatment existed, it would be unthinkable for a doctor to withhold it. Yet in classrooms, students fall further behind every day, even when research-based solutions exist to help them succeed.

In medicine, good intentions are not enough. They must be paired with evidence, standards and accountability. Education deserves the same seriousness, because the stakes are just as high. Precision learning is not about replacing teachers or chasing the next shiny technology. It is about building the professional, moral and structural capacity to deliver what we already know works for every student.

We have much of the science. We have the technology. What we need is the will, and the infrastructure, to bring them together.

AI can’t fix education on its own. But it can provide the precision educators have always needed and never had. If we get this right, we’ll look back on this era as the moment we began treating learning like what it truly is: a vital, individual and human process worthy of the same precision, urgency and care that doctors bring to saving lives.

]]>
SXSW EDU Cheat Sheet: 26 Sessions for 2026 /article/sxsw-edu-cheat-sheet-26-sessions-for-2026/ Thu, 05 Mar 2026 11:30:00 +0000 /?post_type=article&p=1029429 South by Southwest EDU returns to Austin, Texas, running March 9鈥12. As always, it’ll offer a huge number of panels, discussions, film screenings, musical performances and workshops exploring education, innovation and the future of schooling.

Keynote speakers this year include Monica J. Sutton, creator and host of the children’s education series Circle Time with Ms. Monica, Yale psychology professor and Happiness Lab podcast host Dr. Laurie Santos, appearing alongside Common Sense Media’s Bruce Reed, and bestselling author Jennifer B. Wallace, whose work centers on the human need to feel valued – and to add value.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


Also featured: former Presidential Science Advisor Arati Prabhakar, who will join a panel on 鈥渕oonshot鈥 thinking and the future of AI-driven learning. And a new documentary traces the career of longtime Sesame Street star Sonia Manzano.

Artificial intelligence plays a bigger role than ever this year. Dozens of sessions examine AI's expanding presence in classrooms, from adaptive tutoring and authentic assessment to teacher burnout, algorithmic bias and what it means to be literate in an age when machines can write, reason and create.

This year, the Austin Convention Center, which typically hosts the event, is under construction, so sessions will be held at four venues around downtown Austin. Organizers are also planning an "SXSW EDU Clubhouse" at a historic venue, which will host daily performances, keynote livestreams and nightly social events.

Because of the event鈥檚 multiple venues, space may be limited, so organizers recommend booking reservations for keynotes, featured sessions and workshops. They鈥檝e provided an with details. 

To help guide attendees, we've scoured the 2026 schedule to highlight 26 of the most significant presenters, topics and panels:

Monday, March 9:

9 a.m. 鈥 : Researchers, district leaders and family engagement specialists examine the chronic absenteeism epidemic that has left millions of American students disconnected from school since the COVID pandemic. This panel presents the latest data on what is actually driving absenteeism 鈥 from housing instability and health crises to school climate and whether students feel they matter. It鈥檒l explore which interventions are producing genuine, sustained improvement.

11 a.m. 鈥 : This panel presents evidence that score inflation on standardized tests, state-level proficiency standards and the federal retreat from accountability are making it harder than ever for families to get an accurate picture of their child’s true academic standing 鈥 and what policymakers can do about it.

1:30 p.m. 鈥 : This Opening Keynote features Monica J. Sutton, educator, entrepreneur and creator of Circle Time with Ms. Monica, who traces her journey from preschool classroom to digital learning spaces reaching millions of families worldwide. Sutton challenges educators to evaluate every innovation through a developmental lens, asking: Does this technology honor how young children learn, grow and thrive, while protecting curiosity and connection?

2 p.m. 鈥 : What do real students think about AI? How do they want to learn about it? This session, by MIT Media Lab鈥檚 Jaleesa Trapp and LEGO Education鈥檚 Jenny Nash, explores strategies for building AI literacy through hands-on computer science that fosters critical thinking and ensures safe, responsible AI use.

2 p.m. 鈥 : Civics teachers, researchers and policy advocates will examine how teachers are navigating the nearly impossible task of teaching democracy, elections and civic participation in classrooms where students and families often hold deeply opposed political views. The panel shares new findings from America鈥檚 Promise Alliance鈥檚 State of Young People research and explores strategies for creating classrooms where hard but evidence-based conversations happen productively 鈥 and where students develop the civic skills needed to participate in and repair a fractured democratic system.

4 p.m. 鈥 : Child development experts offer a science-backed framework for evaluating AI for young learners without compromising the play, exploration and human attachment that are foundational to healthy development. This session offers an 鈥渦rgent exploration鈥 of AI’s impact on brain architecture and what educators, parents and policymakers must know to protect young minds.

4 p.m. 鈥 : A panel of educators explores the causes of low student engagement, absenteeism and cheating, sharing classroom-tested solutions for creating assignments that are cheat-resistant by design. Rather than relying on cheat-detection software and pedagogy that punishes students for cheating, panelists will share how to foster a culture of academic integrity based on student agency, purpose and ownership of learning.

4 p.m. 鈥 : In this featured panel, Rep. Jim McGovern (D-Mass.), Chef Ann Foundation CEO Mara Fleishman, University of Pennsylvania student Maya Miller and Duke World Food Policy Center Director Norbert Wilson make an evidence-based case that school nutrition is an educational issue, not merely a logistical one. Panelists connect chronic hunger and poor nutrition directly to cognitive function, attendance, behavior and academic performance, and present district-level models that have transformed school meals into assets for learning.

Tuesday, March 10:

9 a.m. 鈥 : This featured session stars Roya Mahboob, CEO of the Digital Citizen Fund, who will draw on her experience growing up in Afghanistan to trace how exclusion compounds across the pipeline from K鈥12 classrooms to corporate boardrooms. Mahboob offers evidence-based interventions that have demonstrated real impact on girls’ participation and persistence in tech, as well as a vision for education that is inclusive, practical and full of possibility.

9 a.m. 鈥 : A candid discussion on the science, ethical considerations and implementation challenges of using Voice AI for assessment in K鈥12 classrooms. Learn what鈥檚 promising, what鈥檚 problematic and what鈥檚 on the horizon as experts explore how Voice AI differs from other AI tools such as large language models (LLMs), and how it can be integrated in ways that truly support students and educators.

12:30 p.m. 鈥 : In this keynote, Bruce Reed, Head of AI at Common Sense Media, and Dr. Laurie Santos, Yale psychology professor and host of The Happiness Lab podcast, examine how rapidly evolving AI technologies and social media are shaping young people’s mental health 鈥 and how families, educators and policymakers can respond. They explore the science of well-being, the risks of algorithm-driven systems and common-sense guardrails to protect young minds. 

2 p.m. 鈥 : This panel challenges the deficit framing that has long defined how schools, families and students themselves understand dyslexia. In an interactive session, a think tank-style panel will present a strength-based model of dyslexia support and examine how AI tools are beginning to unlock academic access for students whose abilities have been systematically undervalued.

3 p.m. 鈥 : Director Anna Toomey’s feature documentary tells the story of five mothers determined to establish the first public school in New York City for children with dyslexia. Toomey follows their battle to open the South Bronx Literacy Academy, addressing a learning disability that affects about 20% of the public. A post-screening discussion connects the film’s themes to national debates about reading instruction and equitable access.

4 p.m. 鈥 : As chronic absenteeism reaches historic highs, schools are doubling down on academics, interventions and incentives. But they may be missing underlying emotional and psychological factors driving absenteeism: stress, anxiety and lack of belonging. This session looks at how rest, youth voice/choice and emotionally safe environments can re-engage students.

5:30 p.m. 鈥 : Director Ernie Bustamante’s feature-length documentary offers a portrait of Sonia Manzano, the trailblazing actress who played Maria on Sesame Street for 44 years. A conversation with Manzano herself follows the screening, exploring how public media can reach children when formal schooling often fails, and what Sesame Street鈥檚 legacy means in the age of AI-generated children’s content.

Wednesday, March 11:

10 a.m. 鈥 : This performance offers an early look at a show in development that began as a teacher performance at a school meeting. In this Hamilton-meets-The Sound of Music-meets-Good Night and Good Luck story, set against today’s culture wars, three high school students and their teachers navigate questions of identity, purpose and what school can and cannot teach. A Q&A with Peter Nilsson, the show’s creator, follows the performance.

11 a.m. 鈥 : This solo session by Toby Fischer, an Ohio educator, offers a sweeping reimagination of literacy for the 21st century, arguing that reading and writing instruction must now encompass the ability to critically evaluate AI-generated text, recognize the hallmarks of synthetic content, prompt AI systems effectively and understand the social and ethical contexts in which AI-generated language circulates.

12:30 p.m. 鈥 : This keynote by Adeel Khan, Founder & CEO of MagicSchool AI, makes the case that teacher expertise, relationships and professional judgment must guide technological change. Drawing on his experience building the popular platform, Khan will share unfiltered insights on what’s working and what’s not, offering a framework for evaluating AI tools through the lens of educator agency.  

2 p.m. 鈥 : This panel examines why so many school AI initiatives rely on tools that 鈥渏ust aren鈥檛 there yet.鈥 Panelists share case studies of implementations that stumbled, the lessons of those failures and the educator-driven, grassroots efforts that can move schools from dabbling with AI tools to using them for real instructional transformation. 

Thursday, March 12:

10 a.m. 鈥 : This featured panel convenes former Presidential Science Advisor Arati Prabhakar, Renaissance Philanthropy President Kumar Garg, Carnegie Learning VP of R&D Jamie Sterling and Bezos Family Foundation Chief of Staff Eden Xenakis to explore how bold learning goals can accelerate AI-driven innovation in education. They鈥檒l examine how 鈥渕oonshot-centered鈥 models can rally diverse innovators around a shared outcome and catalyze the funding needed to scale breakthroughs.

10 a.m. 鈥 : Dubbed the 鈥渢oolbelt generation,鈥 more than half of Gen Z respondents in a recent survey said they鈥檙e considering a skilled trade career. And schools are working to modernize career preparation, including by tapping immersive technology to expose students to in-demand skilled trades. This panel, moderated by The74鈥檚 Greg Toppo, will discuss how we can harness tech to engage students in learning while preparing them to successfully meet workforce demands.

11:30 a.m. 鈥 : This session offers a ground-level counternarrative to AI anxiety, presenting a community college and workforce development partnership in Cleveland that is using AI-powered tools and training to open new economic pathways for adults who were left behind by earlier rounds of technological change. Speakers will examine what equitable AI adoption looks like in a post-industrial city and what conditions made the initiative work.

11:30 a.m. 鈥 : Leaders from higher education, industry and workforce policy examine whether universities are structured to produce graduates who can thrive in a labor market being remade by AI. The panel will ask which degrees and credential pathways are producing AI-ready graduates, where institutions are falling behind, and what structural changes will move the needle most.

11:30 a.m. 鈥 : Directed by Scott Barnett, this feature-length documentary follows bestselling author James Patterson to the front lines of America’s reading crisis to examine how the Science of Reading 鈥 a vast body of evidence-based research 鈥 is changing how children are taught to read. A post-screening discussion with literacy researchers and classroom teachers will examine what the film gets right and what systemic change will actually require.

2 p.m. 鈥 : This workshop, conducted by two top officials with the Illinois-based Education Research and Development Institute, will offer practical AI tools that automate routine tasks, generate content, analyze data and simplify communication, freeing teachers to focus on students and strategy and reducing the risk of burnout.

2:30 p.m. 鈥 : This featured panel, with Martin McKay of Everway, Hello Sunshine CEO Maureen Polo and the Brookings Institution’s Rebecca Winthrop, draws on a landmark report spanning 50 countries to explore what it means to protect children’s cognitive, social and emotional development in an AI-saturated world. Speakers will move beyond the question of whether AI should be used in schools to ask how it can be designed to strengthen young people’s capacity to think, relate and thrive.

]]>
Two New Reports Urge 鈥楬uman-Centered鈥 School AI Adoption /article/two-new-reports-urge-human-centered-school-ai-adoption/ Tue, 03 Mar 2026 11:30:00 +0000 /?post_type=article&p=1029371 Two new reports caution that if schools make missteps implementing AI, the results could haunt them for years, locking them into a future largely written by big tech instead of those closest to kids.

The reports, both the results of small, intensive gatherings of educators, policymakers, researchers, tech officials and students last year, share a common warning: AI in schools must serve human-centered learning that doesn鈥檛 simply push for more efficiency. To do anything else risks creating a generation of young people ill-equipped for the future.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


The findings come as young people say they're turning to generative AI more than ever: A Pew Research Center survey released last week found that more than half of teens ages 13 to 17 use chatbots to search for information or get help with schoolwork. About four in ten report using AI to summarize articles, books or videos, or to create or edit images and videos. And about one in five say they use chatbots to get news.

For the first report, a group of 18 people met in July in Phoenix. The group was brought together by AI for Education, a training and policy organization, and a digital curriculum company; the resulting report treats the question of how schools should view AI as a literal "Choose-Your-Own-Adventure" story: The authors lay out three possible scenarios in which educators in an imaginary school district make radically different decisions about the technology.

In the first scenario, the district retreats from AI altogether after a data breach, abandoning a previously created 鈥淚nnovation Lab,鈥 while teachers return to traditional instruction and testing.

The restrictions soon backfire. Students continue using AI at home but, without guidance, take shortcuts on homework, developing a kind of survival mechanism they privately call "school brain." Seeing how irrelevant most lessons are, they do just enough to get by, offloading thinking to AI tools. When tested, they show shallow understanding and poor foundational skills.

Test scores plummet, college acceptances drop and 40% of graduates land on academic probation. Employers report that graduates can neither work independently nor collaborate effectively with AI. Teachers begin departing in waves.

Retreating from AI, the authors find, creates 鈥渢he worst of both worlds鈥 鈥 students who can neither think independently nor use AI effectively.

In the second scenario, the district, facing competition from AI-driven private schools, goes all-in, adopting a comprehensive, district-wide AI platform for automated instruction. The platform promises greater efficiency via AI tutors, automated grading and behavioral monitoring. And while it initially lowers costs and produces higher test scores, teachers find that students are soon gaming the algorithms rather than learning. The auto-grader penalizes valid but unconventional answers, and multilingual learners are unfairly marked down for non-standard responses on tests.

Teachers find themselves defending grades they didn’t assign and can’t fully explain, while families that challenge grades are stopped by “proprietary algorithms” that even administrators can鈥檛 review. The system delivers 鈥渁 black box鈥 that removes human judgment: 鈥淪tudents could feel the difference between being evaluated by an algorithm and being understood by a teacher.鈥

Before long, graduates struggle with collaboration, creativity and adaptability 鈥 skills employers and colleges increasingly value.

In the report鈥檚 third choice, the district, via its Innovation Lab, redesigns its offerings to prepare students for an AI-driven future while keeping a focus on 鈥渉uman-centered鈥 education. Rather than focusing solely on technology, it develops a 鈥済raduate profile鈥 that emphasizes critical thinking, ethical reasoning and human-AI collaboration, among other indicators.

The lab shifts to flexible, project-based learning, and students soon learn to use AI as a tool that supports but doesn鈥檛 replace their thinking. While the district continues to satisfy state accountability through testing, it also pursues federal innovation grants to fund portfolio-based assessment systems based on the graduate profile.

All is not rosy, though. The redesign is expensive and hard on teachers. Enrollment suffers as political resistance gains steam. But graduates soon demonstrate an ability to critically evaluate AI tools, adapt quickly to workplace changes and develop a "learn how to learn" mindset that serves them in the long term.

Alumni soon report that their 鈥渞obust鈥 portfolios of work are a huge advantage in competitive job markets, and employers say they are the only new hires who critically evaluate AI鈥檚 recommendations, spotting hallucinations and biases.

Amanda Bickerstaff, AI for Education鈥檚 co-founder and CEO, said the first two scenarios are what educators at the July convening said they were seeing most often in schools.

鈥淭here was a strong recognition from everyone, including the students, the two high schoolers, that the traditional methods have not worked 鈥 for decades,鈥 she said. 鈥淏ut it feels safer.鈥

As for going "all in" on AI, she said, that point of view is inevitable in many places, given the aggressive efforts of tech giants like Google, which are "pushing into schools" and going directly to students.

鈥淭here’s this real pressure from both ed tech and AI itself, because it’s such a big market that’s never really been figured out,鈥 she said.

Amanda Bickerstaff

What makes it worse is that few tech firms employ enough teachers to ensure that their products work well for students. 鈥淭hey don’t have hundreds of education people,鈥 Bickerstaff said. Their education teams are 鈥渇ractions of their headcount, working on tools that are instantly in students鈥 hands.鈥

The third path, in which the district redesigns its offerings, is 鈥渢he most human鈥 of the three, she said, and the most intentional. 鈥淭he third path is the one that trusts humans and educators and students and families,鈥 Bickerstaff said.

鈥楨xplicitly ambidextrous鈥 schooling

The second report, by the Center on Reinventing Public Education (CRPE), a think tank at Arizona State University, also calls for a new approach to schools' decisions about AI, saying the technology "should be a catalyst for human-centered learning, not a replacement."

The CRPE report, the result of another gathering in November, asserts that schools are at a pivotal moment. Their AI policies could go one of two ways: They can either entrench outdated educational models or help bring about a fundamental transformation of schooling.

鈥淥ne of the big things that came out of those discussions was a strong feeling among the group that AI is currently being thought of as a productivity tool for the education system that we have, rather than a tool to radically improve teaching and learning and outcomes for kids,鈥 said Robin Lake, CRPE鈥檚 executive director.

During its meeting, the group repeatedly discussed an 鈥渆fficiency paradox鈥 that could make schools faster and cheaper without addressing students鈥 actual needs. To protect against it, they call for a more coherent, human-centered approach that is 鈥渆xplicitly ambidextrous,鈥 improving current practices while intentionally building toward new learning models.

The problem with AI, the report alleges, is that it could simply improve the efficiency of outdated educational models. It notes that the Scantron, a time-saving testing technology, for decades reinforced low-level standardized assessments, often at the expense of improved learning.

Instead of using AI as a new kind of Scantron, it says, AI could make way for several innovations, including new assessments that capture real-time performance as students work. It could even measure key non-academic indicators such as belonging, confidence, curiosity and relationship quality.

Robin Lake

Lake said the report鈥檚 idea of an 鈥渁mbidextrous鈥 approach to AI came from an acknowledgement by the group that 鈥渨e have to attend to the kids who are in our schools right now 鈥 and the teachers,鈥 she said. 鈥淲e have to use whatever technologies are available to make things better, but we also have to make investments in big, really different whole-school designs.鈥

Those could include not just better assessments but ways to help teachers provide 鈥渞igorous personalization grounded in the science of learning.鈥

Districts could create classrooms with multiple adults working in teams based on their expertise. And AI could enable schools to match students to internships and other experiences, handling administrative tasks so humans can focus on relationships.

Lake said the group that met in November kept coming back to one idea: Keeping an eye on both the future of school and the reality of the schools we already have.

鈥淎 lot of times when we have these conversations about AI and the future of schooling, it feels very floaty and abstract,鈥 she said. 鈥淪o I really appreciated that the fellows had a vision to connect the here-and-now to what kids need to know and [should] be able to do in the future. That feels really important for us all right now.鈥

]]>
Exclusive: New Google Partnership a 鈥楽izable Investment鈥 in AI for Teachers /article/exclusive-new-google-partnership-a-sizable-investment-in-ai-for-teachers/ Mon, 23 Feb 2026 12:01:00 +0000 /?post_type=article&p=1028964 A top professional organization for teachers has inked a three-year deal with Google to offer AI training to 鈥渁ll six million K-12 teachers and higher education faculty鈥 in the U.S., an audacious undertaking by the tech giant that could reach millions of students and dwarf previous tech forays into education.

鈥淲hile Google’s been offering educational products for 20 years, this is a different moment for us,鈥 said Chris Phillips, Google鈥檚 vice president and general manager of education.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


He called the effort the largest for Google in two decades of working with teachers and students. Phillips didn鈥檛 immediately offer a price tag, but said it鈥檚 鈥渁 sizable investment.鈥

Chris Phillips

The training, offered through the ed tech-focused group ISTE+ASCD, will include hands-on experience with Google's Gemini and NotebookLM tools and will offer certificates and digital badges.

鈥淲e have just heard so much feedback from teachers that are just saying, 鈥榃e are not prepared,鈥欌 said Richard Culatta, ISTE+ASCD鈥檚 CEO. 鈥溾榃e don’t have the training, we don’t have the background that we need for the realities of teaching in an AI world, both teaching in the classroom and also, secondarily, but equally as important, preparing students for the world that they’re going to be in.鈥欌

It's the latest in a series of large-scale teacher training initiatives over the past few months. In July, the American Federation of Teachers, the nation's second-largest teachers union, announced its own $23 million training academy, partnering with Microsoft, OpenAI and Anthropic to train up to 400,000 educators.

At the time, AFT President Randi Weingarten said the academy was a way to ensure that teachers, not technology, remain in control of the classroom.

But AFT’s partnership with OpenAI and Anthropic drew sharp criticism from educators and researchers, who questioned whether tech companies with products to sell and market share to protect are the right architects for teacher training. Education technology critic Audrey Watters called AFT’s academy 鈥渁 gigantic public experiment that no one has asked for,鈥 while ed tech analyst Alex Sarlin said tech companies were in a 鈥渓and-grab moment.鈥 

Microsoft has also launched its own community-based platform, Microsoft Elevate for Educators, offering free courses, live training sessions and credentials. 

Google itself in 2024 committed $25 million through its philanthropic arm to several nonprofits, including ISTE+ASCD, 4-H, and aiEDU, with particular attention to reaching underserved communities. Its goal at the time was to reach more than half a million K-12 and college students, as well as educators.

ISTE+ASCD, the product of two organizations that merged in 2023, was the beneficiary of $10 million of the $25 million, saying it would collaborate with several other groups, including the National Education Association and the Computer Science Teachers Association.

Though Google has its own AI platform, Culatta insisted that the work won’t be about pushing specific tools, saying that kids need enduring AI skills as the tools change. 

Richard Culatta

In 2023, ISTE+ASCD introduced its own AI chatbot built on educator-focused content and trained solely on materials developed or approved by the organization. The chatbot tapped into curated databases in a bid to give teachers routine access to high-quality research.

In some ways, efforts like those of AFT and others reflect a lack of leadership at the federal level. The Trump administration, through an executive order, has backed efforts to expand AI in schools, but last spring it also eliminated the Office of Educational Technology, which had long focused on expanding access to technology.

Culatta, who ran the office under President Obama, said it鈥檚 important that organizations like ISTE+ASCD 鈥渟tep up when there are key needs that may not be filled at the federal level. And we just want to make sure that, regardless of where we would like some things to happen, at this point we just have to do all-hands-on-deck and make sure we’re supporting kids and teachers.鈥

鈥楳assive undertaking鈥 or waste of time?

The sheer scale of Monday's announcement underscores how urgently educators see the need to learn about AI: RAND Corp. last spring found that the share of school districts training teachers on AI more than doubled from 2023 to 2024, from 23% to 48%. Researchers predicted that as many as three-fourths of districts would be in the AI training business by the end of 2025.

Robin Lake, director of the Center on Reinventing Public Education at Arizona State University, said the new partnership is "a massive undertaking that is urgently needed right now. I hope it includes a research component so we can learn from it because much more is needed."

Google鈥檚 Phillips said the company has 鈥渕ultiple arms of research happening all around the world鈥 and 鈥渨ill start to produce some of those and share them publicly where we’re doing studies鈥 in classrooms.

鈥淲e’ll see how the results land, but ultimately we want to improve learning outcomes,鈥 he said. 鈥淲e want to help change. We want to bend the curves on proficiency.鈥

Robin Lake (CRPE)

Lake, who has long urged schools to take AI readiness seriously, said school principals, district leaders and teachers-in-training 鈥渁lso need to be AI literate, as do students and families. We can鈥檛 rely only on private companies with an interest in AI products to fund and lead AI readiness.鈥

Others were more sharply critical of the new partnership.

Justin Reich, an associate professor of digital media at MIT and host of the podcast , said industry-sponsored professional development is, at its core, a 鈥渃ustomer acquisition鈥 campaign. Since ISTE+ASCD is historically both a membership-driven teacher organization and an industry trade association, he asked, 鈥淗ow can it be an honest broker to those two constituencies, while also launching an enormous initiative that privileges the products of one particular vendor?鈥

Google’s past educator certification programs, he said, 鈥渇ocused more on tool use and adoption than on learning,鈥 with no substantive evidence that improved student outcomes followed.

Phillips said its research is ongoing, but noted that its app is allowing students to self-pace lessons. 鈥淲here they struggle, they can dive deeper and learn more and get more up-to-date,鈥 he said. Among several unpublished findings, Phillips said, is one that found students spend more time on topics they鈥檙e struggling with and end up learning these topics more deeply. 

Culatta admitted that Google would of course like to see its products in the hands of teachers. But he said he and his colleagues 鈥渨ant to make sure that if there are products going to schools 鈥 and they already are 鈥 that they’re being used in ways that are really impactful.鈥

He added, 鈥淚f it was going to just be, 鈥楬ere’s how to use Gemini,鈥 Google actually doesn’t need us. We are coming in because Google is looking for somebody who can say, 鈥榃hat are really the best practices for learning with AI, not necessarily learning about AI?鈥欌

Google鈥檚 Phillips said teachers and students 鈥渃an choose other products in the market and so forth, but this program does come with using our products so that we can help teachers really get started, get going.鈥 

He noted a 鈥渟uper-generous free tier鈥 to make the tools widely accessible, and the training to use it. 鈥淏ut schools, districts, teachers themselves have choice, and I think that’s perfectly fine, but we want to play a role with not just providing tools, giving people access, but actually helping them apply it and use it鈥 to jumpstart 鈥渟afe, appropriate use of AI.鈥

Justin Reich

MIT鈥檚 Reich said his deeper concern is what he said is the near-total absence of evidence underlying AI professional development, either to teach educators how to use AI in their classrooms or simply to teach them how AI and large language models work.

鈥淟iterally no one on the planet understands how [AI] works,鈥 he said. 鈥淭he best computer scientists in the world cannot explain why LLMs generate plausible sounding text in a convincing theoretical framework.鈥

Reich recounted asking engineers at a Google DeepMind event in November whether they knew how to train junior engineers to use AI tools effectively in their work. 鈥淓very single person I talked to said, 鈥楴o,鈥欌 he said. 鈥淚f Google doesn’t know how to effectively use AI to write code, what is this business about teaching people AI literacy? We just don’t know.鈥

Benjamin Riley, a well-known AI skeptic who founded the think tank , was more blunt, casting the Google partnership as part of an ongoing process making ISTE+ASCD a 鈥渟hill鈥 for Big Tech.

鈥淚 admit I’m fascinated to see the major Big Tech companies competing so vigorously to control 鈥榯he education market,鈥欌 Riley said. 鈥淥penAI is giving away their premium model to teachers (until they won’t), and now Google is doing whatever this is.鈥

Benjamin Riley

In the past, Riley has questioned whether teaching skills such as "AI literacy" and "AI readiness" is effective, even as many others warn that such skills will be essential.

鈥淚 guess I’d credit their clairvoyance a tad more if ISTE+ASCD had not claimed, as recently as just a few years ago, that 鈥榯he future鈥 would also demand that everyone . Oops!鈥

Riley, who also founded the cognitive science advocacy and research group , predicted that much of the training will end up wasting teachers’ time, Google’s money and ISTE+ASCD’s relevance. 

鈥淗uman beings have evolved to learn from each other in the context of our relationships. This is the superpower of our species, and the kids who’ve grown up in the past 20 years are increasingly disgusted by what tech has done to them personally, and society more broadly. They are not happy about the world we’ve given them, and their voices are growing ever louder.鈥

Culatta, for his part, said AI 鈥渋s not going away. Does learning happen with people connected with each other? Sure. It’s not the only way learning happens, but it’s a very important way. And we actually think AI can help make those human-to-human learning experiences much better.鈥

]]>
Opinion: America Is About to Be Graded on AI Literacy. We Are Not Prepared. /article/america-is-about-to-be-graded-on-ai-literacy-we-are-not-prepared/ Sat, 21 Feb 2026 11:30:00 +0000 /?post_type=article&p=1028727 In 2029, a global spotlight will turn to how well U.S. students are prepared to understand and use artificial intelligence. For the first time, the Programme for International Student Assessment, or PISA, will treat AI literacy as a core competency, assessing it alongside reading, math and science.

That is not an abstract milestone for researchers or policy circles. PISA is a premier scoreboard used globally to compare how well countries are preparing young people for the future. When AI literacy becomes part of that scoreboard, it will send a clear message about who鈥檚 ready and who鈥檚 not.

The warning signs are already there. The latest PISA results place U.S. students at roughly 28th in mathematics, 6th in reading, and 10th in science among peer nations. Taken together, those rankings paint an uncomfortable picture. By international standards, the United States is already falling behind in areas that will define economic competitiveness in the years ahead.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


Based on my experience as a former state commissioner of K-12 education, America is not anywhere near ready to top this list when it comes to AI literacy. If we stay on this trajectory, we may not even make the top 30. Are we ready for this level of embarrassment on the global stage for a technology we largely created?

The problem is not that we lack innovation. Innovation is part of our national identity. The creation of transformational tools is woven into our nation鈥檚 history, and AI may prove to be the most revolutionary technology yet. The real problem is that we are not urgently preparing ourselves for the changes AI will bring. At this time, America has no real plan to prepare all our students and educators with anything close to the consistency and urgency this moment requires.

Our country鈥檚 patchwork system of state-led educational approaches and requirements is a big reason why. A student鈥檚 experience with advanced technology like AI depends largely on their ZIP code, their school district and whether educators have been given the training and support to teach this material well. In some schools, teachers are moving forward with thoughtfulness and energy. In others, staff are frozen by uncertainty, lack of training, or fear about what could go wrong. Many districts still have no clear guidance at all.

Local control has long been one of America鈥檚 strengths. But in this case, local control may be becoming a liability. When it comes to AI literacy, our system is both inefficient and inequitable. It means some students will graduate fluent in the most consequential technology of their generation, while others will be left to their own devices. In the future of work, that gap will matter.

I do not believe AI will replace teachers. Teaching is built on human relationships, trust and the ability to motivate young people. But I do believe people with AI skills will replace those without AI skills. Industries will shift. Some jobs will disappear, others will emerge, but one thing is clear: The students who can use AI responsibly and effectively will have a distinct advantage in the future economy.

That is why AI literacy is not a luxury. It is both an economic issue and an equity one.

So what should we do, and why now?

Let鈥檚 use the 2029 PISA timeline as a collective spark to give our kids the best opportunity anywhere in the world. Three years is not a lot of time in education. Curriculum adoption takes time. Teacher professional development takes time. Building sensible policies takes time. Let鈥檚 embrace this moment in time to instill urgency in everything we do. 

It's time to get off the path we too often follow in education: scramble, improvise and widen the very gaps we claim to care about closing. Instead, let's work together to develop a true national AI literacy framework, paired with a basic shared approach to assessing progress.

That does not mean federalizing classrooms or punishing schools. A national framework is about consistency and responsibility. It ensures every student learns the fundamentals, regardless of where they live, and it helps educators know what good looks like across grade levels.

AI literacy also needs to be defined clearly. Young people must understand what AI is and what it is not. It is not a human. It is a prediction machine. That distinction matters, especially now that many students are interacting with AI companions. Some of those tools have already been linked to serious harm. Kids deserve straightforward education that helps them navigate this technology safely.

If that sounds like a lot to teach, it is. But we鈥檝e done something similar before with other powerful tools, like computers in classrooms and use of the internet. Those things helped us be more efficient, and more importantly, they helped educators focus on the critical job of teaching.

This is critical, because we must also provide support for our educators if we expect students to be ready for the 2029 PISA test. AI has real potential to improve teaching and learning, but only if educators are trained and given clear guidance on how to use it responsibly and effectively. Without that preparation, we cannot expect consistent outcomes for students.

The same is true for families. Students鈥 use of AI does not stop at the schoolhouse door, and parents need the tools and understanding to support responsible use at home. Schools and families must be aligned if students are going to develop the skills and judgment this technology demands.

The encouraging news is that this should be common ground. Regardless of politics or geography, we share a responsibility to prepare young people for the world they are entering. What鈥檚 needed now is a shared national commitment to AI literacy that creates urgency around implementation and ensures that by 2029, students and educators alike are prepared, confident, and competitive on a global stage.

America invented this moment. Now we need to teach our children how to lead in it.

]]>
鈥楽tage Is Shifting Rapidly鈥 for High Schools: Are States Helping Them Keep Up? /article/stage-is-shifting-rapidly-for-high-schools-are-states-helping-them-keep-up/ Wed, 18 Feb 2026 10:30:00 +0000 /?post_type=article&p=1028617 Updated Feb. 18

The rise of artificial intelligence and other technology has traditional high schools scrambling to keep up 鈥 with states doing an uneven job of encouraging schools to embed critical thinking skills, and offer students access to internships and college courses, according to a new report.

Today's world, the nonprofit XQ Institute argues in its new report, "requires an entirely new kind of educational experience, one that traditional high schools were never designed to deliver."


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


鈥淲e live in an age of self-driving taxis, blockchain, and renewed interest in space exploration. The public launch of ChatGPT placed a powerful form of generative artificial intelligence (AI) within the reach of every American,鈥 the report continued. 鈥(The) stage is shifting rapidly. Our young people are growing up at a time when the economy and workforce are in constant flux. And high schools must keep pace.鈥

Schools not only need to emphasize work and early college experiences, XQ found, but also teach interpersonal and thinking skills as much as academics.

鈥淲hat do we need to know when we leave our high school doors?鈥 asked XQ CEO Russlynn Ali. Math, English and science are still important, she said.

鈥淏ut layered on top of that, we need to be critical thinkers,鈥 Ali said. 鈥淲e need to be able to collaborate. We need adaptability. We need these skills that will help us succeed in life, no matter what direction we choose after we leave high school.鈥

XQ wants states to encourage schools to follow the lead of Purdue Polytechnic High School in Indianapolis or the Museum High School in Grand Rapids, Michigan, where students learn academics and interpersonal skills through projects, not lectures. Another standout: Oakland, California鈥檚 Latitude High School, where every 10th grader follows an adult through a work day to learn about the job, 11th graders have month-long internships and seniors can choose to do a longer one.

The new report takes a different approach from XQ鈥檚 previous work, which has centered on schools.

鈥淪tates have more responsibility and authority over their schools than certainly in recent memory, if not in my lifetime,鈥 Ali said. 鈥淭hey must be the locus of change.鈥

XQ Policy Actions map. View the fully interactive map for more information about each state.

States are mixed, however, in how well they are meeting 10 goals XQ considers key to school innovation, the new study reports. XQ met with school leaders across the country to create the goals, then researched how much progress each state and Washington, D.C., has made toward them:

  • 46 states have met the goal of offering work experience, such as internships, as credit toward high school diplomas.
  • 38 states give every student a chance to earn college credit before graduating, by taking Advanced Placement, International Baccalaureate or college classes.
  • 32 states give schools the ability to award students class credit under a mastery or competency system showing they know the material, instead of just attending a class. 
  • 32 states have identified key skills students need to learn for the future, including non-academic skills XQ has made a major part of its work, such as teamwork, critical thinking and problem solving. States often created a 鈥淧ortrait of a Graduate鈥 spelling these out.
  • Just 10 states 鈥 Indiana, Michigan, Minnesota, North Dakota, Oklahoma, Rhode Island, Utah and Washington 鈥 met six of the goals; and no state met all 10, though 31 met at least four. Two states 鈥 Alaska and Florida 鈥 met only two of the goals.
  • Two of XQ鈥檚 goals 鈥 finding ways to measure how well students have learned interpersonal and thinking skills, then showing those on report cards  鈥 haven鈥檛 been realized by any state.

XQ plans to track changes and update the report every two years for the next decade.

鈥淚 think of these as a start, definitely not a finish line,鈥 Ali said.

To highlight the 10 policy goals and encourage states to adopt them, XQ is planning to visit schools and policymakers in 25 communities, likely over the next two years. Details of that tour, which starts March 4 in Indianapolis and stops in Columbus, Ohio, the week after, are still being developed.

XQ, a nonprofit and affiliate of investing and philanthropic firm Emerson Collective, was co-founded by Ali and Laurene Powell Jobs. Powell Jobs is Emerson鈥檚 founder and president, and wife of the late Apple founder Steve Jobs.

XQ has been refining its vision for redesigning high schools since launching in 2015 with a well-publicized campaign to identify and support innovative 鈥淪uper Schools鈥 across the country. It gave a total of $102 million in 2016 to 18 schools 鈥 including the schools mentioned above 鈥 before expanding its work to 28 states.

XQ's vision has its critics, who question the approach and whether it can prepare students. But school districts and several states, including Indiana, Rhode Island and Utah, agree with the approach and are open in their support.

Utah's state superintendent Molly Hart said the state rarely adopts any national approach, but there is great overlap between what XQ promotes and the state's push to redesign high schools, including support for mastery teaching approaches and requiring students to earn a meaningful professional credential before graduating.

鈥漌e align closely when you look at some of the goals and policy actions that XQ does,鈥 she said. 鈥淲e have a lot of similarities in what we’re looking at.鈥

The report, and shorter reports XQ released for individual states, also highlight policy changes and efforts already in place that XQ considers 鈥渂eacons鈥 for change. Among them:

  • Indiana: For giving schools increasing flexibility to award students class credit for showing proficiency in a subject, rather than just sitting through a class all semester or year.
  • Rhode Island: For changing diploma requirements so that all students, beginning in 2028, must take the courses in math, foreign language and even art that qualify them to attend college.

    鈥淥ur kids were not even taking the classes to be able to apply to those schools,鈥 said state education commissioner Angelica Infante-Green. 鈥淥nce they got there, they were in remedial courses because we weren’t preparing them for college level achievement.鈥  
  • Texas: For allowing students to earn 12 hours of college credit in high school, either through college, AP or International Baccalaureate classes.
  • Colorado: For encouraging the growth of CareerWise high school apprenticeships, the largest youth apprenticeship program in the country. Colorado also broke career preparation into three categories 鈥 Learning ABOUT Work, Learning THROUGH Work, and Learning AT Work.  
  • Utah: For giving schools grants to train teachers how to educate students using a mastery/competency approach and how to rate student progress. Utah also backed some schools in trying out vastly different report cards, keeping the traditional A-F grade scale but also giving students a new Mastery Learning Record that shows their progress on durable skills.

Ali said XQ also wanted to highlight two goals that haven鈥檛 been met yet, but that she considers vital 鈥 developing tests to measure how well students have learned key non-academic skills and then changing student report cards to rate students on those skills.

Ali said the standardized tests states use to measure student skills in math, English and science offer some sense of what students know, but are outdated. There's no clear way yet to assess how well students have mastered durable skills, or to prove to colleges or employers that they have them. And Ali said that schools tend to prioritize what the state measures and judges them on, so they won't teach those skills vigorously until the skills are part of report cards and school ratings.

But XQ recognized 12 states for trying to develop those tests and report cards, six of them for participating in a pilot project with the Educational Testing Service, the Carnegie Foundation for the Advancement of Teaching and the Mastery Transcript Consortium (now part of ETS). The Skills for the Future project has been working to create tests on durable skills, starting with three — collaboration, communication and critical thinking.

XQ is not part of this effort, but partners with Carnegie on some related work, and says it enthusiastically backs it.

The Skills for the Future team, which includes Indiana, Missouri, North Carolina, Nevada, Rhode Island and Wisconsin, is still working on creating new tests but recently broke down each of those three skills into smaller skills as one step toward creating tests.

Communication, for example, is broken down into segments (presentation skills, making messages clearer, adapting messages for different audiences or comprehending others' communication) that are then broken down further into sub-skills.

Infante-Green said measuring these skills will be a 鈥済ame changer.鈥

鈥淚 think it will give employers things that they have been looking for, as well as change how we teach, what we teach, and how we incorporate (those skills) into the academic field,鈥 she said. 鈥淚t’s important. It won’t be one or the other, it’ll be both.鈥

Ali also stressed that just passing policy changes won’t be enough. Schools, teachers and parents need to also be on board.

鈥淚t’s not a checklist,鈥 Ali said. 鈥淚t has to be implemented in a way that is sustained and empowering and supportive of what needs to happen in the classroom.鈥

Disclosure: XQ provides financial support to 社区黑料.

]]>
At These Universities, Using AI Isn鈥檛 Shunned 鈥 It鈥檚 a Graduation Requirement /article/at-these-universities-using-ai-isnt-shunned-its-a-graduation-requirement/ Tue, 17 Feb 2026 11:30:00 +0000 /?post_type=article&p=1028557 While most colleges and universities are reluctantly grappling with the use of artificial intelligence, a few are not only tolerating it but making it part of their core curricula. In the process, they're signaling to new students that using and critically evaluating AI will be a large part of their post-college lives.

Indiana's Purdue University in December approved an AI "working competency" requirement, saying that by the time they earn a diploma, undergraduates must be able to use the latest AI tools effectively in their chosen field while understanding both the technology's strengths and limitations.

Graduates must also be able to defend decisions informed by AI while sussing out its 鈥減resence, influence and consequences鈥 in their work.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


"The root of all of this is really making sure that our students are ready for the workforce and are not left behind by AI," said Haley Oliver-Jischke, Purdue's senior vice provost for academic and student success. While admitting that college students likely rely on AI for class assignments, she said what's missing is the ability to go deeper.

鈥淵es, they know how to use it, but are we instilling a framework and a practice where we’re emphasizing critical thinking?鈥 she said. 

The long-term goal of the effort is to ensure that graduates are 鈥渨ildly successful in an AI-enabled workplace,鈥 while being able to evaluate AI-generated work and criticize it. 

A microbiologist by training, Oliver-Jischke said AI has already "revolutionized" her field. Recent research suggests that AI-enabled analysis of large genomic data sets, for instance, is allowing scientists to look at DNA directly from environmental samples, revealing previously unknown microbes.

鈥淭he technology is here,鈥 said Oliver-Jischke. 鈥淵ou will lose out on opportunities if you don’t understand it or know how to utilize it and apply it effectively.鈥

Purdue鈥檚 faculty and curriculum committees began discussing the new requirement last summer, she said. The university has already identified 35 courses that will lead the way toward fulfilling the requirement. It goes into effect fully for the graduating class of 2030, who are due to arrive on campus in the fall. It won鈥檛 require a separate exam or course, but rather it will be embedded into students鈥 required coursework, she said.

Haley Oliver-Jischke

While it鈥檚 unusual, Purdue鈥檚 move isn鈥檛 unprecedented. 

In January 2025, the State University of New York system updated its information literacy curriculum to include requirements that SUNY students effectively recognize and ethically use AI. While it integrates AI into an existing requirement, it doesn't create a standalone competency like Purdue's.

In June, The Ohio State University unveiled its initiative, which will embed AI education 鈥渋nto the core of every undergraduate curriculum, equipping students with the ability to not only use AI tools, but to understand, question and innovate with them 鈥 no matter their major.鈥

Both Purdue and Ohio State are public land-grant universities, founded within months of each other in 1869 and 1870, respectively, to meet what was at the time a booming demand for agricultural and technical expertise.

Ohio State鈥檚 AI effort will require all graduates, beginning with the class of 2029, to be 鈥渇luent鈥 in the technology and how it can be responsibly applied to advance their field. 鈥淚n the not-so-distant future, every job, in every industry, is going to be impacted in some way by AI,鈥 Walter 鈥淭ed鈥 Carter Jr., the university president, said at the time.

The university's executive vice president and provost told 社区黑料 that as AI continues to influence how we work, teach and learn, "we will remain at the forefront of this technology."

Is 鈥榲ibe coding鈥 the future?

The moves come as recent surveys suggest that college students are already making AI a large part of their education, even if they鈥檙e mostly outsourcing hard work: The AI and plagiarism detection platform Copyleaks in September found that of college students have used AI for academic purposes, with 53% using it either daily or several times a week. 

While most students say they use it for brainstorming, half use AI to draft outlines and 44% to generate actual drafts of work. About one in three students uses AI to summarize readings.

In light of statistics like these, requiring a deeper competence around AI is "a good step in the right direction," said Alex Kotran, CEO of the AI Education Project, or aiEDU. "Closing out 2025, I was feeling like post-secondary is sort of deer-in-the-headlights" when it comes to AI. "This is promising, but the proof will be in the pudding: Are they building the systems for professional development and learning, because that's going to be critical. The policy is just step one."

Kotran noted that the vast majority of job postings now specifically name AI skills as a requirement. Colleges that are seen as more effective at helping students get those skills are likely producing 鈥渕ore employable鈥 graduates.

Purdue's Oliver-Jischke said the focus at the university is on "working competencies" and how they can fit into instruction across departments. "This can be a large boat to turn, but because we have a commitment to AI and this is obviously a massive STEM school, everybody is curious, interested and willing to explore how this should be implemented into the core curricula."

At the same time, she said, AI is evolving quickly and the landscape could soon be very different. 鈥淲e recognize that, and we want to remain nimble,鈥 she said. 鈥淎nd we will keep our curricula nimble to do that.鈥

Alex Kotran

The two schools鈥 focus on differentiated, workplace-specific use of AI is a smart one, Kotran said. But to be effective, universities should go beyond simply relying on off-the-shelf commercial products. 鈥淭he future of work is not a bunch of employees using ChatGPT or Gemini day-to-day and being more productive because of that,鈥 he said.

Instead, the real value of AI, at least for now, is in the custom software it enables users to build via what's known informally as "vibe coding," or using AI prompts to do the actual behind-the-scenes coding that once took advanced knowledge. "The real unlock comes when you're building custom software to do stuff more efficiently," he said.

Since generative AI came to market in 2022, the cost of building apps, websites, games and other software has dropped precipitously, while the task has gotten easier for non-technical users. 

鈥淭hat’s going to change the way we work,鈥 Kotran said. The more users can develop and control their own software, the more productive they鈥檒l be. 鈥淏ut it’s very hard to get that insight if you haven’t seen vibe coding for yourself.鈥 

Done right, the efforts at Purdue and Ohio State could be significant, Kotran said. 鈥淚t just increases the exposure that students are going to get to having the opportunity to build that intuition and to experiment,鈥 he said. 鈥淎nd it will force professors to start building their assessments around it.鈥

]]>
Opinion: Schools Need to Adopt Clear Rules for AI Use. Parents Can Help Make That Happen /article/schools-need-to-adopt-clear-rules-for-ai-use-parents-can-help-make-that-happen/ Tue, 10 Feb 2026 17:30:00 +0000 /?post_type=article&p=1028367 It has been over three years since ChatGPT launched, bringing artificial intelligence to the masses for the first time. Today, AI is reshaping schools, workplaces and entire industries. Yet only a fraction of school systems have district-level AI guidance.

The communication gap is stark. Pew Research Center found that 26% of teenagers ages 13 to 17 used ChatGPT for their schoolwork in 2024, up from 13% in 2023, yet most lacked formal instruction on responsible use. According to another survey, nearly three-quarters of parents report that their children's schools haven't shared their AI policies.


Get stories like this delivered straight to your inbox. Sign up for 社区黑料 Newsletter


This lack of guidance creates two dangerous extremes: students who fear AI because it鈥檚 been branded as cheating, and those who misuse it as a shortcut because they鈥檝e never been taught otherwise. In both cases, young people miss the opportunity to practice the critical thinking, problem-solving and ethical judgment skills regarding AI that education is meant to foster 鈥 in other words, to develop AI literacy. 

As a researcher, educator and parent, I have worked to in colleges and medical schools. But I do not see the same efforts in most K-12 schools. Advocacy is key, and parents can help make this happen.

My son discovered ChatGPT in seventh grade. Three years later, his South Carolina school district still offered no clear guidelines for AI use, so I began a methodical advocacy campaign. I attended a superintendent’s coffee chat, shared AI education books with district leaders and followed up with emails and a virtual meeting. For months, it seemed as if my efforts had fallen on deaf ears. Then, I was invited to join the district’s AI planning team, a diverse group including students, teachers, parents, administrators, and AI education consultants. Our daylong session covered generative AI applications, ethics in education and guideline development. 

Following the meeting, we participated in a survey and observed a school board presentation on AI policy development. And in January, the district Board of Trustees approved a policy governing the use of artificial intelligence in classrooms.

This experience taught me that parent voices matter. But effective advocacy requires patience, persistence and a constructive approach. Fortunately, families wanting to get involved have proven models to follow.

In , the state’s official AI Framework for Education emphasizes ethical use, transparency and family engagement, with guidance for schools to communicate clearly with parents about AI tools. In , the school board voted in 2025 to begin developing districtwide guidelines for classroom AI use, including the creation of family-facing resources to promote responsible use at home. 

Resources like offer a strong foundation for AI literacy advocacy. The handbook encourages parents to stay informed about new technologies, ask questions when schools lack clear guidelines, build relationships with staff and participate in school meetings to influence policy. These efforts can open doors to influencing policy and curriculum decisions.

Parents also can advocate for their school district to join initiatives like the which aims to train 400,000 teachers nationwide in AI fluency by 2030. They can push for partnerships with nonprofits like and , which provide free, grade-appropriate AI curricula, teacher training and ethical use frameworks. If the school district is open to collaboration, they can also request a pilot or demo for tools like , a platform that provides access to multiple AI models in one place with a focus on education. Boodlebox offers to help cover the cost of subscription. 

Local AI councils – groups of experts from fields such as law, technology, and education who advise local governments on using AI responsibly – provide another avenue for parent involvement. In Montgomery County, Pennsylvania, the brings together experts from the private sector, academia, public service and beyond. In Montgomery County, Maryland, officials formed an to "ensure the successful evaluation, coordination, implementation and adoption of AI solutions" in the county. Parents can encourage their districts to establish similar advisory committees or collaborate with such county-level groups if they already exist in their area.

Through this process, I've compiled a comprehensive list of that parents can use as conversation starters with their districts – from state frameworks to nonprofit curricula – categorized by audience: administration, teachers and students. I also keep an eye out for grant opportunities for my district. For example, the recently opened applications for the 2026 program, which helps high school educators gain AI knowledge and skills that they can take back to their computer science, science, mathematics and health classrooms.

The stakes couldn't be higher. Without AI literacy, students will struggle to navigate a world increasingly shaped by artificial intelligence. They'll lack the ethical framework to use these tools responsibly and will enter college and the workforce at a significant disadvantage compared with peers who received proper guidance. Momentum is building, but districts won't act without parent demand and involvement. If parents don't push for AI literacy now, they risk raising a generation fluent in fear or shortcuts rather than the skills that matter and the resilience needed to thrive.

Opinion: It's Time to Embrace AI Literacy for Kids /article/its-time-to-embrace-ai-literacy-for-kids/ Sun, 08 Feb 2026 11:30:00 +0000 /?post_type=article&p=1028182 Artificial intelligence has become an incredibly polarizing topic, with one side eager to integrate it into every aspect of life and the other side running from it as fast as they can. Is this new technology an existential threat or a transformational opportunity? According to Pew research from September, "Americans are more concerned than excited" about the proliferation of AI and want to exert more control over its use.

About 62% of U.S. adults report interacting with AI several times a week, and adults and children alike engage on a regular basis with AI without even realizing it. Children are growing up in a world where this technology is unquestionably a part of daily life, shaping their lives in ways no one can yet fully understand. Giving them a clearer understanding of how AI works has never been more important.

This fall, the three of us met at an event at the National Children's Museum, which brought together technology leaders, museum educators, policymakers, teachers and academic researchers focused on guiding our kids safely and productively into our technology-driven world.


Our key takeaway? Regardless of where you stand on this issue, a common ground must be forged now. Constructive dialogue must happen, and it needs voices from both sides to produce a healthy outcome for our children. Helping kids understand AI means being both optimistic and cautious, recognizing its promise while acknowledging its shortcomings and risks.

What if, alongside helping our youngest learn to use AI, we placed greater emphasis on teaching them how it works? By nurturing children's critical thinking skills, we give them the power to understand it as a tool – where it can augment human effort, and where it fails miserably.

AI is ushering in a new wave of innovation, but it is also enabling new forms of deception and manipulation. It provides access to a wealth of knowledge and opportunities, but the resulting information overload can undermine learning, cognition, creativity and human connection.

Society as a whole, from educational institutions to policymakers to parents at the dinner table, needs to invest in children's AI literacy now. In doing so, we can instill some of the most important lessons: how to be creative and discerning in the world in which we live, preparing them for a future full of new opportunities.

According to the World Economic Forum's Future of Jobs Report, employers expect that 39% of workers' core skills will change by 2030, with technological skills gaining importance most rapidly. AI will open up new fields of biomedical research. It will help us feed our growing global population. But it will also force many of us to rethink our jobs and educational pathways.

So, on a global scale, an investment in our children's AI literacy not only ensures a competitive workforce but also safeguards national prosperity, security and the responsible use of powerful technologies. Whether you think AI is exciting or threatening, children must be introduced to age-appropriate concepts about it so that they can build fluency and prepare for the future.

Another takeaway from our conversation? Adults must learn alongside – and sometimes from – our kids. As adults, we have the responsibility of fostering children's safe use of this powerful tool. But let's give ourselves the grace to acknowledge that we don't understand AI either. We didn't grow up with it, and experts and technology leaders believe that generative AI has surpassed the understanding of its creators.

There is a window of opportunity to bring everyone to the table. As parents, educators and lifelong learners, we need to have deeper conversations about AI – especially how it shapes children's learning, development and daily lives. We don't have to fully comprehend it or agree with all its intended uses; we just have to be open to talking about it and taking action. By approaching this with curiosity, we can thoughtfully consider appropriate uses and guardrails for kids – something we didn't do early enough when America's children first began using online tools like social media.

There are organizations starting to address AI literacy and technology education for families. Sesame Street and Google collaborated to release a on the healthy use of digital technology. Common Sense Media, with support from the National Parents Union and EDSAFE AI, has a series of about digital citizenship and AI arranged by grade level and a for parents as well. The website provides research-based articles, podcasts and other resources to help parents navigate age-appropriate technology use. Children's museums are developing hands-on, screen-free experiences to help demystify the processes underlying AI. There needs to be more of this, supporting children's understanding of the fundamentals, not just how to use its applications.

AI’s purpose is not to replace human life, but to enhance it. Yet, the current conversation 鈥 especially around children鈥檚 use of AI 鈥 is too passive, treating these complex systems as inevitable rather than intentional creations. Educators, industry leaders and policymakers need to insist on a richer, more engaging dialogue about how it shapes kids鈥 learning, choices and experiences. 

Whether it’s the weather report from a smart device or personalized help from a chatbot, AI literacy is now essential for young people to navigate civic life. No matter your viewpoint, it is time to embrace AI literacy. The stakes are too high for anything less than universal, active participation in preparing children for the world they鈥檙e inheriting and will soon lead.

The Dangers of AI Toys: Why This Teddy Bear Was Canceled /article/the-dangers-of-ai-toys-why-this-teddy-bear-was-canceled/ Fri, 06 Feb 2026 21:54:45 +0000 /?post_type=article&p=1028333
Reflections on Whether AI is Actually Changing Schools – and Where /article/reflections-on-whether-ai-is-actually-changing-schools-and-where/ Thu, 05 Feb 2026 17:30:00 +0000 /?post_type=article&p=1028147 Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic – and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this episode, Michael Horn and Diane Tavenner step away from their interviews to reflect one-on-one at the midpoint of their season on artificial intelligence in education. Diving into its evolving role in the classroom, they ask whether AI is truly transforming the system or simply being layered onto outdated structures. They explore a framework of three school models and discuss the challenges of meaningful innovation amid existing accountability systems and education policies. From these models, Horn and Tavenner analyze how one might expect transformational change to occur in K-12 schooling – through traditional schools incrementally changing and evolving over time or, as they argue, through fundamental migration away from the existing system.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good that you came to Boston and in the freezing cold weather, no less, to hang out a little bit with me here and have a conversation.

Diane Tavenner: It’s really fun to be in person. We haven’t done this for a long time and the timing worked out perfectly because we are in the midst of this super interesting season where we’re exploring AI and education. And we’ve had several touch points where I’m like, oh, my gosh, there’s so many things that are coming up for me that I want to talk with you about. And so we get to have a conversation, the two of us, this morning.

Michael Horn: I am looking forward to it. And I’m sure you’re going to say things. I’m going to say, wait a minute, I think I know what you mean, but double click on that. Tell us more. And so I’m excited to go deep on wherever you want to go because the conversations, they’ve both been illuminating, but they brought up more questions for me, as seems to be constantly the case with this topic.

AI Disrupting Education Processes

Diane Tavenner: Indeed. Indeed. Okay, well, let's dive in. And I had the great pleasure of spending time with you in your class yesterday. Thank you again, so much fun. And one of the topics that came up was this idea – I think it turned out to be more provocative than I anticipated it to be – but this idea that I started with: you know, a phrase I read almost constantly right now and hear everywhere is that AI is changing education.

And I don’t believe that that phrase is true or accurate. And in fact, I believe AI is not changing education. And, and so I want to dig into that idea a little bit. You know, I would argue that it’s creating a lot of problems for folks in education who are sort of in the traditional model of schools. But I don’t think it’s changing education yet. And what do you think about that?

Michael Horn: I largely agree. So I’ve been thinking about this, but a different wavelength because I’ve been seeing over X and the various pundits. There’s a lot of conversation right now of banning cell phones in schools, as you know, and there’s a lot of conversation of not just cell phones, but screens, period, you know, Google Classroom, all the rest, because it creates access to all these other things, ban it all sort of things. And then you see the occasional commentators saying, does anyone ever believe otherwise at this point?

Diane Tavenner: Right.

Michael Horn: And I had this moment because I think I’m seen often as the tech guy in education. But if you read Disrupting Class, what we actually say is that just layering tech over the existing system is not going to do anything.

Diane Tavenner: Right. I think we’re going to get to that idea in a moment.

Michael Horn: So I think so I guess my instinct is, I agree with you. Like I think we’re layering a lot of AI over existing processes. It’s breaking, frankly, a lot of education. So the one push I might have on you is it may be creating the impetus to ask some bigger questions. And, and I’m not just saying I’m not going down the road of just because the world is AI, therefore this should be AI but like legitimately, you know, we have current assignments where you can now hack them through AI. That’s called cheating. And all of a sudden everyone goes in a tailspin.

Well, let’s ask some questions about the assignments and the work itself is sort of my take from that. So I think it might be an interesting push. But I agree most of what AI is doing right now is layering over existing processes. Some of them, I suspect it’s making more efficient. Great, maybe some of them I think it’s exacerbating problems that already existed. Is that what you have in mind or.

Diane Tavenner: That is what I have in mind. And you brought up, you know, the, one of the biggest conversations is about cheating. Right now we’re seeing all these distortions and strange behaviors and blue books returning. And I’m sure the company that makes those is happy about that. But you know, they might be, they’re.

Michael Horn: Still around or they have to resuscitate. We should look that up.

Diane Tavenner: Yeah, I think when I think about it, what's happening with this idea is that everyone knows that they're supposed to have an AI policy and strategy now, but most people don't. And so this is confusing. And a lot of people, I think AI in education right now is very kind of one-offy. Like people, individual people pulling it in and people, you know, and so it's not coherent, it's not a strategy. We see it in sort of, you know, lesson planning and assignment making, which is related to, you know, why are we even teaching what we're teaching, to your point? And if you can cheat on it, then what are we trying to do? And then it goes down the line to a lot of fear that I think it's injecting – everything from these very high profile cases we're seeing of suicide that, you know, is potentially induced by the AI, to big, widespread data privacy concerns. So all of that to say, I'm hopeful. I believe the technology itself, if deployed, can actually change education. But I think humans are going to have to do that redesign and that deployment in a really strategic, thoughtful way for it to change.

Otherwise, I just think it’s plaguing us with problems.

Michael Horn: Yeah, I think that’s right. And systems structures, models, matter and processes and, you know, they’re sort of automating or, you know, playing off the existing ones. We may have a small disagreement on one thing. I’m curious about this. So, like, we don’t have many disagreements, so I’m gonna lean in if we do. I do think, so the Blue book comment aside, I can imagine that there are things we want to do in the classroom that have no AI at all involved with them, because some foundational knowledge or skill that a student can hack using AI out of the classroom is something that they actually should still work on in an analog way to create automaticity on that.

Diane Tavenner: OK.

Michael Horn: I don’t know if that’s Blue Books or what form factor. I’ll take the point there, but I guess that’s. I suspect if we break things down, there are still some foundational things we would want students to have to wrestle with that might not involve AI and be offline, if that makes sense. And then my take would be, okay, but don’t stop there. Now what are we going to use AI to create as opposed to consume with AI?

Diane Tavenner: I think that’s right. I really loved the conversation we just had with Laurence where he brought up some really interesting examples, to your point, of, you know, young people literally working together and in dialogue and, and then he talked about how AI could be supportive and enhance that. But to your point, the actual skill of having that conversation with another human and what you’re talking about is not about AI, so completely agree with that. My concern is when people are taking, you know, very old assignments and.

Michael Horn: And just dusting them off without any thought. Yeah. And I think I also think this gets the older you go, as in, I could be wrong about this. And this is, I’m sure, overly simplistic, but I think for a younger student, and, you know, I’ve got kiddos still in elementary school, so I’m still thinking a lot about that. I do think, like, that part of the landscape looks different from the older student in high school and college that, you know, it’s more problematic when you’re just dusting off that assignment, perhaps for that student.

Diane Tavenner: Right.

Michael Horn: But I do think, you know, developing number sense and automaticity with those things offline before you introduce the calculator and AI and so forth. That makes a heck of a lot of sense for a younger student. And so it’s as always with these conversations in education, I think we sort of make a statement and think it applies everywhere and there is nuance there.

Clarifying AI’s Role in Education

Diane Tavenner: That’s exactly where I’d like to go next because, so I think the dialogue around AI and education is complicated right now. And I hear a lot of people talking past each other and over each other because I think we’re using these very broad, sweeping general terms. So, for example, AI and education, and I was with a really great group of people a couple weeks ago and fortunately some really, you know, smart people noticed this talking past and talking over and called it out. And literally we went around this room and we were like, what do you mean by AI in education? And just within seconds we surfaced. Oh, well, you know, using LLMs like GPT and Claude and Gemini for instructional or operational support, using AI powered education apps, Khanmigo, Class Mojo, Magic School, AI policy development, you know, AI literacy lessons for students. And, people are literally using the phrase AI strategy, AI and education AI to mean all those things and more. And, and I’m finding that it’s very complicated to try to have meaningful dialogue when there isn’t a definition right now or people aren’t. We don’t have specificity yet.

I mean, I think some people don’t even know what AI is.

Michael Horn: Yeah, you’re probably right.

Diane Tavenner: Yeah, yeah.

Michael Horn: And it’s probably extremely fearful in those quarters. And the social media analogy is rampant right now as a result, probably because we’re not defining or breaking down. I mean, do you really not want AI to help an administrator better communicate or schedule or like really, that seems crazy, for example, on that end of it.

Diane Tavenner: And my sense is that what jumps to most people’s mind when they think about AI in education, we’ve sort of railed against this from the beginning, is literally how a student is engaging with it either in the classroom or at home. And most people have in their mind some version of some chatbot, generally speaking, which is incredibly narrow and limited, I think. And you just gave a good example of like, we could literally never bring it directly into the classroom with students. And there’s a million different uses for it in just running something as complicated as a school and a school system. And so, yeah, I guess this is just my plea for us collectively to start developing a more specific vocabulary, more intentionality. About what we mean. Let’s stop saying we’re doing AI.

Oh my gosh, everyone’s doing AI. What does that mean? And being really specific about it. And I think for me, I just want to flag as we go through the rest of this season because we’re going to have some really interesting conversations next. I’m going to push us to be really specific about what people are literally doing with AI. What does that mean?

Michael Horn: Yeah, and the conversation with Laurence, I think opened us up to that because it started to talk about very specific use cases. It occurs to me this problem has always existed in education since I’ve been in the field. Right. That we talk past each other or I remember, you know, there’s project based learning adherence to like an extreme degree. And they’ll say everything ought to be learned through projects. And then you say, well, okay, the kid learning to read though, in first grade, they’re like, oh no, no, no, that kid should get phonics and direct instruction and blah, blah, blah. And you’re like, okay, so there’s nuance, but we have to break apart, novice versus expert.

What’s the topic? What’s the goal? Right. Like, and so skill versus knowledge, as you know, that gets conflated, conflated all the time. And we don’t have precision. And so I think it’s a good plea you’re making, which is just like, let’s be more specific. What’s the objective? What’s the learner coming in with if that’s the level at which we’re talking?

Diane Tavenner: OK, all right.

Michael Horn: Where are we going next?

Diane Tavenner: To one of my favorite topics, which is school models.

Michael Horn: Okay. Yep.

Diane Tavenner: So I’ve been reflecting on a number of conversations. I’ve been having a bunch of stuff. I’ve been reading dialogue that I know that’s happening. There’s a variety of people trying to think about the future and what it looks like with AI. And there’s. I think none of these are set yet. They’re all kind of rough, but they’re starting to fall into this pattern of where people are talking about three different models, if you will, of schools. And I want to come back to what is a model in a moment.

But, but this idea that there's. I'm going to call it, I think generally people agree that we have an industrial model school at this point. And we have had for quite a long time. We've talked about this ad nauseam and that. So let's call model one sort of the current industrial model. And with the emergence of AI, model one sort of stays the industrial model, but you know, AI gets used in some of the ways we just talked about. You know, like there's, you keep all your existing structures of grade levels and schedules and teaching roles, but you have AI-enabled tools where you're using them to help grade student work or you're using it to lesson plan and, you know, instructionally plan. You're, you're doing some adaptive practice and feedback.

You know, I think the stuff that people probably are more familiar with because they see it. So, that’s kind of Model 1 still in the industrial world. I’m going to jump to model three before I talk about two, because two confuses me a little bit. So model three, let’s call that native AI education. I think most people I know would argue that this has not been invented yet. It doesn’t exist yet as a model.

Michael Horn: Do we know what it means?

Diane Tavenner: I think that the way people have started to describe it I’m not sure that I agree with. And so here’s where I am on this one, which is I don’t think we know what it looks like yet. I think we’re failing in our imagination right now of what’s possible. I think it’s a moment to go into the proverbial garage and do some real designing. Yeah, but let’s call that the post industrial model. I don’t like to call it the AI model because of the definitional problems we just said, but let’s just call it whatever the next school model, like the full model would be.

Michael Horn: OK.

Diane Tavenner: So then there’s two, model two and this one gets kind of squeezed in the middle. I think some people are calling it AI integrated education. Okay. And basically the, the emerging definition I’ve heard is that it’s where you sort of modify selected structures where the sort of benefits justify the disruption. So for example, you know, you have much more interdisciplinary curriculum. You have competency based progression in certain places, you have flexibility in existing schedules in blocks or things like that. You might start seeing some of the time out of the building or, but you’re still sort of, I would argue, existing in the industrial model kind of box, if you will. Okay, but you’re, but you’re using an integrated AI approach to kind of hack some of those things.

OK, yeah, so let me pause there before I start asking my question. See if like those resonate if you’ve heard about them, you know.

Michael Horn: Yeah, no, I haven’t thought about it this way. So I’m noodling as you’re saying it, this is real time. I guess I’m curious. Like models, like a Montessori, like a classical education or the new versions of classical education we’re seeing in microschools or you know, I don’t think Waldorf fits into your typology but like where would you slot. Like those are models too.

Diane Tavenner: They are.

Michael Horn: How do they slot into the schematic?

Diane Tavenner: Yeah. Well, let’s just take Montessori as an example. Right. So in some ways it’s still industrial. Most Montessori schools still exist Monday through Friday, kind of between 8 to 3 ish. They still have a teacher, you know, one to kind-of-many class. There’s, you know, they’ve sort of released or relaxed age grade bands, although I think society kind of imposes them on them. So you, you know there’s some sort of gravitational.

Michael Horn: I mean, you know my frustrations.

Diane Tavenner: I do know your frustrations. So I still think Montessori, maybe Montessori would be kind of a two.

Competency-Based Learning

Michael Horn: That’s what I was wondering is trying, it’s like it’s not AI, not AI enabled, but it uses the technology of the 1910s or whatever it was to have broken out of these certain structures. And so it’s a very competency based math sequence. Very competency based on the learning to read part of it and probably less so on everything else is your point. And there’s still some sort of, you were born in the year of the Scorpion and whatever it is, and therefore you’re going to learn this on this date with everyone else sort of element to it, I think is what you’re saying.

Diane Tavenner: I think that’s right. And, and one of the reasons I wanted to talk to you about this kind of framing is I’ve been trying to think about what sits in the model two category. Okay. I mean it feels very easy for me to identify, you know, almost every school as a Model 1 and many of them are starting to bring in these like AI tools if you will.

Michael Horn: Yeah.

Diane Tavenner: But they’re still clearly industrial models. It’s pretty easy for me to say I don’t think we’ve seen a model 3 yet with the infusion of AI. And then I think about like for example, what we did at Summit and Summit learning.

Michael Horn: Yeah.

Diane Tavenner: I think at the high school level that might be a model 2 without AI yet.

Michael Horn: Right.

Diane Tavenner: Where again we were sort of pushing the boundaries of that industrial framework of a model to try to, you know, reimagine or re-engineer portions or parts of what was happening with expeditions, for example, what kind of breaks the traditional five period, six period day, but all but doesn’t really break the calendar, if you will, or the, you know, eight to three kind of situation. So what do you think about that?

Michael Horn: That’s interesting. So I know we could probably geek out all day and create a taxonomy. So I won’t do that to our listeners, but I am thinking like you’ve seen almost different shots of goal, like. So I think of Florida Virtual School as an example. And I’m reading Julie Young’s draft. I’m not sure I’m supposed to say this, but draft memoir right now. And it breaks certain elements of that, but it’s still course based.

Diane Tavenner: Right, right. There you go.

Michael Horn: So the two things are interesting. And then I start to wonder. Everyone’s talking about Alpha Schools. We’re gonna have an episode on it, so stay tuned. Maybe we don’t get into it here, but, but things like that, where does that slot into your framework? Or I think about Acton Academy, probably falls into two is my guess. And so this is, I guess, what I’m trying to start to sort through as you, as you frame this.

Diane Tavenner: It’s why I wanted to bring it up today because we are about to shift to start talking with people who are either trying to redesign whole models or portions of it. And I think it will be helpful for us, for me for sure, to have this kind of framing in my mind.

Michael Horn: So you can say, pull it back. So we’re talking with an entrepreneur. Okay. You’re working in number one context. You’re working in two, three, maybe the frontier there.

Diane Tavenner: Exactly.

Michael Horn: OK.

AI Tools

Diane Tavenner: And I think there’s a couple of reasons why this is important. The first is back to that, talking past and over each other. One of the things I noticed is there are a lot of people who are gravitating to sort of the AI, you know, enabled tools that will definitely improve, you know, Model one industrial model, if you will. And they’re very passionate about that. They have really strong arguments about, like, there’s kids in schools today who need things to be better. And so we should be, you know, deploying these tools as best we can to do that. Then there’s a whole other group of people, smaller, who are like obsessing about designing Model three, a post industrial model. I don’t think anyone who’s been listening will be confused about where my kind of passions and interests lie.

So I’m definitely, you know, my attention goes to this question, and this, my energy is in that direction. And I really caught myself because I can be dismissive of that first group. And I think that is really problematic for me to do that because I. There. Well, here’s my question.

Michael Horn: Yeah.

Diane Tavenner: Do you think if those models are true in the way we’ve sort of laid them out, is the theory of action or change that you progress from 1 to 2 to 3? Because some people believe that.

Michael Horn: I strongly don’t think so.

Diane Tavenner: I don’t either. Okay, good. Say more because you’re the expert.

Michael Horn: Yeah, no, well, so. So my energy is also in three, as you know. And no one listening will be confused about that. But I think it is prudent from a systems perspective, like thinking about the country, that 80% of the dollars and energy are going into number one. I think that, from a, like, sound strategy perspective, makes a ton of sense. Right. It's where most of the students are.

It’s like classic sustaining innovation. If I’m running a company and I see the new thing coming that I think is going to upset the apple cart, I don’t push stop on what we’re doing today.

Diane Tavenner: Right.

Michael Horn: I start to test and learn what we talked about on the fringes. And then like, I start to move things out there. Okay. So that’s where I go to the statement that I don’t see any cases where number one morphs into number three or we learn stuff from number three. And I had a guest in the class say, how do we pull it back into number one? I’ve never seen that work. You’ve never seen that number three replaces number one

Diane Tavenner: So then it has to be effectively designed from scratch, grown from scratch. It’s not, you know, evolving. No. Okay. Well, some people think it’s gonna.

Michael Horn: No, I know. And I just, I. And I think it’s totally rational to be putting bets and have a portfolio strategy that are in all three buckets. And I think you can learn lessons between them. Absolutely right. I mean, we know a lot about cognitive science from number one. We also don’t know a lot, I think, because. Take growth mindset, for example.

Right. My read of the literature is incredibly powerful. And if anything in the environment undermines the message of growth mindset, it pulls the kid back into the fixed mindset view and undermines all of that intervention. And basically every structure in number one does that.

Diane Tavenner: Right.

Michael Horn: So we can have our lesson on growth mindset. I don’t think that’s the best way to do it. But like we can have our lesson on growth mindset. We might see a temporary bump on some sort of assessment and then like immediately you get the C grade in the class and you’ve been labeled because you can’t take the feedback and do anything with it. You’re not even reading the feedback and you no longer think that.

Diane Tavenner: Yeah, well, and this is the point of growth mindset not being permanent. It’s not. You don’t either have one or you don’t.

Michael Horn: Right.

Diane Tavenner: It’s a continuous state that you’re in and you can fluctuate from in and out of that state regularly. Okay, so. Well, that’s an interesting conversation to have with folks who believe that the theory of change is that progression versus what we just.

Michael Horn: And I guess stay with it one more second because I remember when we came out with Disrupting Class, a lot of people would push us and say, well, we’re talking about systems change. What are you talking about? And I think we were talking about systems change too. But my theory of system change is system replacement.

Diane Tavenner: Well, there you go.

Michael Horn: And I think it’s really hard in the US for all the reasons we know. And one of the reasons I’m in some ways more optimistic than I have been is I actually see a path for that change, that replace or disruption of systems that I haven’t seen because.

Diane Tavenner: The technology is so.

Michael Horn: Well, and the ESA policies.

Diane Tavenner: Oh, and ESAs.

Customized Education Choices Rising

Michael Horn: Right. And so we see a level of entrepreneurship, a choice and I would argue now a family increasingly, if you’re in Arizona, Florida, Arkansas, wherever. It’s not just like the free public school or I pay money, it’s like, oh, if I just default to the free public school, I’m actually foregoing 8 to 12, $13,000 that I could be spending on my kids education in the way that’s customized for what they need and what they have shown interest in, et cetera, et cetera. That’s like a very different decision set now where all of a sudden it’s actually expensive to default to the free.

Diane Tavenner: Well, and to your point, it might take a little bit of time, but it really changes people’s, you know, mindsets around everything.

Michael Horn: And I was shocked. I. I have to look deeper into this. But Ron Mattis at Step up for Students in Florida sent me this report they did. He said the number of learners in Florida who are now doing a la carte learning – so, no primary school five days a week – it's a billion-dollar market that is going through that, and I was like, I have to like sit with that.

Right. Still. And I haven’t fully digested it because that’s, that seems like a lot. But he, but it basically, if that’s true, over the course of a decade or so, whatever the choice landscape in Florida has been, people went from, okay, I have education, savings accounts, I choose a school.

Diane Tavenner: Right.

Michael Horn: To your point, with technology and a lot of entrepreneurship and a change in the landscape, to all of a sudden saying I can unbundle and do a whole set of things with this, that’s a, that’s faster than I would have expected.

Diane Tavenner: That is faster. Oh, I’d be so curious.

Michael Horn: I want to dig in all sorts of things now.

Diane Tavenner: Let’s do that at some point. Well, and what it suggests is that individual families are essentially crafting their own personal model. Now is it AI native?

Michael Horn: Probably not.

Diane Tavenner: Probably not yet. But I bet they’re starting to use some of, you know, the AI enabled tools as part of that. Yeah.

Michael Horn: And they’re probably making also some of these trade offs in terms of like when is it analog because they control the home environment. When is AI a tool to create something? They’re probably making a bunch of these nuanced choices on the ground that like you couldn’t dictate from a central planning curriculum standards perspective.

Diane Tavenner: Right. Although that might be a feature of whatever the new Model 3 is. I mean, my hope is that it is that it is personalized to that degree within the context.

Michael Horn: Yeah, great point.

Diane Tavenner: Yeah.

Michael Horn: And so now we’ve just blown both of our minds.

Diane Tavenner: I want to go back to Model 2 for a minute because I had this really fascinating conversation with your, you know, former colleague and collaborator Julia Freeland Fisher. And she said, huh, I wonder if this model two is akin to what happened when the steam powered ship was sort of invented and there was this period of time where the new steam powered ships had to be outfitted with sails because the new technology was so unreliable. And she suggested that maybe model two was that. And what the interesting point she made is she said those were the most expensive models because you had to have both technologies on them. And this hybrid version is really expensive. So I, what do you think of that?

Michael Horn: 100%. I agree. I, I hadn’t framed it immediately into that typology, but that’s almost every industry, when you see disruption, you see the old players take the new technology, right. Like there’s sort of a line, oh, they ignore the new technology. Not true. They layer it on the existing structure. Right. And the sailing ships are the perfect example.

I think the first steamships to navigate the US were like 1819 or something like that. Or 1803, and then 1819, the first transatlantic ship, the SS Savannah. And they had sails and they had steam bolted on. And I think only, I'm going to get the numbers wrong, but like 80 hours out of the 600 or whatever it took to cross were powered by steam. Basically every time the wind went the wrong way, they fired it up and kept going. Right. And so it's a classic sustaining innovation on the old paradigm.

Diane Tavenner: OK. But it’s still. Those models do not get us to model 3.

Michael Horn: They don’t. Yeah. It’s, you know, the story is that it was a 100 year disruption.

Diane Tavenner: Yeah.

Michael Horn: Where still ultimately the steamship native companies, shipbuilders ultimately upended the sail ship. And it was around 1900 I think.

Diane Tavenner: And it’s a different model ship.

Michael Horn: It’s a completely different model. Right. You don’t have the same components. You can do things differently in terms of construction because you’re not outfitting around an aerodynamic sail. Right. Like a totally different set of things you can do. So.

Diane Tavenner: OK, I have a question. Now, you said you felt comfortable with the field sort of spending 80% of its resources on Model 1 improvements, leveraging AI. Is there a risk that we overinvest in Model 1 and undermine the emergence of Model 3 because we kind of keep this old industrial model going, breathe new life into it, and there isn't a sense of urgency around creating Model 3? Yeah.

Michael Horn: Two thoughts. Clay used to always say this. The best experts in a field, like you’re a very strange anomaly. The best, deepest experts in a field are almost always consumed with the toughest problems in, we’re going to call it Model 1 at the edge of the existing paradigm.

Diane Tavenner: Interesting.

Innovation Beyond Traditional Expertise

Michael Horn: And it’s these people who are almost less expert in some way or for some reason have taken their expertise and brought it out that invent the future. But like it’s very hard to persuade the people who are dealing with the hardest, most intractable problems in the first paradigm to be persuaded to design out there. It’s why I think like, you know, when you and I met for the first time and you actually liked Disrupting Class, that was like a bit of a revelation because like we couldn’t get all these people to sort of like actually engage with it. Right. And so. Or, or they thought they were engaging with it but missing the point. Right. And so I don’t know where that goes.

Except, like, in some ways, I’m not surprised that that’s the current moment we’re in. I think the danger is if those individuals then block off our avenues to pursue three, I’m okay with them being consumed with one. I think it’s great. There are a lot of underserved kids there that need better education. And I think if they use that as a justification to block off three, through policy change, through blocking entrepreneurship, through blocking families making these choices, that would be deeply concerning.

Diane Tavenner: So glad we’re having this conversation. There’s two places where I have fear about that and.

Michael Horn: Well, you’ve lived it.

Diane Tavenner: I did, yes. Continue to, it’s my life. And there’s two places that I just want to raise here. And at the risk of how, you know, these are sort of controversial and they’re very nuanced. I often am misunderstood, so I don’t talk about them out loud very often.

Michael Horn: But thanks for doing it here.

Diane Tavenner: Here we go. So the first is the big assessment and accountability system. And you know that my belief is that that structure, which is well intended and people are deeply passionate and invested in making sure that we have real data and know what’s going on. I just spent time with a parent advocate who’s like, those tests are the only receipts we have of what’s happening with our kids. Right.

Michael Horn: There’s a great article recently around how people are just shocked because the tests have gone away and they’ve been relying on grades, which are even more worthless measures. Yeah.

Diane Tavenner: Right. And so there’s a lot of energy going to. How do we bring those back? How do we reestablish them? And, and my belief is, and my lived experience is, and most people don’t like hearing this, who believe in them, is that the existence of that accountability structure, I truly believed deeply dampened innovation and the move towards now would be model three. And I’m super disinterested in hearing about waivers and all these things. And. No, it really has an impact.

Michael Horn: Let’s get into how, because I’ve moved toward you a lot on this one. But in one standpoint, it’s like, well, it’s just focused on outcomes, frees up the inputs. You get there however you want. Like, how does it actually restrict the innovation? And is that a. And why is that a bad thing?

Diane Tavenner: Yeah, I think that it’s. Well, let me share a quote that I hear very often.

Michael Horn: OK.

Diane Tavenner: Which is, look, I’m not opposed to measuring different things but we don’t have those measurements yet. And so until we do, give me reading and math. And you know, I’m going to judge schools on reading and math, basically, which is effectively what we test in this country. And first of all, I think the problem is we actually do have those other assessments and they are crowded out. They aren’t accepted as, you know, mainstream, valid, reliable. No one is moving towards adopting them because it’s all about reading and math. And so I think it is really, you know, you measure what you value, you value what you measure. And there isn’t.

The system is not saying, no, it's completely unacceptable that we're literally measuring our entire system on these two. Important? Yeah, very important. Please do not misinterpret me – people always accuse me of not wanting kids to read.

Michael Horn: Well, by the way. But I'm curious what you think of this. This is a classic case where I think defining the age span is important because I am strongly in favor of not losing the measures to families. Note how I said it, by the way, but measures to families on, can your kid learn how to read, get those skills through, hopefully, third grade. But you know, I'm. I'm actually willing to live with some variance in the age.

Michael Horn: All the reading tests after that are really knowledge tests.

Diane Tavenner: Correct.

Michael Horn: And so I would be much more comfortable, frankly, with every school picking like the. Or student, hey, you just did a deep dive on X. Go show your competency in X. I think that’d be a much more interesting. It’d be super jagged, students showing all sorts of deep dives on a variety of things and so forth. I think that’d be way more interesting. Math, I think, is a little different.

Diane Tavenner: Yes.

Michael Horn: And I don’t know where it stops. Probably around algebra, but. Yeah.

Diane Tavenner: Well, you just said a key point that really bothers me the most, which is the accountability and testing framework that we’ve had in this country is not about informing parents. And it’s not actionable data. It’s not timely data. It’s not what we would call that feedback, honest, actual timely data.

Michael Horn: No. And in fact, it’s negative reinforcement cycles.

Diane Tavenner: Exactly. And so let's just take reading as an example. The oldest assessment technology is a reading record. I mean, schools could literally choose to assess every single kid that way and put resources towards that. It might not even be that many more minutes than they already spend on state tests.

Michael Horn: By the way, AI can really do that now.

Diane Tavenner: Well, and I’m not even getting into鈥

Michael Horn: What technology can do.

Diane Tavenner: So why, why these old assessments. Right. And so anyway, I’m deeply concerned that there’s so much good intent there and so much potential.

Michael Horn: But you’re arguing that it’s crowding out a ton of these other measures that either are there or could be developed more robustly.

Diane Tavenner: Right. And in the same way that I can be sort of dismissive of efforts around Model one, I think a lot of folks focused on today and now in kids in school are very hand wavy and very dismissive of the impact this has on the potential for innovation. So I’m, you know,

Michael Horn: Super interesting. Yeah. Okay.

Diane Tavenner: The second one is

Michael Horn: You’re taking a breath, you’re giving me a look for those that can’t. We’re not a video this time.

Diane Tavenner: No, we’re not.

Michael Horn: Yeah, go ahead. Where are you going?

Diane Tavenner: Special education.

Michael Horn: Oh, okay.

Diane Tavenner: And I want to say up front, my belief is, are we, by the.

Michael Horn: Are we at the 50th anniversary of special ed, the IDEA, at the federal level?

Diane Tavenner: We might be.

Michael Horn: I think we are, yeah.

Reimagining Education for Every Child

Diane Tavenner: Okay. Yeah. The intention is right. So many amazing people working on behalf of kids here, and most people who've spent as much time in schools as I have, with families, know it's a system that is about compliance more than it is about children. I don't believe it gets young people what they need. And I think that has a really challenging impact on our ability to educate all of our children. And this is one of, in my view, one of the biggest promises of a post industrial model is that truly every child gets a personalized education.

Michael Horn: Because everyone’s now getting an ILP as a good. Exactly right.

Diane Tavenner: Exactly, exactly. And my worry is that in both the assessment case and special education, that new models, model threes, will be judged and held accountable to the current accountability systems and the law, which completely compromises their ability to design completely new and better approaches.

Michael Horn: Yeah. And my colleague, or I guess former colleague at the Christensen Institute, Tom Arnett, has written a lot about this one, about how when you apply the standards to the new system that were for the old, you hamstring and often stunt it completely. I think that's very fair. My pushback historically has been: yeah, but the existing system is all input driven and then it has outcomes layered over. If we strip out the inputs, which by the way, people are trying to put back on for the attempts at Model 3 right now as well. Right. Like accreditation, really.

Michael Horn: I think you’re pointing out even though these output measures, I don’t even think they’re outcome measures, but output measures have been layered on, I do see where they could pull model three back in some unfortunate ways for design. And I think those are to me, that’s where the fears are really. It’s. It’s less the effort question in dollars and more the are we hamstringing it to actually just look like the existing thing we already have in slightly modified?

Diane Tavenner: Right. I’ve certainly learned from you the most, you know, how disruption happens is that people take it outside of the existing system. They have different expectations. You know, they look at it fundamentally differently. And so maybe this is the importance of ESAs. And I mean, as a person deeply invested in public schools in America, I would be very sad if we’re going to push all the innovation out into the private sector because we can’t welcome it into the public sector.

Michael Horn: Yeah.

Diane Tavenner: And maybe that’s what we’re gonna see.

Michael Horn: Yeah. I’ve always felt like the public officials ought to be responsible not for the institutions, but for the constituents. Right. And so the models may change. And by the way, look, in Florida, you have districts now launching their own microschools and creating certain services a la carte. And like, like they’re spinning off autonomously. Let’s see where it goes.

Michael Horn: Right. I mean, I don’t think we know the final thing yet. And the conversation I was having with one of my students yesterday as well was, you know, no one’s cracked yet, I think, in these. So they’re not really model three attempts because they’re not AI native. But let’s just call like this sort of emerging ecosystem. We haven’t seen a lot of high school models.

Diane Tavenner: Nope.

Michael Horn: And I think part of it is because disruption starts as primitive, able to solve simple problems, not the most complex. Identity formation becomes much more important in high school. Right. And all these rituals that we may roll our eyes at around Friday Night Lights or prom or whatever else, they’re part of this identity formation and asking who am I in relation to others? And these small, you know, I think, you know, Tyler Thigpen, Forest School, Acton Academy, he’s done a good job of creating rituals, but most high school attempts have not yet built that. And so I kind of wonder, is the upmarket, if you will, solving for all of those things with very different traditions that don’t look like Friday Night Lights, but are actually more meaningful for the current time around identity formation?

Diane Tavenner: Totally. Well, and now you’re getting at the heart of what I’m trying to contribute to with Futre, which is how do we support some of that positive identity formation and search for who I am and the life I want to lead, both in the digital world and then connect that to real world experience.

Michael Horn: Well, I think it’s interesting though, that your market is the traditional industrial Model one, largely. And so I’m, I mean, I’m curious how you think about that.

Diane Tavenner: I’m living in a bipolar world. Yeah,

Michael Horn: Yeah, yeah. Okay, okay, okay, okay. Well, I. You've built it with a modular interface, as I understand. Right. So it can exist in both, I think is part of your answer. And I, I imagine you'd say a native model 3 would actually answer a lot of the future questions as part of the design of the model itself.

Building Towards Model 3 Framework

Diane Tavenner: I think so. And I do think, you know, yes, I hope that what we’re building can live in both worlds and is one of, you know, the early ideas or components of what a Model 3 will look like. And I certainly will be engaging with folks on pushing that area, so hopefully we’ll talk more about that. I think where this is all leading for me is the next part of our season. So we’re gonna talk to a bunch of different people and I’m gonna be really. I’m gonna be in the back of my mind thinking, all right, well, where do you sit in this imperfect framework, this developing frame? But, but sort of, where is your effort sitting in that? Are you literally a whole school model? Are you an element to a model? Are you, you know, an AI enabled tool? Are you really trying to push the boundaries of designing for Model 3? Are you an interesting model two? And what do those look like? So.

Michael Horn: Yeah, well, and that’ll be interesting because I think as I look at the guests ahead, we have a lot of folks in Model 1 who are working with that system. And I’ve been wondering, given the hypothesis that we have fleshed out over the last couple of seasons of AI, like how that fits with the things that we’re interested in. And this is good. I think we’ve given a good framework on the importance, frankly, of all three of those elements and the work that they need to be doing and the dangers of crossing over perhaps, assumptions from the worlds across the different models.

Diane Tavenner: Perhaps. Awesome.

Michael Horn: This got interesting. A little spicy.

Diane Tavenner: A little bit spicy. Well, super useful for me and helpful for me to think about things. Any last things on your mind?

Michael Horn: I have one last thing. Hopefully we won’t get cut out of the studio, which is, I thought a lot about what is the world into which people are going and how does that map back to what is still core and what is not core and so forth. And I just want to float an idea by you and have you attack it.

Diane Tavenner: Great.

Michael Horn: The reflection I’ve had is we know there’s a considerable amount of cognitive science that suggests we learn best through story stories, narrative arc, and we don’t actually deliver most learning or offer learning opportunities in that. And so I guess I’ve been wondering as we think through, you know, we had the back and forth of do they need to memorize state capitals? And we both said, probably not. But I do think they should know that there’s a thing as a state capital. And so my thought about it is almost like Montessori has the I’m gonna mess it up, the great lessons or something like that. Right. And it’s a narrative arc. But I almost can imagine narrative interactive arcs where you’re like sort of, okay, how did the country’s governance evolve over time? And these thin layers that would build a lot of common reservoir of knowledge. And I think I’m largely talking K5, maybe K8, that that could be a big part.

And in the various disciplines, if you will: civics, a variety of deep dives in history, science, et cetera. I think it should be active. I think it should be multimodal. And it's not clear to me it's the teacher delivering the story.

Diane Tavenner: Say what you mean by multimodal, because a lot of people are using that term and I don’t think many people know what it means.

Michael Horn: Yeah, yeah. So I see it as, you can imagine some of these lessons being video-based through an AI. You can imagine audio. You can imagine interactive formats where you're actually answering questions, both verbally and in writing, as you work through something. Take the state capital example: you could have a lesson around how state capitals evolved within state government.

Diane Tavenner: I mean, it could be VR, like literally immersive.

Michael Horn: Right, exactly. And then you could almost imagine that you pop out of it and, like, my kids still draw maps. I actually think that's really valuable. I don't think they then have to drill on memorizing every feature, but they don't know what question to ask Gemini or ChatGPT without that thin knowledge base, right? And that's where I'm wondering whether we evolve to something like that, something that recognizes the importance of some knowledge.

Diane Tavenner: Yes.

Michael Horn: We could have mastery assessments where we thought it was really important.

Diane Tavenner: Yes.

Michael Horn: We don't have to have it for everything; frankly, exposure is probably good enough, especially if it's interactive. I don't know. What do you think of that idea? What are the flaws? And then you create the space for: hey, you're interested in this? Okay, here's your project, go deep. That's where the deep explorations of learning how to learn and developing the skills would really be.

Diane Tavenner: This feels very fun to think about, and these are the types of thoughts I'm constantly playing with and that I think should influence the design of Model 3. I love that you brought up the idea of memorizing the 50 state capitals, because I think maybe we are misunderstood when we both say we don't necessarily think kids should memorize the 50 capitals. That's not because we don't love America or believe in America. I think what we're both more interested in is kids having a deep story about the capitals and really internalizing it. I mean, I will tell you, we get to travel a lot. Do you like how I framed that? We get to travel a lot.

And when I travel, I love this country so much. It’s so fascinating. There’s so much.

Michael Horn: It's so much fun to dive in, right? You're in wherever, and you go to the Alamo or whatever it is, and it's so much fun.

Deep Learning Over Memorization

Diane Tavenner: It's so curiosity driven. So what if young kids didn't memorize 50 capitals, but instead went deep on a couple of them, in a story-based, immersive way, and got the idea of state capitals, what they mean and why they matter? They'd get very cool stories about a few of them at that age, and then a lifetime of realizing there are so many more they could learn, and so many interesting stories about them. The capitals wouldn't just be names on a page or dots on a flat map; they'd be real places with real significance, different from each other. And because kids have such access to knowledge now, if they really need to look something up, they can go look it up.

Michael Horn: They can do the deep dive, right? And on the knowledge conversation, I'm a big believer in the importance of a fundamental knowledge base and the depth at which it occurs. I just think we don't have a nuanced conversation around it.

Diane Tavenner: Right. And I'm also okay with what I'm gonna call the Swiss cheese of knowledge.

Michael Horn: Yeah, so am I.

Diane Tavenner: Where you don't have everything. Every fourth grader in America does not need to know the same facts.

Michael Horn: Yeah.

Diane Tavenner: It's okay if we learn them at different points and at different times, and if there are regional differences around that. I'm much more committed to everyone having a common set of really important skills, at least at a baseline level, and then ideally lots of people spiking in different skills in different places, because we need all of those.

Michael Horn: But when you say the skills, you're thinking they get developed through kids working in different domains and areas repeatedly, in deep dives, right? And so...

Diane Tavenner: Because you need content to practice skills.

Michael Horn: Exactly right. And you create that integration. I think a lot of times in school it goes the other way, where it's: oh, we learn how to think critically, but about what?

Diane Tavenner: Exactly.

Michael Horn: And so again, these crosswalks between the extremes, I think, are right. Yeah. Anyway.

Diane Tavenner: Yeah. And this is why we both like a project-based environment: it's the integration of the two. And there's such power in what AI can do now, where you can really do personalized learning on the content and bring it into those engaging, collaborative, communal, project-based experiences. So I love what you're saying and the direction you're going. It's very nuanced, as you know. It's...

Michael Horn: We should have some more fun with this later on. But I just wanted to float the general idea, because I had this moment in our conversation with Alex where I was asking, at what level are we thinking about difference, and what stays the same? And I think part of my reflection has been that there's actually a fair amount that stays the same, but how we've done it probably changes pretty radically.

Diane Tavenner: Indeed.

We've been recording pretty frequently, and I know we're both feeling a little stretched on thinking about new books and things we're reading; we've maybe exhausted our list. So I thought we'd take a break from that list just for today and replace it with something that will make this episode a little less evergreen. For those who are listening, we're actually recording this right before the week of Thanksgiving, and I thought I would end with some gratitude.

Michael Horn: Oh, I like it.

Diane Tavenner: So one of the fun moments of yesterday's engagement with your class, and then the office hours afterwards, was that there were so many young, amazing people, and so many of their questions were very personal: how to be a mom and lead, mentorship, my relationship with my husband over the years. I'm appreciative that they were thinking about that. And one of the things that came up was just our friendship. I think you know this, but I am so grateful for our friendship. For me, it is truly one of the big highlights coming out of COVID, the fact that we decided to do this. It gives us time together. It's just so much fun, and I'm so grateful.

Michael Horn: You know, I'm a crier, so I'm trying not to right now. Thank you. I feel the same way. It's one of those things where I feel like, how lucky am I that we get to have this conversation, even though I moved away from the Bay Area over a decade ago, which is wild; 12 years now. When this comes out, it'll be after the new year, I think. But I always tell my students, because, as you saw, 55 or so percent are not from the U.S.

I say take the time, because how cool is it to have a day when you get to say thanks? So thank you as well, and thank you all for joining us through the sentimental moment, and on Class Disrupted more broadly. Keep your questions and curiosity coming. We suspect there'll be things you disagree with that we said here, and we can't wait to learn from you. So thank you, as always, and we'll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

AI Trailblazer Google Doesn't Want Schools to 'Bypass the Human' /article/ai-trailblazer-google-doesnt-want-schools-to-bypass-the-human/ Mon, 02 Feb 2026 11:30:00 +0000 /?post_type=article&p=1027968 In 1999, the Indian computer scientist and educational theorist Sugata Mitra created a small, if audacious, learning experiment: He and colleagues at the National Institute of Information Technology cut a hole in a street-level wall of their New Delhi office building and mounted an Internet-connected personal computer, usable by anyone who passed by. No instructions, no suggestions, no lesson plans. Just access.

Within hours, Mitra would later write, children from a nearby slum appeared "and glued themselves to the computer." They learned how to use the mouse, download games and music, play videos and surf the Web, all by teaching themselves.

The experiment in what Mitra called "minimally invasive education" drew worldwide attention. It became, in the ed tech world, evidence that children simply need access to tools to be successful.

Dr Sugata Mitra in front of his ‘hole in the wall’ experiment.

But don't mention Mitra too enthusiastically to Ben Gomes, the computer scientist who co-leads Google's education efforts. While the "hole in the wall" experiment is a hopeful, charming story, he'd say, it's missing a key element: teachers.

People are fundamental in the learning process. People learn from other people, and people learn because of other people.

Ben Gomes, Google

"We are paying attention to pedagogy, and we're working with the teachers," he said. "We're not saying we just want a thousand flowers to bloom randomly."

As AI becomes more ubiquitous in schools, Gomes maintains that Google has a duty to train teachers not just in how to use its products but also in how to move students from taking shortcuts to using AI for deeper, often independent learning.

That strategy could dull longstanding complaints that ed tech more broadly is focused on replacing teachers with tech tools that don't work.

"It's a belief backed by science, to a large extent, that people are fundamental in the learning process," Gomes said, "that people learn from other people, and people learn because of other people."

Children certainly can and do learn independently, but deep conceptual understanding and literacy require guidance, especially now, nearly three decades after Mitra's hole in the wall, with many developers looking for ways to replace teachers with AI.

"Teachers are critical in this process," Gomes said. "We don't want to bypass the human."

AI as 'thought partner'

In a recent publication, Gomes and a handful of colleagues explored how AI could reverse declining global learning, largely through supporting teachers and turbocharging personalization. In mid-January, Google said it was expanding its work on AI in the classroom, offering its AI-driven Gemini app to more educators and students for free, making more of its tools available and partnering with Khan Academy to power a writing coach tool.

The search giant has put a former NASA trainer in charge of much of the effort. Julia Wilkowski, a neuroscientist, has also taught sixth-grade math and science. She began her career at an outdoor environmental school, where she recalled hiking trips in which she鈥檇 ask students to figure out the velocity of a stream using only an orange, a length of string and a stopwatch.

Wilkowski now spends "pretty much 100% of my time" focused on ensuring that Google's AI for students rests on sound learning science.

In interviews over the past few weeks, Gomes and Wilkowski spoke openly about their work, in several instances admitting that much of it amounts to helping teachers find ways to get students to stop outsourcing their thinking.

"Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don't bypass those critical thinking skills," said Wilkowski.

As an example, she said, she has worked with English teachers to help them instruct students on how to use AI as "a thought partner" in essay writing, not as the writer itself.

These teachers, she said, have succeeded by breaking down essay writing into its component parts and openly discussing its goals. They use AI to help students brainstorm essay topics, refine thesis statements, help generate first drafts and offer feedback on them, giving students "guidance and guardrails" without allowing them to turn in AI-written essays.

The work, stretching back a year and a half, "has really informed my optimism about how AI can be used successfully," she said.

Guided learning

Both Wilkowski and Gomes spoke often of "guided learning," saying students learn best when they move beyond simple answers to develop their own ideas and think critically. To get them to do so, teachers must guide them with carefully designed questions.

There's no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors.

Amanda Bickerstaff, AI for Education

Perhaps unsurprisingly, Google has an answer for that: a section of Gemini that acts much like a private tutor or guide, offering students a taste of "productive struggle" that engages but also challenges them without offering answers (at least not immediately). Rather, it steers them to the answer through a series of questions.

Gomes said the principle is working its way into most of Google's AI products, including a newer one that uses the technology to help students learn topics in interactive, more appealing ways most textbooks can't: as a text with quizzes, a narrated slideshow, an audio lesson and a "mind map" that lays out related ideas in connected graphics.
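For readers who want to picture the "guided learning" pattern concretely, here is a minimal, hypothetical sketch of the loop such a tutor might follow: withhold the answer, ask a narrowing question each turn, and only walk through the solution after a few rounds of productive struggle. The function names and the canned question list are our own illustration, not Google's implementation; a real system would generate each question with a language model.

```python
# Hypothetical sketch of a guided-learning loop: instead of answering directly,
# the tutor asks a sequence of narrowing questions and only reveals the answer
# after the student has had several attempts. The question generator is a stub.
def next_hint_question(problem: str, attempts: list[str]) -> str:
    """Stub: a real system would ask a model for a guiding question, not an answer."""
    prompts = [
        "What is the problem asking you to find?",
        "Which piece of information haven't you used yet?",
        "Can you try a smaller or simpler version of the same problem?",
    ]
    return prompts[min(len(attempts), len(prompts) - 1)]


def guided_session(problem: str, answer: str, get_student_reply, max_turns: int = 4) -> None:
    attempts: list[str] = []
    for _ in range(max_turns):
        reply = get_student_reply(next_hint_question(problem, attempts))
        attempts.append(reply)
        if reply.strip().lower() == answer.strip().lower():
            print("Tutor: Nice work, you got there yourself.")
            return
    # Only after several rounds does the tutor walk through the answer.
    print(f"Tutor: Let's work through it together. The answer is {answer}.")


if __name__ == "__main__":
    scripted = iter(["I'm not sure", "the capital?", "Sacramento"])
    guided_session(
        "What is the capital of California?",
        "Sacramento",
        lambda q: (print(f"Tutor: {q}"), next(scripted))[1],
    )
```

The design choice the articles describe is the cap on direct answers: the tutor's default move is another question, and the answer only appears after the struggle budget is spent.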

At its root, Gomes said, the dilemma over AI and cheating stems from motivation. "If I look back at my own childhood, there are certainly cases where I was just interested in getting something done for tomorrow," he said. "And there are other cases where I was curious and I wanted to read more."

The ratio between how much time students spend in one state vs. the other varies, he said, "but getting more people into the state where they are motivated, I think, is the goal."

But Amanda Bickerstaff, co-founder and CEO of AI for Education, a training and policy organization, said the reasons students turn to AI are "far more complicated than lack of motivation."

Students are dealing with "perfectionism, high-stakes assessments that prioritize grades, skill and language gaps," among other dilemmas. "Framing this primarily as a motivation issue oversimplifies what's actually happening in classrooms."

She said Google's shift toward Socratic reasoning "sounds promising, but there's a fundamental problem: There's no published research showing that GenAI chatbots have the pedagogical content knowledge to be effective Socratic tutors."

The chatbots are "sycophantic by nature," Bickerstaff said, offering answers and completing tasks even when not explicitly asked to. "That's the opposite of productive struggle."

And most young people, she said, don't have sufficient AI literacy to use these tools strategically. "Without that foundation, chatbots become a shortcut for schoolwork rather than a learning tool. You can't solve that problem through interface design alone."

More, better feedback

For her part, Wilkowski said much of the struggle over AI comes down to feedback: How much should students get, how often, and what should it look like?

Wilkowski said her daughter is in high school and was required to write an essay for a final exam in December. When Wilkowski spoke to 社区黑料 in early January, she said the essay still hadn't been graded.

"I would rather have AI-generated feedback," she said. "Give the first draft, and then the teacher [can] review it, of course, before giving it to the students."

Teachers have the opportunity to teach their students how to use these tools ethically and effectively that don't bypass those critical thinking skills.

Julia Wilkowski, Google

More broadly, she said, AI could soon change how students are assessed altogether, helping teachers move away from tools such as multiple-choice tests, whose tradeoffs are well-known in the testing world: They're easy to create, administer and grade, and they're reliable, but they also allow students to guess rather than show understanding, and they encourage learning by rote memorization rather than deeper engagement with material.

Multiple-choice tests also can't evaluate higher-order thinking skills, creativity, student writing or the ability to construct arguments. If AI can make essays or long-form questions or even projects easier to grade, wouldn't that put the multiple-choice test out of business?

"Let's say you're in physics class and you're studying acceleration-versus-time graphs and you ride your bike home," Wilkowski said. "An AI tool might pop up and say, 'Hey, here's your acceleration-versus-time graph of your bike ride home. What did you notice about your velocity? How did it change as you changed acceleration? Was there a hill that you had to overcome?'"

More relevant assignments and assessments, she said, could get students to think more critically, incorporating school into their real life in deeper ways. "It goes back to the heart of what excited me as a teacher: those excited, hands-on lessons. I'm seeing a way that … AI can facilitate those in the future."

AI for Education's Bickerstaff said it's encouraging to see Google working to create more "fit-for-purpose tools" for student use.

"The education sector desperately needs companies to move beyond general-purpose chatbots and build tools that actually support cognitive work rather than replace it," she said. "But there's still a lot of work to do, and a lot of research that needs to happen, before we can know if these tools are effective learning guides."

Opinion: How AI Is Helping NYC English Teachers Improve Middle School Reading and Writing /article/how-ai-is-helping-nyc-english-teachers-improve-middle-school-reading-and-writing/ Fri, 30 Jan 2026 11:30:00 +0000 /?post_type=article&p=1027894 Today's students are on a high-speed trajectory toward an "innovative" future, one in which artificial intelligence has equal potential to enhance or undermine their learning.

Teachers are rightly concerned that AI cheats and shortcuts will erode students' independent thinking and that increased screen time will weaken the social skills and human connection kids need more than ever in a technology-powered world.

As New York City superintendents, one in the Bronx and one in Brooklyn, we decided to lean into this moment and try to develop AI-powered teaching assistants that increase student thinking, foster human connection and complement effective teaching practice.




As a guide for sorting through the many AI product pitches in our inboxes, we focused on NYC's big goal of increasing reading achievement and decided to concentrate on improving our core English Language Arts classes. We didn't want another supplemental solution: an extra intervention when core instruction fails to meet the needs of diverse learners. Instead, we wanted more students to receive the support and feedback they need during class, so fewer of them require additional help.

Since the New York City Public Schools had already done a lot of work to improve phonics instruction and foundational reading skills in the early grades, we decided to focus on middle school, where rigor increases along with students' struggles. We met with principals who wanted to be early adopters to share our goals and an early demo. Eleven schools in the Bronx district and three in Brooklyn signed up.

We did not want the entire class to become tech-powered; rather, we targeted the AI toward the most challenging parts of the lessons, when students were doing close reading and writing. Teachers assign each student to a small group, and they all open their Chromebooks and log into the tool, which takes the texts and questions from the curriculum, makes them interactive and provides more targeted support for students who need it.

Students first collaborate with their partners, discussing their initial thinking about each question. Then, they type or speak their response into the AI. The technology confirms what the students understand through instant feedback and then pushes them to go deeper, often directing them back to a specific portion of text and asking a follow-up question that guides them from literal comprehension to inferences and author's craft. As one student said, "It's like the handout is talking to me."

While all this is going on, teachers review a live dashboard that shows every student's level of understanding of every question. If the teachers see students are struggling, they can provide immediate assistance to get them back on track.

After about 15 minutes of students working together with each other and the AI, the teachers push a button and the AI synthesizes the two biggest misconceptions in the class in real time, suggesting a discussion question to address each one (this was a "wow" moment for our teachers!). The teachers then lead a targeted class discussion, often with a lot more student participation than usual because the kids feel more confident after working with the AI and their partner.

Finally, all students complete an exit ticket, often a short written paragraph about the final question of the lesson. They again receive up to three rounds of real-time feedback on their work and revise their writing after each round. 
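For readers who think in code, here is a minimal sketch of how a loop like the one just described might be organized: per-question understanding levels feeding a teacher dashboard, a step that surfaces the class's two most common misconception tags, and an exit ticket capped at three feedback rounds. This is purely illustrative; the class names, tags and stubbed feedback function are our own assumptions, not the internals of any particular product.

```python
# Hypothetical sketch (not any vendor's actual code) of the classroom loop:
# a live dashboard of understanding, a "top two misconceptions" synthesis,
# and an exit ticket with at most three rounds of feedback and revision.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Response:
    student: str
    question_id: str
    text: str
    understanding: int          # e.g. 1 = literal, 2 = inferential, 3 = author's craft
    misconception: str | None   # tag assigned by a (stubbed) scoring model


@dataclass
class Lesson:
    responses: list[Response] = field(default_factory=list)

    def record(self, r: Response) -> None:
        self.responses.append(r)

    def dashboard(self) -> dict[tuple[str, str], int]:
        """Live view for the teacher: (student, question) -> understanding level."""
        return {(r.student, r.question_id): r.understanding for r in self.responses}

    def top_misconceptions(self, n: int = 2) -> list[str]:
        """Synthesize the class's n most common misconception tags."""
        tags = Counter(r.misconception for r in self.responses if r.misconception)
        return [tag for tag, _ in tags.most_common(n)]


def exit_ticket(draft: str, give_feedback, max_rounds: int = 3) -> str:
    """Revise a short written response through up to three rounds of feedback."""
    for _ in range(max_rounds):
        feedback = give_feedback(draft)
        if not feedback:            # stub signals "good enough"
            break
        draft = draft + " [revised: " + feedback + "]"
    return draft


if __name__ == "__main__":
    lesson = Lesson()
    lesson.record(Response("Ana", "q1", "The author is sad.", 1, "confuses narrator with author"))
    lesson.record(Response("Ben", "q1", "The narrator regrets leaving.", 2, None))
    lesson.record(Response("Cam", "q1", "The author misses home.", 1, "confuses narrator with author"))
    print(lesson.dashboard())
    print(lesson.top_misconceptions())
    print(exit_ticket("First draft.", lambda d: "cite a line from the text" if "cite" not in d else ""))
```

The point of the sketch is the shape of the data, not the model: every student response carries both an understanding level (for the dashboard) and an optional misconception tag (for the whole-class synthesis), and the revision loop is explicitly bounded.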

Based on 2025 New York State test results, classrooms that used these tools at least twice a week for the year doubled their rate of growth compared with the rest of their district. In the Bronx, for example, those students saw growth of between 14 and 16 percentage points over the previous year, compared with a 7-point improvement overall.

While we are still learning, we hope the knowledge we gained will help other educators actively shape this next generation of AI-powered tools. Here’s some of what we learned.

First, it was important to ensure that our AI tools worked seamlessly with the high-quality instructional materials (HQIM) we had already adopted. As Heather Peske has highlighted, AI tools that instantly allow teachers to create lesson plans, change assessments or dial down the level of challenge risk undermining the quality and consistent learning progression on which HQIM curricula are built.

Second, it was important to increase student collaboration, both in small groups and during full-class discussions. Most early AI products follow the old paradigm: Students put on headsets, look at a screen, and work silently on their own. No one knows the full complement of skills that young people will need in their AI-powered futures, but collaboration will be even more critical than it is today.

Third, the biggest decisions we made were pedagogical, not technical. We wanted the AI not just to support students or save teachers time, but to help our educators be more effective. Our teachers helped design the "misconceptions spotlight" tool so they could see and address the biggest areas of student struggle. They also asked for a "highlight" tool so they could celebrate strong student thinking and call out exemplary work for discussion when the learning is still fresh and relevant.

Fourth, the North Star of any improvement effort must be student outcomes. Based on the 2024 NAEP results, reading achievement nationwide is at its lowest level in 30 years. In adopting any AI tool, school and district leaders must clearly define their goals at the beginning of any partnership, and then rigorously evaluate the impact. A growing movement is working to better align incentives and ensure contracts are tied to clear measures of student impact.

The decisions school leaders make today will shape tomorrow鈥檚 outcomes. When educators both embrace the transformative power of AI and hold tight to the values and knowledge of effective instruction, every school can build the future all students deserve.

Why It's Important for Young Children to Understand What's Behind AI /zero2eight/why-its-important-for-young-children-to-understand-whats-behind-ai/ Thu, 29 Jan 2026 05:30:00 +0000 /?post_type=zero2eight&p=1027809 As the pace of product development for AI-powered toys accelerates, controversy about the appropriateness of these products for young children has left many parents and educators tempted to tune out or opt out. But as kids interact with AI more regularly, it's important to teach them what's actually behind AI and how to use it responsibly.

A new program focused on computer science and artificial intelligence aims to teach young kids to build, program and prototype together. In essence, students build their own machine learning models, solving problems, inventing characters and telling stories connected to their interests. The program, designed by Lego Education to be used in K-8 classrooms, offers project-based experiences for kids to work on in small groups. The lessons use Lego bricks, and some are screen free, while others require access to a device, such as a laptop or tablet, so kids can access an app that has a "coding canvas" with icon-based coding.

Kathy Hirsh-Pasek, professor of psychology at Temple University and a senior fellow at the Brookings Institution, commends Lego for using the science of playful learning to teach computer science. "When children learn to solve problems with hands-on materials," she states, "they are more likely to not only learn material but to be able to transfer what they have learned. In my experience, the Lego team has always worked with scientists to develop teaching tools that are aligned with the very best science on how children learn. It is one of the few companies committed to this way of doing business." (Hirsh-Pasek has collaborated with the Lego Foundation on other projects but did not take part in this initiative.)

In a significant departure from many other AI products, data from the children never leaves the computer. "A really strong perspective that we had was that we don't want anybody else to have the data; we don't even want the data. We want that to stay in the classroom and on the computer," said Andrew Sliwinski, head of product experience for Lego Education. From a technical and design perspective, Sliwinski said, "It's much easier to just send data to the cloud or use one of the big APIs [Application Program Interfaces], or one of the big companies that are out there. But when you do that, you sort of betray that principle of being able to guarantee privacy and safety to the child, and to the parent and to the teacher."

Maybe Big Tech could learn a thing or two from Big Toy.

In an interview with Mark Swartz, Sliwinski explains his role, the evolution of the curriculum and his hopes for AI more broadly. 

This interview has been edited for length and clarity.

What do you do at Lego Education?

My team is responsible for product strategy, design, engineering and, most importantly, the educational impact of our product. So really the development of our learning experiences from end to end. Lego stole me from the MIT Media Lab, where I worked on creative tools for children for many years, including, most notably, Scratch, which is a programming language for kids.

Were you in the classroom before that?

I started working in education in 2002. I was living in Detroit, working as a tutor, and I was invited to support students in Detroit public schools with the Michigan Educational Assessment Program, the state's big standardized test [at the time]. I've basically been working in some way, shape or form in education ever since.

What do you see as the through line between that work and what you're doing now?

When I showed up in Detroit all those years ago, my biggest reflection was: These are kids that don't see the purpose in mathematics. They don't feel connected to it. They don't understand how it connects to their lives. And so for me, it was like, "Well, let's solve that problem." And yeah, the rest is history.

Were you a Lego kid yourself? 

We didn’t have Legos, but we had all manner of other building materials at our disposal, like cardboard boxes and wooden blocks and access to hammers and screwdrivers and all of that fun stuff. So I grew up building things and learning through making. 

Why is it important for children to understand what's behind AI?

The phrase AI literacy is being used a lot, and I think it's being used in a very general way that is sometimes unhelpful. AI literacy is about more than how children use AI. It's about those foundational literacies that help children understand what AI is, because I'm not just interested in children developing an understanding of how to use ChatGPT to do a specific project or a specific location. I want children to understand what probability is. I want children to understand that machines reason differently than humans do, and why that is. I want children to understand that AI learns from data, and that data can have biases, and that data can have ethical considerations, and that data output is only as good as the input, right? Garbage in, garbage out.

What does responsible AI education look like for young kids?

What we're moving forward with, with Lego Education, is really focused on … those foundations. The way that I sometimes like to talk about it with the team is: So much of what is being put in front of kids today is like learning how to use the black box of an AI model or an AI tool. I'm much more interested in giving the kids a screwdriver and letting them take the box apart.

But that last analogy is figurative. 

Yes. There are no screwdrivers that come in the box, but it's not as figurative as you might think. In the tool, the kids actually get to train their own machine learning models … So a bunch of kids will work together in a group of four. That's something that's different. It is collaborative.

What lessons can we draw from the use of earlier technological developments, such as TV and the internet, in building products for young kids?

These technologies are most effective when they serve as a catalyst for joint engagement between children and adults together, rather than sort of acting as a digital babysitter, whether that's cartoons or whether that's Club Penguin [a Disney game that ran from 2005 to 2017]. …

One of the most powerful things that you can say to a child is, “I don’t know. Let’s go figure it out together.” And I think that there’s so much that parents and teachers and kids don’t know about AI, but that kids are curious about. And us expressing our own curiosity, and supporting that curiosity and engaging together is a really powerful thing. 

What guardrails has your team put in place for young children? 

When we started working on this, one of the things that was really important was to have a set of principles and a set of lines (we call them red lines, lines that we will not cross), because I think it's so easy when you're working in technology development to sort of lose track of some of those principles. We established that way, way early in the project.

Some of the ones that are maybe less apparent are things like [how] no data from the children will ever leave the computer. It is never transmitted over the internet. It is never saved to disk. It is never sent to Lego. It is never sent to any third party. And if you look at the predominant paradigm and a lot of the tools that are out there, that is not the case. …

鈥e’re the Lego Group. If we don’t care about child safety and well-being, who does? And so I think it’s been this huge responsibility, but also like this really great opportunity for us to put forward something that we feel lives up to our values. 鈥 People are always surprised by how much my team goes around the world testing in classrooms, testing with children and talking with educators and experts. We even have child developmental psychologists that are on staff. And so much of what we do is about developing the right things in collaboration with young people and educators. 

How did you test the experience with young children?

One of the most recent tests that I [did] was testing some of the AI features for the very young kids, the kindergarten to second grade group [in Chicago public schools]. One of the things that we do as the product matures is we stop being the teachers in the classroom and we actually just give the box to a … teacher in their normal day-to-day classroom and we say, "Good luck." And then we watch, because it's not enough for the kids to have a great experience when we show up knowing the product and we teach it. … It has to work for the teachers, otherwise it doesn't matter.

One of the most interesting, but also humbling things that you do as a designer for children and teachers is taking it into the field, right? Because all of the assumptions and ideas and intentions that you have, they go out the window when you put it in front of a 5-year-old. That process is just so rewarding.

Second graders try out the new Lego Computer Science and AI kits. (Image Courtesy of Lego Education)

Did anything surprise you about how they put it to use? 

I was observing a group of 4- or 5-year-olds, and they were working on this lesson where they had to build a toothbrush for a dinosaur. Part of that was figuring out how motors work and how sensors interact, but it was kind of a funny setup: the dinosaur mouth that we had built had these big teeth in it.

The 5-year-olds didn't see a dinosaur. They saw a swimming pool, because the bottom of the dinosaur's jaw had these big teeth around it, and they were like, "Oh, it's a swimming pool." So then they designed dinosaurs that went into the swimming pool.

You kind of come in with these stories and intentions of what you think kids are going to connect to. … And then you get there, and one little detail of how the model was designed throws the whole lesson out the window.

How are educators responding?

We’re doing this in a way where the teacher is able to come along for the journey, where we’ve prepared all of the materials that are necessary for a teacher, who often feels less confident about computer science and AI than their students do, giving them everything that they need to feel not just prepared, but to feel confident. 

There's this kind of power dynamic that's happening with AI today, where we're more focused on what computers can do than we are on what children can do right now. And I think that's really fundamental to our approach. … When you get a bunch of kids together to train a Lego robot how to dance, this kind of fear dissipates. They see the cause and effect between the model that they trained and what's happening in the world, and they realize that the machine only knows what they taught it.

The AI is no longer the smartest thing in the room. They’re the smartest thing in the room, and the AI is a tool. 

Opinion: What Education Leaders Can Learn from the AI Gold Rush /article/what-education-leaders-can-learn-from-the-ai-gold-rush/ Sun, 25 Jan 2026 11:30:00 +0000 /?post_type=article&p=1027403 Every week, my 7-year-old brings home worksheets with math problems and writing assignments. But what captivates me is what he creates on the back once the assigned work is done: power-ups for imaginary games, superheroes with elaborate backstories, landscapes that evolve weekly. He exists in a beautiful state of discovery and joy, in the chrysalis before transformation.

My son shows me it's possible to discover something remarkable when we expand what we consider possible. Yet in education, a system facing 73% public dissatisfaction, we hit walls repeatedly.




This inertia contributes to our current moment: steep declines in reading and math proficiency since 2019, teaching positions unfilled or filled by uncertified teachers, and growing numbers abandoning public education.

Contrast this with artificial intelligence's current trajectory.

AI faces massive uncertainty. Nobody knows where it leads or which approaches will prove most valuable. Ethical questions around bias, privacy and accountability remain unresolved.

Yet despite uncertainty — or because of it — nearly every industry is doubling down. Four major tech firms planned massive capital spending for 2025 alone. AI adoption among organizations surged in a single year, with many leaders expecting AI to transform their businesses by 2030.

This is a gold rush. Entire ecosystems are seeing transformational potential and refusing to be left behind. Organizations invest not despite uncertainty, but because standing still carries greater risk.

There’s much we can learn from the AI-fueled momentum.

To be clear, this isn't an argument about AI's merits. This is a conversation about what becomes possible when people come together around shared aspirations to restore hope, agency and possibility to education. AI's approach reveals five guiding principles that education leaders should follow:

1. Set a Bold Vision: AI leaders speak in radical terms. Education needs such bold aspirations, not five percent improvements. Talk about 100% access, 100% thriving, 100% success. Young people are leading by demanding approaches that honor their agency, desire for belonging and broad aspirations. We need to follow their lead.

2. Play the Long Game: Companies make massive investments for transformation they may not see for years. Education must embrace the same long-term thinking: investing in teacher development programs that mature over years, reimagining curricula for students’ distant futures, building systems that support sustainable excellence over immediate political wins.

3. Don't Fear Mistakes: AI adoption is rife with failure and course corrections. Despite rapid belief and investment, many initiatives fall short. Yet companies continue experimenting, learning, adjusting and trying again because they understand that innovation requires iteration. Education must take bold swings, have honest debriefs when things fall flat, adjust and move forward.

4. Democratize Access: AI reached users around the world in 2025. While quality varies and significant disparities exist, fundamental access has been opened up in ways that seemed impossible just years ago. When it comes to transformative change in education, every child deserves high-quality teachers, engaging curriculum and flourishing environments.

5. Own the Story, and Pass the Mic: Every day, AI gains new ambassadors among everyday people, inspiring others to jump in. The most powerful education stories come from young people discovering breakthroughs during light bulb moments, from parents seeing children thrive, from teachers witnessing walls coming down and possibilities surpassing imagination. We need to pass the mic, creating platforms for students to share what meaningful learning looks like, which will unlock aspirational stories that shift the system.

None of this is possible without student engagement. When students have voice and agency, believe in learning's relevance and feel supported, transformative outcomes follow. As CEO of Our Turn, I was privileged to be part of efforts that inspired leaders and institutions across the country to invest in student engagement as a core strategy. The results are encouraging: all eight measures of school engagement tracked by Gallup reached their highest levels in 2025. This is an opportunity to build positive momentum; research consistently demonstrates that engagement relates to academic achievement, post-secondary readiness, critical thinking, persistence and enhanced mental health.

Student engagement is the foundation from which all other educational outcomes flow. When we center student voice, we go from improving schools to galvanizing the next generation of engaged citizens and leaders our democracy desperately needs.

High-quality teachers are also essential. Many teaching positions are filled by uncertified teachers, with 45,500 unfilled. Teachers earn less than similarly educated professionals. Many vacancies result from teachers leaving due to low salaries, difficult conditions or inadequate support.

Some programs prove what's possible: over 90% of new teachers returned after 2023-24, versus just under 80% citywide. We must create conditions where teaching is sustainable and honored through higher salaries, better working conditions, meaningful professional development and cultures that value educators as professionals.

Investing in teacher quality is fundamental to workforce development, economic competitiveness and ensuring every child has access to excellent instruction. When we frame this as both a moral imperative and an economic necessity, we create the coalition necessary for lasting change.

Finally, transformation must focus on skill development. The workforce young people are entering demands more than technical knowledge; it requires integrated capabilities for navigating complexity, building authentic relationships and creating meaningful change.

In my work, we've partnered with foundations and organizations to develop leadership skills that result in greater innovation and impact. Our goals: young people more engaged in school and communities, and companies reporting greater levels of innovation, impact and financial sustainability.

The appeal here is undeniable. Workforce development consistently ranks among the top priorities across political divides. Given the rapid rate of change in our culture and economy, we need to develop skills for careers that don’t yet exist, for challenges we can’t yet imagine, for a world that demands creativity, adaptability and resilience.

The AI gold rush shows what's possible when we set bold visions, invest for the long term, embrace learning from failure, democratize access and amplify voices closest to transformation. Our children, like my son drawing superheroes on worksheet backs, are in chrysalis moments. The choice is ours: remain paralyzed by complexity or channel the same urgency, investment and unity of purpose driving the AI revolution. We know what works: student engagement, quality teachers and future-ready skills. The question isn't whether we have solutions. It's whether we have courage to pursue them.
