Behind the Reinvention of Summit Public Schools With AI

The 74, America's Education News Source. Tue, 07 Apr 2026.

Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing.

In the latest episode exploring new school models powered by artificial intelligence, Summit Public Schools' Cady Ching and Dan Effland join Michael Horn and Diane Tavenner to discuss Summit's transformation into an AI-native school model. The conversation examines how clarity around school outcomes and model design enables the effective integration of new technology, followed by insights into the evolution of Summit's expeditions. Ching and Effland emphasize the importance of a holistic, purposeful education, as well as the need for a robust technology infrastructure to scale innovation.

Listen to the episode below. A full transcript follows.

Cady Ching: I think what has been really helpful for me is to list the things that a model is not. It's not a curriculum, it's not an LMS, it's not a schedule by itself, it's not a set of beliefs or a graduate profile by itself. Those are parts of a model, but a lot of the building that we're seeing right now is focused on building for parts versus building for an actual whole model. And so the AI-native model is how all of those model elements are working together. And it is not going to be replacing a school model. It's going to expose whether or not you actually have a model. And I think AI is forcing a lot of school systems right now to get really honest, because if you don't know what students are supposed to be learning and you're not sure how they're showing that or what adults are responsible for, AI just layers on complexity and, quite honestly, chaos. But if you do have the level of clarity that Dan is speaking about, AI is actually making systems work a lot better, or it can make systems work a lot better.

I think the jury is out on the tools that we need and how we can create the tools that we need. But AI really isn’t replacing, it’s revealing whether or not your school model actually exists.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane, it is good to see you with some excitement for today’s episode.

Diane Tavenner: Yeah, we have a real treat today. We’ve got two of my favorite educators in the world joining us for what I’m sure is going to be just a really interesting conversation.

Michael Horn: Well, and for years, as obviously I've learned about Summit from you, direct from you, and yet it's been nearly 3 years, I think, since you passed the baton, if math is still a thing. And I know from afar that the team continues to be among the most innovative schools in the country, and so I know that they continue to think about reinvention, and frankly, you know, what does Summit need to look like? How can it get even better? All these questions for its learners. And so I'm incredibly excited to dig in and learn about what they're calling Summit 3.0 on today's show. I will say it's also interesting to have this conversation because we're sort of in our model geek out, if you will, at the moment, right? While we're having this conversation, we've had the founders of Alpha School and Flourish on, both of which are designed as AI-native models. And for those who listened to those episodes, we sort of created a little bit of a side-by-side, if you will, where we said, hey, Summit is here as this baseline for a pre-AI model trying to do personalization or optimization of each kid's learning. And we explored what can you do in an AI-native world? How can you design differently? But today what's exciting, I think, is we're going to get to dig into what it looks like for an existing model with that orientation to become, quote unquote, AI-native.

And as you know, transformation and how organizations reinvent themselves, that's something I get really passionate and excited about. So I cannot wait to learn from this real-life example in progress.

Diane Tavenner: Well, we’ve got the two perfect people for that conversation, Michael. And so let me introduce you to Cady Ching, who is the CEO of Summit Public Schools, where she was an extraordinary teacher and school and network leader for a decade before taking on that role. So she brings this full spectrum of experience to this next phase. And Dan Effland, who is the senior director of innovation at Summit, where he was also an extraordinary teacher and school leader before taking on this new role of leading for the second time in the history of Summit, the reinvention of the model. And so welcome, Dan and Cady. We’re so happy that you’re here with us and excited to talk to you about the work you’re doing.

Cady Ching: Thank you. Thank you so much. I'm excited too. It's coming at this moment for Dan and me where we've been trying on a lot of language about where we've been, where we are today, and where we're going. So selfishly, this is a milestone for us.

Michael Horn: Well, and I get to feel like I'm jumping in on a team huddle of y'all. Yeah, this will be fun.

Cady Ching: Welcome, Michael.

Michael Horn: Thank you.

What Is a School? 

Diane Tavenner: Dan and Cady, a few weeks ago we got together and you walked me through the thinking and planning you're doing. And honestly, I was captivated by your definition of school, you know, because I got stuck on it and I wanted to dissect every word. It's honestly the simplest definition I've ever read of a school. And I wanted to start there today because we have always talked about getting to the simplicity on the other side of complexity. And I think you've done it with this definition, and I think it's going to be really powerful in this next chapter. And so maybe, Dan, kick us off. If you will, share that definition and a little bit about how it came to you, or how you all came to it in your process, and what you think it unlocks.

Dan Effland: Yeah, happy to. And thanks for having me here. I'm so excited to talk to you all. Yeah, so, I mean, we've been working on this for years, right? What is simplicity on the other side of complexity? And I think as we've been digging into what redesigning looks like, it became really clear that you have to get down to some foundational elements to avoid designing within conventions without even really realizing you're doing it. And so the way we're thinking about schools is simply: it's a group of young people. It's a set of outcomes or competencies. And then it's a set of resources that help you support young people to achieve those outcomes or competencies. That's it.

Kids, outcomes, resources. And stripping all the way back to that has allowed us to then engage with our community, because all this work is done with students, caregivers, and educators, and ask, OK, what do we really want? What do schools really need to be? With full freedom. We call them dreaming sessions, where we can really engage from the simplest foundational elements and not get hooked by any of the conventions that have existed, you know, for decades or longer in a lot of cases.

Summit 2.0: Evolution and Vision

Michael Horn: It’s really cool because you’ve sort of, like you said, you sort of have a conversation around what those end posts, and we can sort of figure out what’s inside the box to get there apart from what’s always been there. But before we go to that sort of Summit 3.0 vision and where you’re thinking currently is, because I’m imagining you’re going to have lots of trade-offs and changes as you go through the design process, but I think it would be helpful to do a quick turn on Summit 2.0. Both to ground, frankly, our audience, but also to set up a question of how things are changing and where and so forth so that we can understand that. And so I’d love, and maybe Cady, you dive in on this first, how would you describe the Summit 2.0 model, which was not only in your schools, but schools across the country? It’s one of the reasons I think it can be called a model,  it’s scaled beyond Summit itself, right? And as you think about that, the new model, what is it in the Summit 2.0 that you’d say, we really want to hold on to this? Or where are the things that you’re saying, hey, actually, that’s something we can leave behind or start to question whether we want to change that?

Cady Ching: Yeah, thanks for asking this question. I think it's so important. The reason why I keep smiling when you all say Summit 2.0 and 3.0 is because Dan and I actually got into it a couple weeks ago about whether we wanted to use that language or not. And my issue with it was, I think it serves a purpose because, to Diane's point, it is simplicity on the other side of complexity. But there is a danger in the simplification of the 2.0 and 3.0, because at Summit we really think about innovation in two ways. One being innovation through refinement, which is the day-to-day tightening of the model elements that we're building on for these larger moments of innovation, which we call innovation for redesign. Those are the sector-shifting, big-model, what we call Big M changes. But I'm going to use the Summit 2.0 and 3.0 language today as shorthand.

Michael Horn: Thanks for doing it for the listeners.

Cady Ching: Yeah, and so Summit 2.0 really speaks to our personalization era at Summit, where we showed personalization doesn't need to be a luxury. And we did that by designing a cohesive student and teacher experience, and it included model elements like mentoring and skills assessment and differentiation using real-time data, which we enabled through tech. And the tech that we co-built was called the Summit Learning Platform. For me, what I think was most remarkable about what we proved in Summit 2.0 is what you mentioned. It was scalable, and it did scale, and schools were able to implement and sustain the Summit model on public dollars. Which was remarkable. And so we reached 100,000 students, 6,000 educators, and 400 schools across 40 states.

And we did it with district, charter, private, rural, suburban, and urban. It was completely shifting the field. And then we normalized mastery-based learning, personalized playlists, and skills and habits in a way that is now the foundation and the baseline in so many places that we're now talking about building these AI-native models on top of. And so to the second part of your question, which I'll kick off, and then, Dan, I'm going to pass it to you to add on: we think about model elements and processes that we want to carry forward into Summit 3.0. On the process side, which is where I thrive, we were successful because we were leading from this intersection of the learning science, community engagement, and technology, and we centered teachers and students at every part of the design. And we've used those same design principles to continuously improve our model since Summit 2.0. For me, I feel like we're 4 years into Summit 3.0, and we've already gotten some really exciting data back, situating us as leaders in the field again around what we've built on top of the personalization.

Last year, in our most recent data, we saw that our Summit alumni have some of the highest post-graduation incomes and lowest debt loads, as compared to other top-performing charters. And this is the type of longitudinal outcome evidence we've been really longing for. And when you think back to how Dan just defined the system, what that data does for us is it grounds us: we do have a really strong set of outcomes and competencies that are timeless. Our young people are now achieving them, and we're letting go of the old technology to create space for AI-reimagined infrastructure that's going to help us better allocate resources. And we think our biggest resource levers are people, technology, and time. So that's really how we're thinking about Summit 2.0 setting us up for Summit 3.0.

Michael Horn: Dan, did you want to jump in there and add some?

Dan Effland: Yeah, I think, you know, Cady and I were both teachers in Summit 2.0. We were both school leaders in it, and so we have a lot of really direct connection to it. And the thing that really strikes me is, you know, the learning platform is no longer in existence, but the elements of the model really deeply took root. Mentoring, mastery, what we called habits of success, which I think we're calling durable skills in our world now. Like, I'm fine with it, whatever we want to call it. It's become ubiquitous. And I think it really gives us a sense of a strong foundation: we've done this before, we've built a model that's scaled and really stuck.

And it doesn’t matter if the technology, you know, is stuck or not, because that technology is not the model. The tech model is these elements of how you support kids to master these outcomes with whatever available resources you have are. And so, yeah, I think there’s a point of pride when we think about, you know, what we’re begrudgingly calling Summit 2.0. And then I think there’s a sense of the strength of the foundation to then build what’s coming next.

Personalization & Durable Skills

Michael Horn: It’s interesting. And we’ll come back to the technology, I know, and we want to circle back to that. But hearing Cady, you described the model, used a few words that I think are really important for people to hear. One of them was cohesive, because I think a lot of the tech efforts right now around personalization in so much of the country are the opposite of cohesive. And that’s why we’re seeing a blowback sometimes against technology, because it’s sort of all over the place and hundreds of things going on at once for a young person with tons of distractions. And you talked about it being grounded in the learning sciences and personalization as a, as a means, not the ends, right? And, and then you have these longitudinal outcomes. And I’m just calling them out because I think people often lose sight of, this is the bedrock, right, of how we build from, and then go from there. And the other piece, and Dan, you just referenced this, the field is now calling it durable skills.

I still prefer habits of success. Let me just be on record on that one. But one of the things you all really did well around Summit 2.0 was have incredible clarity on the mission, on what success looks like, such that you could measure in the way you just said, Cady. And I didn't know those stats. I mean, it's fascinating. And then you had these commencement-level outcomes, right? You were super clear on what it looks like for a Summit graduate as they go out in the wild. And it seems in some ways those commencement-level outcomes have been precursors to the movement across states that we've seen in the Portraits of a Graduate. And I do think that there are some key differences. I'll hold my editorial back on what those are, mostly because I want your take on that.

Like, what, if anything, are the differences between those commencement-level outcomes that you all have defined and the portraits of a graduate that we see states doing? And more broadly, what's the importance of being super clear on what those outcomes are, and how you'd know, on the other side, if you could speak to that. And I don't know, I'll make it a grab bag of which one of you wants to jump in on that.

Cady Ching: Dan, take it away.

Dan Effland: Awesome. Yeah, I mean, our vision has been the same for 23 years. It's preparing young people for a fulfilled life; really, all people. We think of our staff as part of that too. And a fulfilled life is, in some ways, again, simple. It is purposeful work, financial independence, strong community, strong relationships, and health. And so that's given us a holistic picture, a holistic point B that we're always going for.

You know, I don’t, I don’t know how I compare it to Portrait of a Graduate or Portrait of a Learner. What I know is it gives us a lot of clarity in that you can’t design a coherent model without clarity of where you’re headed. And that it’s also really important that that clarity is holistic and is not simply a set of academic outcomes. It is much broader than that. And that gives us a huge advantage in this work right now because we’re not spending a lot of time. We certainly talk to our community and affirm, you know, on a regular basis, is this still what people want? Is this still what our communities are after? And it is. And so we can move right to like, okay, how do we get there?

Cady Ching: The thing that I would add on top of that is, I loved, Michael, what you called out around the language of a model. I think that at the operator level, and when I'm talking to other school leaders, this word is used in a lot of different ways. And I think what has been really helpful for me is to list the things that a model is not. It's not a curriculum. It's not an LMS. It's not a schedule by itself. It's not a set of beliefs or a graduate profile by itself. Those are parts of a model.

But a lot of the building that we're seeing right now is focused on building for parts versus building for an actual whole model. And so the AI-native model is how all of those model elements are working together, and it is not going to be replacing a school model; it's going to expose whether or not you actually have a model. And I think AI is forcing a lot of school systems right now to get really honest, because if you don't know what students are supposed to be learning, and you're not sure how they're showing that, or what adults are responsible for, AI just layers on complexity and, quite honestly, chaos. But if you do have the level of clarity that Dan is speaking about, AI is actually making systems work a lot better, or it can make systems work a lot better. I think the jury is out on the tools that we need and how we can create the tools that we need, but AI really isn't replacing; it's revealing whether or not your school model actually exists.

Diane Tavenner: I'd love it if we go back to your simple definition, Dan, that we started with when we sat down. You used the phrase "package of outcomes," and I was obsessed with that word "package," for this reason, because, you know, maybe I will jump in here a little bit on the portrait of a graduate.

Michael Horn: The table’s been set for you, Diane. 

Diane Tavenner: Yeah. And, you know, one of Summit's longtime beloved board members, honestly one of the most forward-thinking philanthropists, I think, who launched a scholarship for Summit graduates going into pathways years ago, like, ahead of the curve, you know, sent us a note the other day with a real critique of portraits of a graduate. He was sort of reading about them and was just very, you know, like, what are these people thinking? And I think what he was responding to was that a lot of the portraits of a graduate feel very checkboxy and compliance-oriented, versus this sort of holistic picture. And I know that's not the way they were intended.

AI Evolution in Education Models

Diane Tavenner: They all have good intentions behind them, but the way they have been sort of brought to life, and then communicated, and then implemented is what Cady, I think, is speaking to: not as a model, but as these individual components that don't cohere into an organized set of resources to achieve that package of outcomes, if you will. And so I think that what you all just described is at the core of your success going forward, and what an advantage you have. And it really speaks, honestly, to the durability that you're carrying all of that forward in this next phase: that vision of living a life of wellbeing actually hasn't changed, right? The elements of that haven't changed, and that's what you're equipping young people for. So, you know, in a recent episode, Michael and I had a conversation, just the two of us, which was super fun, and we were dissecting a way of thinking about school models in three buckets. And I know you are both familiar with this framework, which is essentially that, you know, Model 1 will use AI to make the existing industrial-model school more efficient and better. Model 2 will stretch the bounds of that industrial-model school with integrated AI. And Model 3 will be AI-native, you know, essentially built from the ground up with AI capabilities assumed to be at the core. And, you know, as you think about where you're now going with Summit 3.0, how do you view it in the context of this framework? And what does AI make possible that wasn't possible in 2.0, because it was designed pre-AI?

Dan Effland: Love this question. And I did listen to that episode. So I'll start with the model part, and then I really want to get into what AI makes possible and what it pushes us to do. So I love reading, like, Learner Studios' 3 Horizons model. I love Bob Hughes' paper on the 3 models. I find that stuff really, really important for evaluating what exists, and really valuable for visioning and for getting into this place of what really is possible. I will say, when we start designing and working with our young people and working with our caregivers and our educators, I actually find it useful to set those categories aside and to ask the more foundational questions, like: we know where we want to go, we have this clear vision, we have this really simple, you know, conception of what a school is, with kids, outcomes, and resources.

And now let’s go from here. And when you get into, like, as we’ve talked about, we have a lot of clarity about our outcomes already. We really believe deeply that this holistic model of a healthy, thriving, you know, young person, young adult, adult is going to be durable regardless of the transitions that are happening in our society. But when it comes to the resources part, now we have this whole huge different potential, one, AI being a resource, but also a way that I think we’re most really interested when it comes to AI is how we can use it if we integrate it into our tech stack. Really how, like, with a really robust knowledge graph and really strong data layer, you could be dynamically reallocating resources in a way that just would be impossible for people. You know, like when I used to build an annual schedule, like the primary schedule with our Dean of Operations, she and I would sit in an office for a week with a spreadsheet to make a schedule for the year that never changed, right? Like, it’s just so labor-intensive. But now I think when we think about AI as part of our infrastructure, and it’s kind of a layer in our tech stack interacting with a really robust knowledge graph and data layer, we can start to ask ourselves, like, how do we get the right resources to the right kids at the right time for the right outcome? And really get very, very precise, and also do that dynamically. And I think that then allows us to think about personalization, just-in-time instruction, integrating real-world experiences, ensuring that personalized learning still happens in community and there’s deep human connection that is part of personalized learning journey in a way that was, was not possible when, you know, 12 years ago when we were thinking about Summit 2.0, the technology just didn’t exist.

And so, I mean, it’s exciting. I mean, I really think there’s incredible possibility there. And while there’s definitely lots of really cool tools being built, we’re much more focused on the, like, where does this fit as part of our technology infrastructure or our tech stack, because we think that’s, like, potentially a huge lever for transforming learning for young people.

Current Applications of AI in Schools

Michael Horn: It’s fascinating to me, ’cause you just named a number of things that AI could do that I had never thought about in terms of, like, dynamically changing the schedule for, you know, the school and students and, like, there’s some pretty cool things you can start to imagine that ripple out of that. One of the things in that conversation that Diane referenced that she and I agreed to hold ourselves accountable for was to get really specific when we talk to school leaders about, so what’s happening today in your schools that’s actually leveraging AI or is quote, unquote AI native, if you will? And so you all are obviously still in the design phase for 3.0. I use that with trepidation now, but put that aside for a second. Like, today, if I were to, you know, get to be in California again and I was hanging out in your schools, what would I see that’s powered today by something that’s AI native? What is it? What are the tools? What does it look like? What does it do? What are you building versus partnering with? Give, give us a sense of some concrete applications. Anywhere in the tech stack or during the day, that is AI-powered?

Cady Ching: I think this would be a good opportunity to talk about a specific tool that we're using, which, maybe not ironically, is Futre, as one model example of what it can look like. And Dan can speak specifically to what it looks like in the student and teacher experience. But one of the reasons I start with a specific tool is because I think that, largely, edtech has been really unsuccessful in solving for what we need to operationalize innovative school models. And Futre has been a nice change of pace for us because it is truly a tool that is building for the child, versus fitting a child into a tool or larger system. And I think that the way in which we're using it with our young people can work in many H2 and H3 model contexts, because it's able to give us real-time data about our young people and then allow us to build their student experience based on the data that we have about them. Dan, can you introduce Michael a little bit more to Futre and how we're using it at Summit?

Dan Effland: Yeah, absolutely. So Futre, right now, we're using with our juniors and seniors, although we anticipate starting younger in the coming year. And right now, our juniors are really using it to do a lot of career exploration, which the tool excels at, really exploring very deeply different possibilities, and then what those possibilities mean as far as what they need to be working on now, or experiences they need between their current point A and their future point B. And then our seniors are using it to get more concrete: what really is my next step? What does that mean? What is the thing I'm doing immediately after high school? I think we deeply believe this, and will proudly say it: it is best-in-class career-connected learning. It is. Absolutely. When I do focus groups, when we do alumni data research, it just comes up over and over again, because our young people actually get out in the community, or within the school building, and really do what we are now calling real-world experiences. We've called them lots of different things over the decades. But one of the things about that, kind of like we were talking about with the resource allocation stuff, is just tracking all of those different experiences; often there are 50 or 60 choices for students at one school when we have those expedition cycles. We're now pulling those experiences onto the Futre platform so we can really start to map what students have been doing, what they haven't been doing, maybe what they should be doing. And then their mentor can take an even more engaged role in coaching them through that pathway. We're really excited about that.

We’re kind of just starting, you know, to pull those on. But I think in the future it’s one of the things that we see that the Futre tool will be really, really helpful with because, you know, young people need coaching as they’re figuring out that concrete next step.

Michael Horn: So, super interesting. I actually have two questions, but let me go to you, Dan and Cady, first. And then I have a question for you, Diane. I'm going to put you on the hot seat. But I think we're allowed to do that. It's interesting, you just said something there in your answer, Dan, about the mentor then coaching.

And so, just to put a fine point on it: this works really well because you have a model where there is that function, meeting on a regular weekly basis, right? And so that touchpoint, it's coherent, again, to use that word. But I would love a quick update on how Expeditions has evolved, because I think when Diane was exiting Summit, y'all were in the middle of redesigning it, and I'll be super honest: even though she and I talk basically weekly, I don't actually know the new version of Expeditions. And so I still have a slide in my talk about Summit that says, you know, like, every 8 weeks or whatever, you go off for 2 weeks. So y'all should update us on the current state of Expeditions at Summit.

Cady Ching: Yeah, I’ll respond to 2 pieces. One, with the mentoring piece, that model element does exist. One of the reasons why I personally love Futre is because it takes some of the lift of mentors needing to be the vessel of all career pathways off the human. So when we think about that resource allocation of, you know, people, talent, it’s creating a better, more coherent system for the adult as well, which has been so important because we love to center our teachers as well in the design. And then the Expeditions redesign, it’s been really cool. We’ve been, you know, continuously shifting that program based on what our alumni are sharing back with us, based on how the world is shifting. And of course, AI, as so much a part of our students’ experience today and in the future, has shifted it again. It is non-graded鈥 so this is actually surprisingly one of the most controversial things when we rolled it out to parents鈥 they are not receiving grades on the different career exposure pieces that they try out as they’re with us at either the high school levels or as early as 6th grade in Seattle.

And it’s really about ensuring our students get about 9 career exposures between the time they start with us to the moment they leave, because we know it’s really important for them as they develop their identity to see themselves in different career pathways that are all mapping towards high opportunity where they can build their generational wealth for their family. So it’s probably pretty similar in terms of the time allocation. They’re in sort of what we call their core classes for 6 weeks, and then they’re pausing for 2 weeks to go out, usually in the upper grades, off campus. You don’t see 鈥 when people come to observe this on our site, they’re not actually a lot of kids in the building because learning happens without walls. Dan, what else would you add as you’re going? Dan is quite literally on an expedition tour currently. He’s at one of our school sites right now, and right after this recording, he is going to go in and speak to our teachers. So what else would you add?

Dan Effland: Yeah, I mean, I think that’s an important side of it. I was still in a school leadership position when we transitioned to this kind of redesigned Expeditions, and I just can’t tell you how powerful the experiences are. I can think of so many stories, so many young people, but one in particular: a young man, well, he’s probably not even that young now, he’s 25, but he was a young, young man at the time who was really, really struggling. And this kid was having discipline issues, attendance issues, struggling, like, not necessarily living at home on a regular basis. And we really thought we were going to lose this kid. And he started doing an expedition experience related to culinary arts. After he did that first one, he did a second one, and then there was kind of a sequence of them, where the first one was kind of like a survey course. It was at the community college. It was about 25 kids.

Finding Passion and Purpose

Dan Effland: Then he was able to do one where he was actually kind of shadowing one of the actual culinary arts program college students and learning in a second wave. So I’m having a hard time not using his name, but I’m going to keep it out. But I just loved this kid. And he found his pathway. And not only did he find his pathway and ended up going to a culinary arts program and graduating and now works, you know, like in the culinary arts, you know, scene in Seattle, his attendance improved, his grades went up, his connections with his mentor, with his teachers, with his peers, which were, you know, fraught, got better and better. And he became a healthier human because purpose and passion and having a pathway is essential for all of us. And we’re at a time when, you know, you can read about this everywhere, there’s studies, our young people are really searching for that clarity about purpose and pathway. And when you see it, I mean, it’s just like Cady said, it’s kind of hard, like it’s not a good thing to tour because the kids are mostly out in the community.

Dan Effland: But when you have the privilege of being a school leader and you see these kids over the years and they do their cycles, you just, the impact is unbelievable. So yeah, I just wanted to, yeah…

Designing Education for the Child

Michael Horn: No, the anecdotes always make these things so much more powerful. And through your story, you can hear him building a positive identity of himself, right? And that’s incredible. Diane, something Cady said made me think of it, which is, obviously, folks who listen to us know that you’re the entrepreneur behind Futre. I now understand why it was originally called Point B, based on Dan’s language. But she said something interesting, which was that a lot of edtech has not helped the launch of new model design, right? And the why has sort of been obvious to me: the market is schools as they are, and venture capital wants big markets, so it’s this sort of reductivist thing that happens. But she said you’ve been designing for the child, and so you’ve been able to escape that, and I wondered if you might want to reflect on that, because I imagine it is still hard, since schools are still the conduit to the kids. So what’s the advice, or what have you learned through navigating that?

Diane Tavenner: Well, I mean, so much of what Dan and Cady have just said is so important. And I think one key thing is, you know, I sort of set out to build Futre as an edtech partner that did things differently than what I experienced when I was sitting in the seat that Dan and Cady are in. A core value of our company is that how we do the work is as important as the work that we do. And how we do the work is very much co-building with schools and leaders and students. So, you know, we are out in the field working with students and teachers and people like Dan and Cady literally every other week. We are literally co-designing and co-building what happens. And so what you just heard, that Futre is being designed to help young people build this identity over a 10-year journey, I mean, that’s unheard of, I think, in any sort of tech market.

People don’t think about that. We have real outcomes that people are aiming towards, and most tech products just look at something that exists and try to make it more efficient or slightly better. They don’t think about the integration of it, the flexibility of it, how it will be used by the adults. As an example, they just told you Futre can be used both in individual coaching, mentoring, advising and counseling, and with groups of students in a classroom, and it’s literally designed to support both of those. And I will say the inclusion of really supporting real-world experiences came directly from our engagement with our school partners and our students. That emerged as this real need. We were watching people literally running around schools with laptops on their arms and all these spreadsheets, trying to organize. And so we have co-built these elements together.

But you’re right, the incentives in the business side of things are not to build this way. And so, you know, like always, we’re going to see if we can prove that wrong and say, no, when you do build this way, you not only get better outcomes for young people, schools and teachers and educators, but you also can be a successful, scalable product.

Michael Horn: So certainly a more enduring product if you thread that needle, right? For sure.

Cady Ching: Yeah, exactly. And I think it also speaks to why it’s so important for Dan and me to sort of pull together a coalition of the willing with other operators. One thing we haven’t spent that much time talking about, and I know we’re almost at time, is how hard this work is. It is challenging, and we have so much to learn. We are not perfect. We are learning every single day. We are constantly seeking out other school systems that have similar visions for education, and we’re trying to learn from them. We’re trying to get out onto their campuses and be in community with them, because we know that if we want to build something that’s enduring and lasting and maximizing impact on the number of students in our country, or even globally, we have to build for the students of Summit as well as all students.

And I think that’s what’s most important for me as I set out to lead some of this work: if it only works at Summit, it’s not good enough. And what we’ve learned about leading change at scale is that we need a shared purpose for what school is actually for, and that belief that it’s possible to build a system for that purpose, which is actually no small feat. And it’s why we’re spending so much time building what I would call a coalition of the willing, which is educators and systems who agree on our common destination before we start building the actual tools. I think my core idea is that beliefs come first, model comes next, and then the tools come last. And when we get that order right, that’s when scale can become possible.

Summit Learning: Model vs. Technology

Diane Tavenner: Cady, I want to double-click on what you’re saying because, you know, you talked at the top of this about how Summit Learning had really scaled across the country to 40 states and, you know, 100,000 students, etc. But Dan, you also said the technology, the Summit Learning platform, was not the model. It is not the model. And the model has really taken root even as that particular piece of technology has gone away. That said, I do know that you both believe deeply that having an aligned core technology as the infrastructure, the thing that, I think, Dan, you used the word guardrails, puts up the guardrails and the support for the model, is profound. And I know that you’re in conversation with other folks who’ve done Summit Learning, for whom it’s taken root as well, but who are having a hard time really keeping that model intact. So talk about the need for that infrastructure, the role that it plays, and what you think it might look like in 3.0. And Cady, you just said it: no one’s going to build technological infrastructure for a single school or a single school system.

And so there has to be this coalition.

Cady Ching: We have to create the market.

Diane Tavenner: Yeah. And so talk about that, because the market generally is not very coherent, and as I sit on the other side, it can be really confusing and hard. So how are you guys thinking about that?

Enabling Learning Through AI

Dan Effland: Yeah, I think this is something we’ve started spending more and more of our time on as we’ve gotten clearer in the work with our students and caregivers and educators this fall. We’ve gotten clearer about where we’re going. There is this need, which is that technology is not the model, but, you know, there’s a reason we talk about time, talent, and technology as the big levers with resources. It is a huge enabler. And I think the possibilities with AI as part of that technology infrastructure make it an even stronger enabler. So I’ve already talked about the idea of dynamically reallocating resources, which I love bringing into conversations with educators, because I think sometimes it’s not, like, the shiniest thing to talk about, but we know that getting kids the right thing at the right time in the right sequence is often the difference between learning and not learning, between progress and not progress, and between finding that pathway and not finding it. And so, at a high level, when we’re thinking about that infrastructure, we need to make sure that we have a really rich, you know, amount of data.

And there’s a lot of work to be done there. Our school systems historically have not put data together in ways where you can create what a technology person would call a data lake, where you can really access it as you need it. And then the next element is going to be a really robust knowledge graph that is not just academic standards. It’s got to be much broader than that. And then, of course, the way that AI would then interact with that to allocate and think about your resources. And I’ll share, too, when we think about resources, I generally think of everything as a resource. My time is a resource, Cady’s time is a resource, our educators’ time is a resource, curriculum is a resource, YouTube is a resource. Anything that can help a young person move towards those outcomes, we think of as a resource, and how can we constantly repackage those and get them in the right order while holding onto the vision? Because I think there’s a version of personalized learning that I would call individualized learning.

That’s not what we’re talking about. I believe this has to happen deeply in community and with really strong relationships and human connection. And so personalized learning is actually more complex when you’re committed to maintaining community and relationships, because you’ve got to figure out configurations of young people, and not just put everybody separately on a computer because they each have a particular pathway.

Cady Ching: And that’s what we’re seeing: we’re seeing people just run, sprint towards an outcome without doing the diligence. And I think it’s resulting in a lot of binaries, where you’re either tech-forward or you’re human-centered. There is a way to bring those together and build a model that’s doing both, and that’s what we’re setting out to do.

Dan Effland: Yeah. There’s another binary, too, that we haven’t talked about, but we should stamp here, which is this binary of, like, real-world readiness or academic foundations. We now have these camps: we’re all about academics, or we’re all about the real world. And when you talk to students and caregivers and educators, no one thinks it should be an either-or. That’s the scarcity mindset we’re often in as educators. And we’re deeply committed that our young people will be prepared with college-ready academic foundations and real-world readiness, which for us means habits of success, communication, collaboration, executive functioning. All of that has a purpose.

Diane Tavenner: Yeah. One is, as Dan, your story of that student showed, the sense of purpose, which is connected to what my life will look like in the future, really is what drives everything for a young person, right? It’s how they’re forming their identity as they build that vision. It’s what motivates them to stick to the hard work every single day on this journey to get where they’re going. So yeah, I think what you’re up to is really critical. I hope that a lot of schools and systems engage with you to create this demand in the market for this type of infrastructure, dare we say, you know, Summit Learning Platform 3.0 as well. Because I think it’s really hard to conceive of a post-AI model that doesn’t have that real infrastructure.

And I know you all haven’t seen it or found it yet, but continue to make strides in bringing it to life.

Michael Horn: This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: What will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers and for civic thriving? Learner Studio is sponsoring this season on AI and education because in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.com.

So maybe a good place, Diane, to wrap up.

Should we pivot to our Before We Let You Off the Hook section? Cady, Dan, we have a tradition here where we talk about something we’ve been reading, writing, watching, listening, whatever it is (not writing; listening to), and eventually I’ll get my verbs correct. We often try to keep it outside work, but we often fail. So, Cady, you want to go first? And then Dan, we want to hear what’s been on your playlist or bedside table, and then Diane and I will wrap it up.

Cady Ching: Yeah, sounds great. I taught my 7-year-old what it means to brain rot. I don’t know if you’ve heard that term, but it’s where you just sit on the couch and kind of watch nothing for hours and hours. And we did do a Spider-Man and Avengers binge this past weekend. So that is something I have been watching a lot of. Reading is going to be hard for me to separate from the professional. I’ve just been really deep in leader succession. I think to do this work, you need a really strong talent and leadership pipeline.

And so I’ve been in HBR. I check the Marshall Memo every week to see what, what they’re pulling out, to really think about how I’m leading personally, locally, individually, but then also what the sector needs. Dan, I’ll pass it to you.

Dan Effland: Similarly, like the kind of first answer on my mind is just this fire hose of like white papers and podcasts about education and AI.

Cady Ching: And then he screenshots them and sends them to the whole team.

Dan Effland: Yeah, I drive everyone nuts with them. But I do have a maybe more fun one on the personal side. I’m kind of finally reading the Foundation series, the Isaac Asimov classic sci-fi. It’s honestly about connection for me. My siblings are sci-fi readers and I’m very late to the party. And my father is retired now, and one of his main activities as a retiree, it seems, is to reread everything Asimov ever wrote multiple times. And so for Christmas this year, I got a stack of these really great Half Price Books paperbacks of all the Foundation novels, and I’m starting to work through them.

And we have a text thread about them. It’s a wonderful story, it’s very complex, and it certainly does also make me think a little bit about the future of our world and AI and where young people fit in that, but it’s also just been a really fun way to connect with the family.

Michael Horn: That’s cool. Wow. What about you, Diane?

Diane Tavenner: Well, picking up on that. So first of all, apparently this is not going to be a novel recommendation, because this Apple TV series, I guess, is the most watched at this point. But we watched Pluribus, which was created by Vince Gilligan, who, yes, Breaking Bad, yes, Better Call Saul. I didn’t watch either of those, but I was a huge X-Files fan…

Michael Horn: Back in the day.

Diane Tavenner: OK. And so there is very much some X-Files feel here in Pluribus. But to what Dan said, and I think Foundation is related, I just find this series to be so provocative in the questions that it’s bringing up and the contemplation of where we’re going as a society, how the choices we’re making each day might affect that, and what we actually want. And I told you I would report back on my goal: I did finish Ian McEwan’s novel that I pre-promoted. Yeah, yeah, yeah. It was everything I expected and more.

It was just extraordinary. And I did both of those over the holiday. And I will tell you, I feel like I’m sort of in surround sound right now, asking these big existential questions along with everything from what’s happening in the news on a day-to-day basis to all the work in AI. But I would highly recommend it. Super provocative and interesting.

Michael Horn: Perfect.

Diane Tavenner: Perfect. Crazy. Like, you never know what’s gonna happen next.

Michael Horn: That’s fun when you can’t predict it coming.

Diane Tavenner: Yeah.

Michael Horn: Yeah. Yeah. I was gonna say, so the brain rot theme that you brought up, Cady, I mean, we talk about it all the time with our 11-year-olds here at home. This is not where I was going to go at all with this, but something one of my kids said made me think of the Animaniacs theme song, if you all remember that cartoon from back in the day, and I pulled it up and showed it, and my wife just dismissively said, this was brain rot when we were growing up. So, there you go. The one I’ll say is, we all went with another family and saw Wonder at the American Repertory Theater. Many people may know the book Wonder, which follows the story of Auggie Pullman, a 10-year-old who has Treacher Collins syndrome, which presents as disfiguration of the face, and what going into a school environment for the first time does. And there’s a movie about it as well, but now there is a musical too.

And Diane, you will not be surprised: I was crying from the opening number, and I kept it up through the whole thing. So I was true to form. That’s a good one to cry over. It was good; I represented well. But it was fantastic. We’ll see if it makes the jump from sort of off-off-Broadway to something bigger, but until then, if you’re in the Cambridge area, definitely check it out. And for all of you, just huge thanks, Cady, Dan, for joining us and giving us a peek under the cover of what’s coming next at Summit and, as usual, the broader ecosystem, which I admire so much about the work you all do at Summit. It’s not just our model, but how does our model spur this greater change across education?

So huge thanks for joining us. And for all of you listening, keep the questions, comments coming. Diane and I feed off them, and we really appreciate all of you. We’ll see you next time on Class Disrupted.

Disclosure: Diane Tavenner founded Summit Public Schools and served as its CEO from 2003 to 2023.

This episode is sponsored by LearnerStudio.

The AI Behind Flourish Microschools

Thu, 26 Mar 2026

John Danner, the cofounder of Rocketship Public Schools and now the founder of Flourish Schools, an emerging network of AI-native microschools, joined Michael Horn and Diane Tavenner to share what’s now possible in school design in the age of artificial intelligence that wasn’t previously possible. Danner explained how Flourish is leveraging AI to deliver foundational skills like reading and math through conversational tutors, freeing up teachers to focus on building relationships and nurturing students’ passions and “superpowers.”

He also shared how they’re using the technology to provide real-time assessment and feedback on student projects. The conversational models can be much more powerful, he says, than previous edtech applications.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you again for our continuing conversations on AI.

Diane Tavenner: You too. This one’s going to be a fun one. You know, our most recent episode, we talked with Alpha School founder Mackenzie Price. Most people have heard of Alpha at this point. It’s getting a ton of attention. And so what we tried to do there was really move beyond the talking points and the marketing to really dig into the model itself, including specifically how they’re using AI, which is turning into a bit of our quest this season. And so this conversation today is a part of that exploration on who’s building what I would call maybe AI-native school models, if anyone. And, you know, what might they look like? What are they starting to look like? And it’s a really fun conversation today because we get to have a chat with an old friend.

Michael Horn: Yes, that is indeed correct, Diane. Today we’re going to get to chat with none other than John Danner. John, for those that don’t know him, has had a decorated career in tech before turning to education, as he co-founded and led NetGravity, the first ad server company, I believe. And after taking it public, selling it to DoubleClick, John went back to school and then became a teacher, and he taught in Nashville for a few years there. And then I think a lot of folks know him because he co-founded, of course, Rocketship Public Schools in 2006, which we, of course, talked about also in our last episode. But Rocketship was a buzzy school for a good while there, marked by its student outcomes, its use of technology, its expansion. And then after leaving Rocketship in 2013, John did a number of other things, including founding an online math tutoring company, creating some very interesting education investment vehicles and more. But I want to skip ahead to his most recent venture, Flourish Schools, which is what we’re going to hear about today.

Michael Horn: So, John, hopefully I did some justice to the bio, but, welcome. It is always good to see you.

John Danner: Thank you, Michael. Great to see both of you. Long time.

Michael Horn: This is going to be fun. This is going to be fun. So let’s start with grounding our audience. My assumption is that a lot of folks know Rocketship and what you did there. Far fewer know about the Flourish Schools model itself and what these schools actually look like. So maybe give us the basics, like what is Flourish Schools, how many of them are there today, how big are they, what’s the grade levels, what does a day in a student’s life look like at these schools? You know, paint the picture for us.

John Danner: Yeah, yeah. So we started Flourish about a year ago. We opened our first school last August. In Nashville, one microschool so far. They’re middle schools, so grades 6 through 8. I’m out in Phoenix today. We’re opening a couple more schools in Phoenix next year, next August. And I’d say the reason for doing it, you know, Diane knows this well, like doing schools is quite difficult work.

Enhancing Foundational Learning with AI 

John Danner: I often prefer being on the software side where, you know, life is good. But, you know, schools are hard work and sometimes you have to do them. I think the big motivator in starting Flourish for me was that I had started a couple of AI companies, Project Read, probably the most notable doing reading, which is in a lot of classrooms. And I just noticed that most schools are using AI in a very supplemental way right now, very much the same way they used edtech. And that bothered me because, you know, in reading, for example, I think there’s a pretty good argument that AI for reading is going to be better than the best human reading teacher within the next year or two. It’s not a long way off at all because teaching reading is really hard. Training teachers to teach that is hard. It’s hard to be patient with kids when they’re making lots of mistakes.

And it’s hard to remember everything a kid has ever done when they’re reading with you, right? All of which is just default for AI. So, you know, in watching Project Read roll out and seeing everybody kind of use it in those last 15 minutes in the class, when a kid was done with the assignment and needed to do something else, I was like, you know, that doesn’t seem like how AI should affect schools. It should be used more strategically: what can AI do, and therefore what do you do with teacher time? For me, teacher time has always been kind of the scarce resource. It’s like whatever teachers focus on is really what schools do. No matter what schools talk about, it’s like, OK, what are your teachers doing? That’s what’s going to have the most impact. And so at Flourish we started with the assumption that what we call foundations, kind of the basic skills, reading, writing, math, are going to be better taught by AI.

The way we kind of look at it is, if you think of Tier 1, Tier 2, Tier 3 instruction, it’s really the move from technology as a Tier 2 or Tier 3 product to a Tier 1 product. So can you use AI to do kind of Tier 1 basic skills and standards-based instruction? That was what we did from day 1 at Flourish. We’re 6 months into it now. I would say the lesson learned is, of course, you’re going to have students in any school who need more. We have several special ed and several ELL students who need more time and attention. But during our foundations block, which is an hour long, teachers have time to work with them one-on-one. And a teacher working with a student one-on-one on reading or whatever is like a luxury that no other school has, because normally you can’t have teachers doing that. But when all the other kids are making great progress with AI, having a teacher spend that time, that luxurious time, is actually possible.

AI’s Impact on Schooling

John Danner: So the fundamental thesis is that we can do that, in a way where it’s not what our teachers are doing and spending all their time preparing for and teaching during the day. And that allows us to kind of come up with a new curriculum. And I think actually, you know, you guys want to focus on AI and we should. I think the actual interesting question with schools is, once you make the commitment that AI is going to do a lot of this basic instruction, then you’re confronted with the now-what problem, which is like, oh gosh, what’s school for moving forward? And I guess that’s what we’re kind of excited about: we’re in this super serious time of change for students. They’re not going to grow up into a world that we all experienced. You know, my daughter just got out of college. She was a pre-med, but didn’t really want to be a doctor. She gets out in the job market and gosh, there are no jobs.

And all those other things that she learned along the way, about hustle and, you know, you’ve got to go put yourself out there, played out, and she found a job. But boy, if you had just spent all your time in school learning algebra or whatever, she wouldn’t have done well. So I think, you know, our point of view at Flourish is we talk about 3 things, mainly. Relationships: these are middle schoolers, so how do you get along with other people? We do an hour we call circles, which really is as therapeutic as it might sound, where kids are sitting in a circle talking about their feelings, how other kids affect them, et cetera. And for many, many of our students, I’d say it’s pretty mind-blowing to actually understand how other people are thinking, you know, as you’re talking and saying things and stuff like that. Really powerful.

So relationships are a big piece. And then we talk about two others, superpowers and passions. So superpowers is kind of our word for what people have called soft skills. I hate the term soft skills because it’s kind of denigrating in a world of like standards-based instruction. Oh, that’s the other stuff that, you know, makes you a human, but it’s not nearly as important as high school chemistry or whatever. Like, we actually think it’s the opposite now that knowledge is pretty abundant and accessible, like the things that make you human are the more important things. So, do you have agency and curiosity and these other things that make you awesome? That’s important. And then the passion side is really, what do you want to do when you grow up? What are you excited about? What are your big interests? Which, you know, as you know, for upper-income families tends to happen at home.

You know, you’re sitting around the table or you go, you know, on a little family field trip or whatever, and kids are discovering lots of different things that they might be excited about. That happens a lot less in working class and lower income families. We’re purposefully mixed income. We took a page out of your book for that, Diane. I think that’s really the right way to do this. And so for our kids who are, you know, working class and lower income, we think discovering what the world is and what you might want to be in it is super important, especially in middle school, so that you enter high school with some idea of what you’re excited about and some kind of path you might want to pursue. Even if that changes, that’s OK; you’re not just kind of clueless showing up in high school, which, you know, a lot of kids are.

Diane Tavenner: Yeah, super helpful, John. You know, one of the ways I’ve been trying to have conversations with people about what these sort of AI-native models will look like, or can look like, or do look like, is I don’t want to have a conversation where we compare what they’re doing to, like, the old industrial model classroom, right? That’s not useful to me.

John Danner: We’ve had that conversation. Yeah.

Diane Tavenner: So I keep using Rocketship and Summit, because I know them the best, as, like, best-in-class personalized learning models where we were doing the very best we could at the time with the resources we had, and doing a lot of what you just described, right? Like, I’m assuming circles maybe comes out of Valor, which, you know, it has. So, like, a lot of that great stuff we were doing before. So what I’m really interested in, and you’ve alluded to this, I think, is shifting Tier 1 instruction out of the classroom model, with the AI doing that. But let’s dig in a little deeper. Literally, what’s possible today that we just didn’t do 10 years ago and now can be done? And what does that specifically look like in the model?

John Danner: I think the big change here is really one from point-and-click to conversational, right? That was the eye-opener for me, really, back in the ChatGPT moment: it became clear almost immediately that a conversational agent would be able to work through things with a student in a much better way than what we all did with edtech back in the day. So, you know, we call it personalization, but there’s a difference between a program more or less knowing where you are and what you need versus what an AI does, which is it knows everything. You know, like in Flourish, we more or less pour everything about a student into it. We have transcripts of everything students say. The AI just is all-knowing about what’s happened with that student at the school. And so when it’s personalizing, it’s at a 100 or 1,000 times deeper level than the basic categorization that edtech used to be able to do. So I think it’s much more aware of what students need. And I just think the mechanism of talking to a student conversationally is so much better than navigating through a bunch of screens and the stuff we used to do.

Diane Tavenner: So I’m assuming then you’re building your own. It sounds like you’re building, you called it curriculum, but like that tier 1, because I have yet to see sort of off-the-shelf products that are really, that I would be like, yeah, they’re great. They can do the tier 1 instruction. Talk about what you’re building, what that looks like for middle school kids, you know.

John Danner: Yeah, right. And remember, we’re 6 months old, so anything I tell you is a total work in progress. But, you know, we’ve got good people and we’re working pretty hard on it. So I’ll tell you where we started with this and then where we are now. We had this idea that we’d have an agent on our side that was very good at sending kids to the right place to get the right help, right? Kind of like a hybrid between the old edtech world and this AI-driven world. And we pretty quickly discovered the kind of things we had discovered at Rocketship, and I’m sure you did at Summit, which is there’s so much friction involved in manipulating another program that it’s basically not worth it. And so it probably took a couple months for us to realize this was a waste of time.

Tutoring via Adaptive Dialogue

John Danner: And so, for the way our system works for a student, I’ll tell you where it is today and then where we hope to be in 2 months. So today, the way it works is that we have a pre-assessment where we’re looking for what a student knows. Based on what they know, they enter a conversation with our AI. We often will have a 1- or 2-minute video of just what that thing is, kind of an old edtech-type thing, right? Just because I think a framing is often helpful for a new concept. But the majority of the real instruction is this dialogue between the AI and the student: OK, well, let’s talk about, you know, two-digit addition, just for lack of anything better. Here’s a problem; solve this problem for me; tell me how you’re doing it. And then basically just digging in as the student doesn’t get it. And it’s so easy to prompt for. I mean, you know, at Zeal, my third company, the math tutoring company, we had figured out all the misconceptions that every student has in math. And so when you prompt an AI with that, OK, here are the 10 likely things that a student’s going to do wrong when they’re doing two-digit math, it just goes, oh, OK, that’s it, and then it goes deep there, right? So if you think about it, it’s very fluid.

It’s very much what a human tutor would do in that case. They’re responding in real time to what that student’s doing and going, oh geez, you don’t really understand how to carry the tens place, so let’s go deeper there, or whatever. So that interaction with the AI happens, and then we go out and post-assess. And so the student’s kind of steering where they want to go and what they want to do through that process. Where we’re going, where I hope to be in a couple months, is that all the pre- and post-assessment is gone. We’re finding that the AI, through that dialogue, has just as good an understanding of what that student is capable of doing as any formal assessment process. And it’s much more natural to just have the student sit down with the AI, you know, when they start, and talk about what they want to work on. And then the AI drills into that and shows them a video and does things like that.

So I think it could feel quite a bit like, you know, a student showing up at a tutoring center and that tutor kind of just working with them. It feels like that’s going to work. But that’s where we’re at with it.
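As a sketch of the misconception-primed prompting John describes, the lines below assemble a tutor prompt seeded with the likely errors for a skill. The skill name, misconception list, and `build_tutor_prompt` function are all invented for illustration; this is not Flourish’s or Zeal’s actual code.

```python
# Hypothetical sketch: seed a conversational tutor with the known
# misconceptions for a skill, so it knows where to "go deep."
# The misconception list here is illustrative, not a real taxonomy.
MISCONCEPTIONS = {
    "two-digit addition": [
        "forgets to carry the tens place",
        "writes the full column sum in one column instead of carrying",
        "lines numbers up by leftmost digit instead of place value",
    ],
}

def build_tutor_prompt(skill: str) -> str:
    """Assemble a system prompt that primes the tutor with likely errors."""
    lines = [
        f"You are a tutor working on: {skill}.",
        "Pose one problem, ask the student to explain their steps,",
        "and probe deeper when their reasoning matches a known misconception:",
    ]
    lines += [f"- {error}" for error in MISCONCEPTIONS.get(skill, [])]
    return "\n".join(lines)

print(build_tutor_prompt("two-digit addition"))
```

The point is the prompt structure: the model is told up front which wrong turns to listen for, so a student’s explanation can be matched against them in real time.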

Diane Tavenner: Is that voice or are they typing or both?

John Danner: We’re doing typing now. We’d love to do voice. We started there and we really worked hard on it. I would say that the biggest problem with voice for us is that we have never figured out the noisy classroom problem. Very hopeful that somebody does, because even if you’re off in a corner of a classroom or even outside in the hallway, the AI hears everything. And so, if you think about it, when you’re in one of these sessions, the AI hears something and somehow inserts it into the conversation. That’s just weird. It kind of ruins the whole flow.

So it’s easier with middle schoolers to do kind of a text-based one right now. But I, you know, what I’ve told the team is I think the main interface for AI will probably be audio at some point. Like it’s just the most natural way. And so as the industry kind of builds better and better models for that, I hope that this problem gets solved and we can go to audio.

Diane Tavenner: That makes sense to me. And do you then have a knowledge graph underneath that? So even though the student is sort of, like, flowing where it makes sense to them, at the end of the day, you have kind of the macro plan of where you want them to go.

John Danner: And yeah, so we built a super elaborate one for Zeal and unfortunately are more or less rebuilding it now for all of our stuff. Yeah, I think that’s right. I mean, as you guys know, the real challenge with AI is often that it’s so good in the moment at these things, but you kind of have to bring it back to reality sometimes. And so, you know, having a prompt that says, hey, pull the knowledge graph and see what’s the most important thing to work on is helpful. It’s kind of like this, you know, savant type tutor that can help a kid in the moment with anything, but kind of loses the picture of like what’s the most important thing to do. So you kind of have to bring it back.

And I think the knowledge graph is the way to do that.
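John’s “pull the knowledge graph and see what’s most important to work on” amounts to checking prerequisites before picking the next skill. Here is a minimal sketch with a made-up three-skill graph; nothing below reflects Zeal’s or Flourish’s actual graph or code.

```python
# Toy prerequisite graph: skill -> list of skills it depends on.
# The skill names are invented for illustration.
PREREQS = {
    "single-digit addition": [],
    "two-digit addition": ["single-digit addition"],
    "multiplication": ["two-digit addition"],
}

def next_skill(mastered):
    """Return the first skill that is unlocked (all prereqs mastered)
    but not yet mastered itself, or None if everything is done."""
    for skill, reqs in PREREQS.items():
        if skill not in mastered and all(r in mastered for r in reqs):
            return skill
    return None

print(next_skill({"single-digit addition"}))  # prints two-digit addition
```

This is the “bring it back to reality” step he mentions: however fluid the in-the-moment tutoring is, an explicit graph keeps the session pointed at the most important unlocked skill.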

Diane Tavenner: John, how does this connect with, I know you’re very committed to project-based learning and sort of that approach, which you know that I am as well. And, you know, it sounds a little bit like what you’re describing. You know, at Summit Learning, we had the playlists, where you were doing the content knowledge. What you’re describing, I think, is a stronger version of that and of what AI can do. How are you connecting it to the projects? What’s the intersection there? What’s going on there? And are you using AI in the projects?

John Danner: Yeah, the answer to the second is definitely yes. And let’s talk about that in a second. So we have a theory as a school system that’s probably the opposite of, like, my alma mater’s approach. I’ve been talking to Bellarmine, my alma mater in San Jose, talking to teachers about that. And, you know, AI is a problem for a lot of schools and teachers, right? Like, it’s the cheating and stuff like that. We have basically the opposite approach, which is: assume any kid can use anything that will help them read, write, understand, research better, and then uplevel what you’re teaching, so that you assume that, yes, everybody’s writing is going to be perfect now. Don’t worry about that.

That’s not your job anymore. So with projects, you know, the link really is that when you’re in a project, you’re trying to apply knowledge to build something, to do something. And it’s extremely common to not understand something well enough to do that well. And so you need to go off and kind of research and understand it. So the link that will exist that doesn’t exist yet, which I’d like to see: Foundations lives in its own block right now at Flourish, but we’d like Foundations to be accessible basically all the time for students, so that that’s the main way you research as well, through kind of an AI interface. So that’s the ideal. Right now what happens is that a student kind of struggles, and they go off and use Gemini or something for things. And then the AI knows, because it’s paying attention to the project and what’s going on.

‘Oh, this student struggled with this,’ and then in Foundations that kind of bubbles to the top the next day. But, like, why wait? Just make it real time. If a student’s struggling with something, just go ahead and do it. We do have to figure out the tier 1 versus tier 2 of this. Like, if a student’s really struggling and they’ve got a real issue, and you just wipe out project time doing that, that doesn’t feel right either. So we’re gonna have to figure out what level of intervention happens if, you know, they’re still not getting it. But certainly at least the tier 1, like, ‘oh, I just don’t know about this, let’s learn more,’ should happen through that Foundations system, we think.

Diane Tavenner: That makes sense. Yeah, that makes sense to me. Tell me about what the educator is doing in these times.

John Danner: Yeah, I mean, I think that’s the most important question, really. And I know for many, many teachers, the concern is, gosh, well, maybe you just don’t need me anymore or something. And that’s just completely not true. I mean, I noticed this at Rocketship: people go into teaching because they love kids. That’s, you know, the common thing that you always hear. Some people go into teaching because they want to be content experts, but not that many, at least at kind of elementary and middle. Like, it’s still really driven by ‘I really wanna connect with kids and be with kids,’ not ‘I wanna be the best reading teacher’ or whatever. And so, you know, when you kind of push a lot of this content knowledge and instruction to AI, what really happens is a little bit of what I was describing with tier 2 and tier 3 during that time, where a teacher now has a lot of time. So, you know, a lot of the stuff is going on. Project-based learning is nice that way.

Building Teacher-Student Connections

John Danner: Kids are working on things, which feels kind of like a big Montessori classroom or whatever, where everybody’s being industrious and getting things done. But, you know, the question is always: OK, so what’s the best and highest use for the teacher at that point? So I think, you know, our opinion in general is that building trusted relationships is the most important thing you can do as a teacher, right? Like, anytime you think about teachers that affected you, it’s because for whatever reason they spent the extra time to get to know you, understand what you were going through, and became kind of a trusted friend and advisor. And I think buying time back to allow teachers to do more of that is by far the highest value. Of course, interventions and things like that are awesome. Having students stretch to do higher-order thinking once they’ve finished a project, all that’s great, but I think it’s all in service of making that connection between our teachers and our students, such that the student is more excited and interested to, you know, learn and think with that teacher about other things, especially with superpowers and passions and things like that. I’ll just take a brief aside: you know, we have these report cards that have superpowers on them. And so they say things like organization or self-awareness or whatever. So you can imagine our parent-teacher conferences are pretty amazing, because while a parent is like, yeah, I don’t really know much about middle school math and frankly don’t care that much.

Boy, when you bring up self-awareness or something like that, they can go on for a long time. And so you have these really deep discussions about these kinds of things, and kids, by middle school, certainly by high school, aren’t really listening to their parents about these things very much. They’re kind of sick of hearing this. So I really do think schools have a way better chance of influencing how children are doing on these things, especially around superpowers and passions. But that requires trust, and trust, you know, is hard to build. So we think that the best thing for teachers to be doing is getting into deeper conversations with students and talking to them about, you know, what their interests are, what they like. And building that in the hope that they have influence over that student’s trajectory.

Michael Horn: Well, so, John, I think this is actually a perfect transition into the other thing that AI is doing to free up teacher time for that, which is, as I understand it, at least from what you’ve written, that you have this AI coach that is quite involved in the project-based learning piece of this equation. And I think in two distinct ways. So maybe talk about that.

John Danner: Yeah, I mean, again, work in progress, so I’m not super happy with how it’s being involved right now, but I’ll tell you what I want it to be doing well. So I think, you know, and Diane, you lived this, that the real challenge with project-based learning is there’s this huge amount of really mechanical stuff that happens, where students are confused about what they’re doing, or they’re tired and not motivated, or whatever. And you watch project-based classrooms, and actually like 80% of the teacher time is walking around doing that stuff, where they’re like, come on, Joey, let’s get going, you know, blah, blah, blah. Of course there will still be some of that, but to what extent can you create a really awesome thought partner that does a lot of those things? Like, hey, Joey, you know, what we need to focus on here is this. Have you thought about, like, you know, kind of re-engaging the way a good teacher does? Because if you can free them of a bunch of that really mechanical time, I think not only does it free time, it also frees your mind up as a teacher to think deeper and look for relationships and, you know, these kinds of things that we really want teachers to do. So I think that’s a big piece of what we’re hoping this coach does. The other thing it really does for us, and you asked about this before as well, Diane, is it listens. So we’ve got mics all over the place, students are talking, it’s all anonymized, but basically the system knows what bucket to throw all the comments that students are making into, etc.

Teaching Soft Skills

John Danner: And when you think about superpowers, these soft skills, one of the other difficult things in that kind of curriculum and approach is, and you see it in kind of SEL-type schools all the time, it kind of devolves into playtime sometimes, where it’s not as rigorous. And what AI can really do there is look for evidence of, you know, perseverance, for example: when did the student show that they didn’t just stop, that they asked the next question and kept going? When the AI can provide those examples in each student’s superpowers report card and the teacher can review them, that is so helpful when it comes to pushing students to improve in these areas. Teachers really have to know where everybody is: where is John on these different skills, where should I focus? And so helping to provide data so that teachers can do that is really, really important. I would say it’s pretty good. Here’s one thing that kind of surprised me. We did this like a month and a half ago, the AI assessing these. We have 24 of these superpowers across all the students in the school. And we had the AI rate students on a scale of 1 to 5, and then 3 teachers rated those same students.

And it was only off from the lead teacher by about 10%. So, you know, to me, that’s close enough. It’s the kind of thing where, sure, a super-expert teacher can absolutely do a little bit better. But we kind of want to get it to the point where the teacher’s like, yeah, you know, I pretty much trust this. I’ll look at the evidence, but more or less, it says that, so OK, what should I do about that?
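One plausible reading of “only off by about 10%” is the mean absolute difference between AI and teacher ratings, normalized by the 4-point span of a 1-to-5 scale. The sketch below illustrates that arithmetic; the scores are invented, and nothing here reflects how Flourish actually computes agreement.

```python
# Illustrative agreement metric for ratings on a 1-5 scale:
# mean absolute difference, expressed as a fraction of the scale's
# 4-point range. The score lists are made up.
def rating_disagreement(ai, teacher):
    """Mean absolute AI-vs-teacher difference as a fraction of the range."""
    diffs = [abs(a - t) for a, t in zip(ai, teacher)]
    return (sum(diffs) / len(diffs)) / 4  # scale 1..5 spans 4 points

ai_scores = [3, 4, 2, 5, 3]
teacher_scores = [3, 4, 3, 5, 4]
print(f"{rating_disagreement(ai_scores, teacher_scores):.0%}")  # prints 10%
```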

Diane Tavenner: And John, that assessment from the AI was just sort of that natural capture of all they’re doing, and assessing based on that? Yeah, to me, then, assessment is a no-brainer. And I think it’s a conflict of interest for teachers to be assessing, quite frankly, but that’s another conversation.

John Danner: I mean, the other point here, right, is that when you do assessment that way, I think it’s both more valid and stops taking classroom time, right? It just happens naturally. And that’s how it happens in the real world too. It’s not like you sit down and...

Michael Horn: You go, right, we don’t stop and say, now here’s your time.

John Danner: You don’t give somebody a 5-question assessment every 6 months or so. It’s crazy.

Diane Tavenner: Yeah, yeah. So, can I just play back to you what I think you’re saying, just to make sure I’m getting a real picture of what’s happening, or what you’re moving toward happening? And you’ve only been at it for 6 months, but you’re making pretty quick progress, it sounds like. So, like, if I’m a student in my project time, and we all know this happens a lot, there are some kids where, literally, you know, the teacher’s bumblebeeing around, and every time the teacher bumblebees around, maybe I’m productive for that moment, but then the teacher bumblebees away, and then I’m kind of playing or whatever. But AI knows what I’m doing in those in-between times, and so I’m getting some sort of feed or feedback, and the teacher’s seeing it, my family’s maybe seeing it: like, hey, this is what’s going on in your time, and so we’re going to hold the mirror up, give you some feedback, tell you, this is the stuff you could be doing to be more productive. Is that kind of what you’re describing? And if so...

John Danner: Yeah, we’re all going to have that. So this is another thing, like one of the things we think about a lot at Flourish is like, is this different than the real world’s going to be or the same? And I think we all basically need that. Like, you know, if you had a voice that was kind of going like, John, what are you doing? You’ve been doom scrolling. You know, like it’d be pretty helpful, really.

Diane Tavenner: Well, one of the big conversations is about motivation, right? And like, oh, you have to motivate kids to use the technology to learn. But actually, I think you’re flipping the script here and saying, no, the technology is literally helping young people be motivated, because someone’s paying attention, and they’re noticing what they’re doing and giving them feedback on it. And, you know...

Feedback and Rewards Drive Success

John Danner: The feedback thing is the important thing. Basically, if something’s giving you feedback, even if the feedback’s not perfect, it’s so much better than not getting feedback. You know, like the classroom where everybody’s got their hand up and they’re just waiting for the teacher to call on them. That’s a bad place to be. So now you’ve basically got this continuous loop. The other thing I would say, which I think comes almost for free in this world, is, you know, the gaming world has figured out a lot of things they do when you’re doing a pretty basic task to play the game, and you might not be that excited about it, but, you know, they’re setting up rewards. We use badges. So, like, an example is you might do 2 or 3 different projects, and doing those 2 or 3 different projects builds up to a badge. And so the badge is kind of hanging out there, and some other student in the class got it.

And so you want it, and things like that. And those really kind of basic game things are very helpful at different times during the day, right? Like, we kind of all need a little bit of a push. We’re very conscious of intrinsic versus extrinsic motivation. And so projects are a good example where the default is intrinsic. We want students to be working on that project because they’re interested in it, because they want to do it. But there are definitely times where the AI paying attention and kind of prompting, and even, you know, doing some rewarding and things like that, is actually quite helpful for them to persevere.

Diane Tavenner: John, I want to talk to you about, I think you’re the perfect person to talk to about this. So one of the things I hear out there a lot is, oh, the hyperscalers are just going to build this; that’s number one. Number two, most schools and school systems have zero ability to actually build what you’re building. So you’re sort of this unique person, because you sit at the intersection of, like, opening and operating schools and the ability to build sophisticated technology. So: are the hyperscalers going to build what you’re building? Like, how do you think about the building of the technology here for schools?

John Danner: Yeah, I mean, we’d be pretty happy if the hyperscalers built it, first of all. So I think that the main challenge over the next 20 years in education is going to be how quickly we move to a world where students are living in the current world, as opposed to the world of 20 years ago or whatever. And so with these basic things we’re doing, like Foundations, I think it’s important for students to live in that world now. And so what does it take for school systems to move toward that world? I know that your approach at Summit, and our approach at Rocketship in the beginnings of the edtech world, were: hey, let’s just build these kind of basic model schools, and hopefully people will come visit and go, oh gosh, you know, that doesn’t look too bad. I could probably do that as well. So I think a lot of the point of Flourish is creating this proof point where people can come and see and go, huh, that actually works well, and it’s definitely not dehumanizing. I see the teacher interactions with the students as being more human than my classroom. So I think that’s actually our reason for being: to kind of be that model.

And, you know, we’ll build a network and we’ll get as big as we can, but really kind of purposefully influencing school leaders, district leaders, state leaders to think about, like, you know, what they could do as well. On the technology side, I’m generally of the opinion that a lot of this will get easier and easier over time for everybody who’s not at the foundation level. I will say there are some exceptions to that. So, like, with Project Read, with phonemes and graphemes, when you’re doing kind of deeper reading stuff, they may get there. I mean, the AIs may know everything at some point, but there’s not a super strong reason for them to get there earlier. So there are pockets like that that probably will stay specialized for longer. But, you know, as a school, it’s just better for us the faster all of that becomes a commodity.

And the more we can just, you know, get off-the-shelf stuff, like there’s no real joy in building all of this stuff. And for the change to happen, we don’t want people to have to think about all this stuff, really.

Diane Tavenner: Now, I have to ask about scale, because your point that the faster we can get kids to be living in today’s world versus the old world suggests that we need to scale as quickly as possible to get as many kids there. You and I both bear a lot of scars from different efforts to scale both brick-and-mortar schools and influence-type things. This time you’ve gone with a microschool network. You had grand ambitions with Rocketship, and clearly Rocketship’s great and Preston’s done an amazing job since you left, but it never reached the scale that I think you originally hoped. What is your thinking now? Why microschools?

John Danner: Yeah, I mean, putting it bluntly, I think politics killed charter schools, more or less. You know, you look at most high-performing charter schools, and they tend to look more and more like the districts that host them. Like, I look at Rocketships around the country, and sometimes they look as much like the district they’re hosted by as they look like Rocketship. You know, it’s because your authorizer authorizes you, and they have a lot of influence. So it was kind of this cool experiment that at the beginning probably created a lot of innovation, and then over time has kind of been pulled back toward, you know, what the districts are doing. I think that microschools, certainly, are starting in a very different place. The way I think about charters is that the compromise happened right at the beginning: we would like to receive public funding, and for that we will agree to fit into the system.

Whereas the microschool movement kind of started with a different point where the stronger position was taken early on when the laws were formed that like these things are independent. They’re way more like private schools than they are like district schools. And of course, there will be some influence from states and others on that, but nowhere near like, you know, what we saw in the charter world where it was like, you know, I remember the story I always tell is Rocketship had specialized teachers for math and reading in elementary school, which was not normal at all. And I was just tortured for years by districts over this. You know, the main thing was like, no, it’s, you know, a student needs one trusted adult, you know, when they’re that age. And if they have two, it’s going to like, you know, all fall apart, which was, of course, total bogusness. But I had to go through that anyway. Like, you know, that was just time of my life spent arguing something silly.

Whereas with microschools, you just don’t have to argue that. So I think the big question is, what will be the ultimate, like, kind of political destiny of microschools? Will they get capped in the way that charters did? Will they somehow kind of get influenced in a way they aren’t now? Right now they’re pretty great. I mean, you know, you basically build a school that parents and students love and, and you build the curriculum and the program you want. That’s nice. Something you would have enjoyed, Diane.

Reimagining Teachers’ Roles

Diane Tavenner: Yeah, no, I mean, it’s tempting. I will say, Michael’s always so kind, because when we start talking schools, I just take over. So he’s being so patient. The thing that’s coming to me, and maybe this will lead us to wrap up, is, you know, you and I both taught, and were passionate about teaching. And as you start talking about politics, one of the sad elements of that politics to me is that I think teachers get involved in blocking some of these changes a lot out of fear, a lot out of, like, ‘but my identity is teaching a classroom of students and writing great curriculum and, you know, being a hero.’ And I think what you’re offering is a new identity for a teacher that might actually be more aligned with why they got into it in the beginning, which is: instead of judging myself by the quality of my classroom instruction, I’m literally focused on every single kid learning and growing and, in your words, flourishing, right? It’s such a profound...

John Danner: In general, I think that professions that go in the direction of being more human, where the human elements are like the differentiator, they’re going to do so much better. So I, you know, wrote a piece on this. I just think, you know, while most parents would not have counseled their kids to become teachers in the last 20 years, I think that conversation is likely to change because I think it’s going to be both a more enjoyable job and probably more resilient to kind of the whole AI apocalypse than most jobs.

Michael Horn: Agreed.

John Danner: Yeah.

Michael Horn: I think that is a good place for us to part. But John, I feel like we have like 10 other questions sitting in our doc that we could have dug into with you. This is fascinating. It’s really cool to see what you’re building, and to hear both the frustrations but also, frankly, the North Star for where it’s going. And one day maybe Massachusetts will have you here. I’ll pray for now. But let’s pivot.

This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: What will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers and for civic thriving? Learner Studio is sponsoring this season on AI in education because, in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.org.

We have this section where we always talk about things we’re reading, watching, or listening to outside of work. People track us on this stuff. Diane and I occasionally fail. I’m going to fail today. So you can go wherever you want.

John Danner: So, yeah. I’m rereading the Culture series by Iain Banks right now. My brother works for Tesla, and Tesla just, as you probably heard, made this transition where they knocked off the Model S and Model X and are building robots. So he’s building robots right now. That makes it much more personal to me that the future is coming soon. And so, you know, I’ve always been a science fiction reader, but I think one of the cheat codes in Silicon Valley is that the amount of science fiction consumed equals your ability to be comfortable with what’s coming. So yeah, the Culture series.

Michael Horn: Good rec, good rec.

Diane, what’s on your list? You said you’re cheating.

Diane Tavenner: So, I’m cheating, I’m failing today. Sorry. Ted Dintersmith has his latest book out and sent it along. I couldn’t resist. The title is very provocative. It’s called Aftermath: The Life-Changing Math That Schools Won’t Teach You. And, you know, this is really, you know, for those who don’t remember, Ted, like, goes hard on the things we’re doing wrong and really tries to bring public awareness to them. And, I think lots of us have been concerned about how math is taught and not taught and whatnot for a long time.

So, that’s what this one’s about.

Michael Horn: I have an email from him in my inbox asking me to send him my address, so I will do that after this conversation so he can send it to me as well. But I’m also cheating. I’ve been really interested in not just how schools start doing new things, but how they stop doing old things. They are just really bad at it. And it’s not just schools, by the way; all organizations are really bad at de-implementing or pruning old things that don’t make sense anymore, whether they’re bad habits or, frankly, habits that just aren’t fit for the current age. So I’ve started trying to read some of the academic literature and just learn about that. And there’s a book, Making Room for Impact: A De-implementation Guide for Educators, by Arran Hamilton, John Hattie, and Dylan Wiliam. So I’m just cresting the end of that book right now, and then looking at all the healthcare studies that they cite.

And I haven’t decided if I’m going to read those, but that’s where I am right now.

Diane Tavenner: So is it a recommend, Michael, or no?

Michael Horn: I mean, it’s like a deep workbook, right, on the topic, is what I would say. So if you’re a school and you’re trying to work through this, definitely dive into it. I was more interested in who’s thought about, like, how do you de-implement? How do you prune, right? Because there’s just not a lot of conversation except for educators griping about it. And so I wanted to learn more, and it was a good starting point. So huge thanks, John, again for joining us. We appreciate it. Really check out his Substack as well if you want to just sort of follow along on the journey, I guess is what I would say. And we’ll watch as Flourish opens two more in Arizona in August. Keep up the good work.

We appreciate you. And for all of our listeners, keep the emails, notes coming. We love it. We learn a lot from it as well, and it inspires us on our future topics. And so, as always, thanks for joining us on Class Disrupted. We’ll see you next time.

This episode is sponsored by LearnerStudio.

How Alpha School Uses AI to Rethink the Education Experience (Class Disrupted, Fri, 06 Mar 2026)

The private, AI-powered Alpha School has quickly generated attention in the education world and beyond. The school’s been featured in dozens of articles and dissected across countless podcasts for what leaders call their “two-hour learning” model.

On this episode of Class Disrupted, MacKenzie Price, co-founder of the Alpha School, joins Michael Horn and Diane Tavenner not to explain Alpha School’s model, but instead to dive deep into how the school is leveraging artificial intelligence to radically rethink the school experience. Price focuses on how AI itself is being leveraged at Alpha, from the core academic blocks to afternoons spent on real-world projects and life skills development. What’s possible now in school design, thanks to AI, that wasn’t possible a decade earlier?

Listen to the episode below. A full transcript follows.

Chris Hein: So when the school shut down and went to remote learning, we were really fascinated by how quickly our kids adjusted to e-learning and how hard of a time the teachers seemed to have with just the basic tools and systems, and then how to translate their curriculum to a digital format. But the thing that really jumped out at me was my wife and I were having conversations with our kids every day saying, hey, what are you doing?

Why are you guys playing video games? Or why do you, like, want to go outside and play? It’s midway through the day and they’re like, we’ve already done our work. And we were like, that can’t be right. And so we double-checked their assignments and their tests and where they’re at. And it was like, no, they got all their work done in a couple hours. And then it really made Teresa and me question, why does it take them eight hours a day at school if the school is teaching them the same content and administering the same number of tests and they’re able to get through it in a few hours?

Michael Horn: That was June 2020, and Diane and I were broadcasting during the height of the pandemic, and we were hoping that parents would realize that schools could be rethought dramatically, including by helping people realize that what we tend to think of as, quote, the academics could be done in much, much less time than the six-plus hours that kids spend in traditional schools. Five years later, and thanks to a startup school network, Alpha School, the two-hour message finally seems to be spreading like wildfire. So with that as a prelude, Diane, first, it is great to see you as always.

Diane Tavenner: It’s good to see you too, Michael. I’m a little disoriented by us changing up our normal intro. But in a good way, change is always good. That take from season one is honestly priceless. It’s taken us a bit longer than we had hoped, but we do seem to be getting some momentum towards some of the big opportunities that we saw in education back then and still are hopeful for now.

Michael Horn: Yeah, no, I think that’s right. And I’m glad you’re accommodating my whims on changing the format up on you today. But I am particularly excited because we have on our show today MacKenzie Price. She’s of course one of the co-founders of Alpha School, and MacKenzie’s been on my Future of Education podcast and Substack before and we actually both have Substacks named the Future of Education. We independently named them, so we’re vibing already. But MacKenzie, it’s great to see you again, welcome.

Scaling Education with Technology

MacKenzie Price: Well, thanks for having me. And, you know, it’s so interesting that you tell that story about the way, you know, education was done during COVID. And we were pretty lucky because we’d started Alpha School back in 2014. So when the pandemic hit, you know, it happened to be during spring break. So the kids who hadn’t brought their laptops home came and picked them up at school. And we really had a very smooth rest of the school year because the kids already were doing their learning on the computers. And then we just said, you know, afternoons, we’ll just call it, you know, do whatever you want at home. But what’s interesting is, in 2022, when we really launched our learning platform with the advent of generative AI, we realized, okay, we can actually scale this. We can go beyond just, you know, a local school that’s doing a reasonable job of educating kids, and we can scale it bigger.

And we were originally talking about the idea of 2x learning. You know, you can learn twice as much. And even our own families were like, we don’t care. Like, why does my kid need to learn twice as much? It’s not a big deal. And we’d have, like, parent conferences where we’d be saying, hey, if your son, you know, hits his goals, he can be learning twice as much. And they didn’t care. And then we had this unlock idea of let’s call it two-hour learning and say, hey, if your son hits his goals, he can be out of here in two hours and freed up to go do the rest of the things, you know, that he wants to do during the day. And suddenly the parents are like, Johnny, come on, get with it.

Let’s hit our goals. And it was that mind shift of, you know, let’s get your academics done in two hours. And as a side note, you’ll learn twice as much, but let’s do that for two hours. And then one of the code names we actually had for our learning platform was “Time Back.” And we went through a whole process in the last year trying to make sure, what’s our new name going to be? What are we going to call this? And ultimately we landed back on exactly what it is that we’re giving kids, which is time back to go do all these other exciting, interesting things during the rest of the day. Because it doesn’t take all day to educate kids. You can not just do academics, but crush academics in a much shorter period of time when you’ve got this personalized mastery-based tutoring.

Transforming Education Models

Michael Horn: Well, and I think you’re speaking to, like, there are many reasons why Alpha has done what many education startups struggle with, which is jumping into the mainstream narrative. And that sense of giving kids back their most precious resource, time, is clearly part of it. AI is another part of it. And that’s where we want to dig in with you today, just given the focus of the podcast that we’ve had here. But let me perhaps frame it this way. We now have two school founders on this show, you and Diane, who have each created models that at one level I think look awfully similar in certain respects. You can mix in, say, Rocketship Education or something like that, which was founded in 2006 and is an elementary school model.

Michael Horn: We can take that and Summit Public Schools that Diane founded and Rocketship and say, hey, a lot of the structures that Alpha School has at one level, like a relatively limited block of time on learning academics and content in ways that are personalized for the learners, large blocks of time for projects, a big focus on skill development and habits of success or life skills like growth mindset, agency, and so forth, those are things that were present in models like that. But then we come to at least one big difference, which is, yes, Alpha was originally designed, as you said, right before the mainstream use of AI, just like Summit and Rocketship were. But Alpha is now aggressively developing AI-powered dashboards, AI-powered learning applications, AI-powered knowledge, interest, and working-memory graphs for students. And so, given our focus on the podcast in this particular season around AI, I’d just love to dive into the AI parts of the model with you. Even as we’ll say up front, like, AI is clearly inextricably linked to the other elements of the overall Alpha model; pulling them apart is not fair to you all. But just given that we’ve heard so many podcasts with you about Alpha, and we suspect most of our particular listeners have as well, I think digging into that AI question in particular makes sense. And this is maybe the framing we can bring to it: what does AI allow us to do today that was not possible in the best of the personalized models from a decade or two earlier?

MacKenzie Price: Yeah, I think that’s a great way to frame it, because artificial intelligence in the learning science world now is, I believe, like the microscope to biology. It is the tool that is finally enabling us to integrate all of these learning science principles that have been known for many, many years and can result in kids learning 2, 5, 10 times faster. It just was never possible to incorporate them, obviously, in a teacher-in-front-of-the-classroom model, but even more importantly, even in an individualized adaptive app type setting. And so to give context to that, you know, when we first started our school back in 2014, we knew that we could use apps. So we were using things like Dreambox and Khan Academy and Freckle and Grammarly and Egump, a lot of the apps that were kind of out there. The difference was it was still hard to manage the way that kids worked through the apps. And so one of the things we found is that there’s a lot of what we call anti-patterns that kids will do when they’re using apps. It could be things like topic shopping.

You know, they jump in and say, hey, I’m going to go to, you know, I’m a fourth grader, but I’m going to try some fifth grade material just because it’s kind of interesting. Oops, it got hard. I’m going to back out of that. I’m going to jump into some third grade material or I’m going to kind of mess around on this or even more just not engaging with the apps. You know, you could have everything from a kid not even sitting in front of his computer or picking his nose or, you know, just rushing through the explanation and not reading it. And that’s where a lot of the big difference is. One thing to kind of just be clear about, we do not use a chatbot in our education platform. Chatbots in education are cheat bots.

And it was interesting. I actually had a big event last week in Austin. The National Governors Association came and toured and were learning all about our schools. And I made that comment, you know, we do not use chatbots. They’re cheat bots. 90% of kids are going to use them to cheat. And a couple hours later, there was another vendor who’s basically built a chatbot for education, so, well, you know, I put him in a little bit of an uncomfortable situation. But I think that’s really important to know.

And one of the things I really don’t want to see in our education system is we slap a GPT on every kid’s computer and suddenly say we’re an AI first classroom. Right? And I was actually talking to a Stanford professor a few weeks ago who said, you know, here’s the problem that we’re seeing. Educators are using, you know, chat features, ChatGPT to create lesson plans, you know, and do these things. Kids are using ChatGPTs to write their stuff. Professors or teachers are using ChatGPTs to grade it. And so basically the AI is just talking to each other. Right. And we’ve taken the human out of it and that is totally not what we’re doing.

So there’s kind of two features that I can go into around how we’re using AI in our model.

Diane Tavenner: Yeah, let’s take this piece by piece, MacKenzie. That context is super helpful. Let’s start in the morning block, where you’ve already gone a little bit with some of the apps and whatnot. You all roughly have about three hours where students are doing sort of two hours of heads-down learning, that quote academics. My language for that is content knowledge. So forgive me if I slip up and use different lingo. And as I understand it, and as you were just sharing, you’re using these apps or adaptive learning products, and you named several for us there. But there are some places where you are using apps that, as we understand it, you’ve built for yourself. And this tracks with my Summit experience.

Our first choice was always to buy quality products. Second choice was to partner with startups or companies that wanted to work with power users. And last choice was to build our own when it didn’t exist. So I’d love to unpack. Where is it that you’ve determined there wasn’t something good enough and that you have literally built your own application and are using it right now? And are those AI native applications?

AI-Powered Personalized Learning Systems

MacKenzie Price: So we’ve definitely had a number of years to test out a lot of different apps, see what worked well, what didn’t work, where there are gaps. And what I would say is we’ve curated over this period of time which apps are best for which grade levels in which subjects. Not all apps are created equal. But to kind of start at the very beginning of where we’re using AI, we are using AI to be able to assess what a student knows and what they don’t know. So any student who comes into our Alpha School to start takes an NWEA math assessment. We also do math assessments three times a year for all students, and that’s how we’re measuring growth. But what we do is we take the information that comes through that assessment as well as some other initial assessments that we’re able to do with students. And from there we have AI tools that will basically build out the personalized lesson plans that say, all right, here’s where a kid needs to go, here’s how we hole-fill, which of course is a very common issue. Even our students who come into us with, you know, A’s on their transcripts, you know, can be three years behind in academic content.

Right. Actually we found out students who came in to us this year from other schools, if they had a B on their transcript, they were between three years behind and seven years behind. Which actually shows, you know, grades mean nothing anymore in this day and age. So we take the assessment and we have an AI tool that basically builds that out. So what does that look like?

Diane Tavenner: And that’s a tool you all have built internally, is that Time Back?

MacKenzie Price: That’s a tool that we built out. We have built that tool out, and that is using standardized third-party assessments like MAP.

Diane Tavenner: Yeah, the results. And you’re ingesting the results on that.

MacKenzie Price: Exactly. So they build that. So the experience for a student: a student sits down in the morning during their core block of academics and they will log into a dashboard. We have a Time Back dashboard that a student logs into and says, okay, it’s time to do math. Now in some of our classrooms, kids get a choice of what subject they want to take on first. In others of our classrooms, you know, we have a set order. Okay, we’re doing math first, then we do reading, you know, then we do language.

Diane Tavenner: And is that based on age?

MacKenzie Price: Depends on the age. Yeah. And so it’s always interesting. You know, what we’re really working on creating is self-driven learners who understand their skill of learning to learn. So if you talk to some of our fourth and fifth graders, you’ll hear some of them say, hey, I usually will choose to take on my hardest subject first when I’m fresh and I’m ready. Right? In our kindergarten and first grade classrooms, you know, it’s more, okay, it’s math time, it’s reading time, you know, and it’s kind of prescribed there. But basically what will happen is a student will go into the dashboard, click on the subject that they are going to take on. So that’s math as an example.

And then the dashboard takes them to the app that has been determined is the one that is right for them and what they’re doing. Now when I say right for them, we also as a school have kind of used certain things. For example, Math Academy is a third-party app that we love. We think Math Academy is amazing. They’ve been fantastic partners to work with, and it works really great for basically third grade through high school. We were using another app for our younger students; earlier this fall, we were using Synthesis, which, you know, that’s a sexy app that, you know, parents kind of like, because kids are doing interesting things. We were seeing, though, like, I don’t know if we’re getting the results we want.

So we’ve made changes, you know, to that, but they’ll go to the level that they need. So you’ve got a fifth grader who maybe needs to go back and revisit concepts from third grade. You know, they have to hit this fast math, you know, concept, or they’re looking at these fractions or whatever it is. So it takes them to that lesson and they’re doing that. So that’s the first use of AI that we have. Now the second use is the vision model. So what’s happening is we’re using an AI tool that we have built that tracks the screen and is actually watching to understand how a student is moving through this material.

So, for example, when they are doing reading comprehension, are they rushing through the article? Are they just scrolling to the bottom of the screen and randomly guessing, or are they taking the time? And of course, you can tell this is a reading article that normally would take, you know, 69 seconds to read, and this kid just answered it within 10 seconds. Okay, now we’re realizing we have an anti-pattern, which is basically an improper way of engaging with the apps. So we’re looking at that in terms of the vision model to see how kids are learning. When they get a question wrong, are they watching the video? Are they, you know, taking time to read the explanation? And then our AI tutor creates coaching for that student.

So it’ll say, hey, buddy, we’re realizing that, you know, you’re not reading the explanation when you get a question wrong. If you take this time to go forward, here’s what it would do. And so we’re basically giving coaching. Now, the other thing is, in our schools, we also have our cameras turned on, and they are recording the students.

Monitoring and Progress Tracking

MacKenzie Price: If the computer has been, you know, quiet for a minute and a half, is it because the student’s not even in front of their computer, or is it because they’re goofing around with their buddy next to them? What is it that they’re doing? And so it’s able to do that. Now, our families have the ability to turn that feature off at home if their students are working at home, but in our schools, we do require that that be turned on. And so we’re able to kind of look at the coaching. Now, students will basically walk through each of their core subjects, generally in about 25-minute Pomodoro sessions, and then they’re done with their academics in that two hours. The other feature that we’re using with our AI tool is we can really well analyze and understand how a kid is progressing through the material. You know, what percentage completion are they on each of the different apps, you know, and grade-level subjects, things like that.

How many minutes do we anticipate? How many weeks will it take before they’re finished with, you know, fifth-grade math? If they put an hour of homework in a night, here’s how much shorter that will take. And one of the things that people love about that: not only do our students get to really see and understand, they have a sense of ownership over their academic journey, but of course, parents can log in, you know, every day if they want to, to be able to see what is my kid working on. Did he hit his goals? And then what we’re also tracking, in the way that goal setting works, is students are getting experience points, XP, to borrow, you know, a term from video gaming. And so the goal is that they get 120 XP per day, which is 120 minutes of focused work. One XP is equal to one minute of focused work.

And so that’s what we’re working on. And then when you ask about the apps that we’re using: we have built Alpha Math, Alpha Read and Alpha Write, which are some of the apps that we’ve incorporated into our model. And then we’ve got some other things that, you know, we’re continuing to roll out. One that’s actually available to the public for free is an app that we’ve built that helps encourage the love of reading, which of course is a difference between learning to read and learning to love to read. And that’s called teachtales.com, and basically it’s using AI to generate personalized reading material based on a student’s interests that it then delivers at the appropriate Lexile level for them.
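The mechanics Price describes, flagging a "rushing" anti-pattern when a student spends far less time on a passage than it should take, and tallying XP as minutes of focused work toward a 120-per-day goal, can be sketched roughly as follows. This is an illustrative sketch only: the function names, the 0.25 ratio, and the data structure are invented for the example, not taken from Alpha's actual system.

```python
# Illustrative sketch of two heuristics described in the episode:
# flagging a "rushing" anti-pattern and tallying XP as focused minutes.
# All names and the min_ratio threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class PassageAttempt:
    expected_seconds: float   # estimated reading time for the passage
    actual_seconds: float     # time the student actually spent
    focused: bool             # whether the monitoring judged the student on-task

def is_rushing(attempt: PassageAttempt, min_ratio: float = 0.25) -> bool:
    """Flag the attempt if the student spent far less time than expected."""
    return attempt.actual_seconds < attempt.expected_seconds * min_ratio

def daily_xp(attempts: list[PassageAttempt]) -> int:
    """1 XP per focused minute, per the episode; the daily goal is 120 XP."""
    focused_seconds = sum(a.actual_seconds for a in attempts if a.focused)
    return int(focused_seconds // 60)

attempts = [
    PassageAttempt(expected_seconds=69, actual_seconds=10, focused=True),   # rushed
    PassageAttempt(expected_seconds=300, actual_seconds=280, focused=True),
]
print(is_rushing(attempts[0]))  # True: 10s spent on a ~69s passage
print(daily_xp(attempts))       # 4 focused minutes so far
```

A real system would presumably tune the cutoff per passage and combine it with the vision-model signals Price mentions rather than rely on time alone.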

Diane Tavenner: Awesome. There was a lot in there. So let’s…

MacKenzie Price: There was a lot. I need to work on more short sound bites. Well, I hope that doesn’t get worse as I get older.

Diane Tavenner: We all have things we need to work on, right? Let’s stick with those three apps that you’ve developed, Alpha Math, Read and Write. Are you using those across all of your grade levels? And are they AI-native, are they adaptive? What’s going on with those apps?

MacKenzie Price: So Alpha Write is something that we’ve been really excited about. Just to give an idea of how the app works, we break this down with the idea of: can you write a grammatically correct sentence, you know, then building onto paragraphs, then building on to essays and working through. And I will tell you, I mean, we had a lot of students, again, A students from their previous schools, that come into Alpha. We had high school students who couldn’t write third-grade-level sentences. Like, it’s just crazy how poorly this is going.

Diane Tavenner: Yeah, that’s one of the questions I think that comes up is where writing is situated in the model. So it sounds like you’ve got writing in the morning block as sort of a standalone kind of just expository approach to writing.

MacKenzie Price: We do have writing in the morning block now. Our students are also doing a lot of writing in the afternoon. So, you know, for example, they’re writing, you know, talks that they’re going to give for TED talks, they’re writing essays, they’re writing book reflections that are part of our afternoon block, which is our check chart time. So it is a common fallacy that people have of, oh, these students aren’t actually doing a lot of writing. They absolutely get a lot of writing in. But we’re really breaking this down, and everything we’re kind of thinking about is: what actually works when it comes to educating students, and where have we been doing it wrong? And that’s where I think it’s so exciting to see all these learning science principles that can come up. And, you know, for example, here’s another thing that we do during the core block period.

Optimizing Learning

MacKenzie Price: We’re measuring what percentage accuracy students are at to understand: are they in the zone of proximal development? Right. If they’re getting more than 85% of the questions right, you know, then that’s a sign that they’re in too-easy material. If, you know, they’re under 70%, it’s a sign this is too hard. How do you make sure that they’re staying in the right spot? And so that’s the other part where the AI tool will kind of say, whoa, hold on here. We’re noticing that there’s something changing or that a student’s not being hit at that right level. The other thing that’s going to come into play is we’re also going to be able to really take a lot of things around cognitive load theory principles and understand, okay, if a student only needs 5 reps of a concept in order to master that concept, they shouldn’t have to sit around and do 10 reps. And if the student needs 15, they shouldn’t only get 10.

So that’s just some ideas of some of the things that are coming in the pipeline that generative AI is going to make really available.
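The accuracy band Price describes (above 85% suggests the material is too easy, below 70% too hard) amounts to a simple difficulty check. A minimal sketch, with the two thresholds taken from the episode and everything else (function name, return labels) invented for illustration:

```python
# Illustrative sketch of the accuracy-band ("zone of proximal development")
# check described above. The 85%/70% thresholds come from the episode;
# the function and labels are hypothetical.

def recommend_adjustment(correct: int, attempted: int,
                         too_easy: float = 0.85, too_hard: float = 0.70) -> str:
    """Return a coarse difficulty recommendation from recent accuracy."""
    if attempted == 0:
        return "no data"
    accuracy = correct / attempted
    if accuracy > too_easy:
        return "advance"      # material likely too easy
    if accuracy < too_hard:
        return "step back"    # material likely too hard
    return "stay"             # inside the productive band

print(recommend_adjustment(18, 20))  # advance (90% accuracy)
print(recommend_adjustment(13, 20))  # step back (65% accuracy)
print(recommend_adjustment(15, 20))  # stay (75% accuracy)
```

In practice a system like the one described would look at a rolling window of recent questions per topic rather than a single lifetime tally, and would pair this with the rep-count ideas from cognitive load theory that Price mentions.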

Diane Tavenner: So, two things I’m trying to understand and contrast from pre-AI to now that we have AI, because a lot of what you’re describing sounds very much like what Summit Learning was about. You know, we built thousands of playlists, and young people actually had a lot of choices. So we were working on self-direction. You know, they would do a pre-assessment, they would know what they know, they would prepare, you know, and study and learn. And then they would take a post-assessment; we would assess all the things you’re talking about. So I guess I’m wondering, in these apps, is that similar, or is AI actually playing a new and different role here? And then I do want to get to the sort of Time Back coach as well, because I realize it’s connected. But are we using AI in these apps? Are these sort of still adaptive learning apps? Are they…

MacKenzie Price: Yeah, the third-party apps that we’re using are not using, you know, an AI feature, and they’re not creating dynamic content. That content is pre-created; the K-8 Common Core curriculum is what’s being fed into these apps. Where we are getting to is we are going to be moving in ’26 to dynamically created content. Obviously there’s been a problem; there are still hallucination issues. In fact, we have a group of high school students, kind of our top honors students, who are testing out dynamic content, and they’re able to say, hey, guess what? The AI is acting up here. Like, this is totally a wrong question on that.

But right now what we’re doing is we’re going through and we’re analyzing every lesson before it’s out there. So this isn’t just an LLM creating a fifth-grade curriculum; we’re still using that. Where the AI tool is really being used is around that vision model. So that’s the biggest difference, and that’s part of the reason, you know, if you talk to families who went to Alpha, you know, six years ago, you’ll hear a much more varied experience. Right. We had a lot of families saying my kid wasn’t learning.

They were goofing around. There wasn’t this connection. Now, there were a lot of reasons for that. We didn’t have the motivation model locked in. We didn’t have the high standards, just expectation. But the other big part was it’s really easy to goof around when you’re learning on these apps in general. And so that’s the biggest thing right now: our AI tutor is ensuring that kids are moving efficiently at the right level, understanding what the pace is for that, and creating basically new lessons that will fill academic holes, you know, and go at their pace, is what I would say. But yeah, if you’re looking at, for example, a Math Academy type of thing, you know, that is static content that kids move through and kind of work on.

We used to use IXL, actually. IXL kicked us off of their platform. They don’t like us for some reason. They literally won’t even tell us, they won’t talk to us. They just say, you’re off. But we had used IXL a lot. And actually one of the things I always say for families that are wanting to recreate this at home, I actually think IXL does a really good job across a lot of dimensions. They were a pretty good app.

They don’t like Alpha for whatever reason, but, you know, that’s where we’ve kind of been able to figure out what this is. But I think the other question is, when you talk about things like reading and writing, it’s really helping to break down our apps that we built. You know, they’re breaking things down into small components. Let’s make sure a student is excellent at this, and then build from there. I think in a traditional classroom, having students write a five-paragraph essay is not necessarily helpful. Instead, are they really understanding the structure and mechanics of a sentence? Are they understanding what a paragraph should look like? And we really use the idea of building blocks in all of the work that we do.

Diane Tavenner: So does that mean you’ve got, underlying at least the apps you’re building, sort of a knowledge graph that you’re working with? Okay, that’s again fairly consistent. Let’s dig into that AI coach or tutor, like you said, because it sounds like this is not a traditional dashboard where young people are looking at their own data and information. Maybe they are. But what it sounds like you’ve really got is this AI coach or tutor coming in to keep them motivated. I mean, the apps you’re talking about, lots of schools have them, as you know; lots of schools just don’t get the number of minutes, they don’t get the progress. It sounds like that’s the key.

So that is an AI tutor, but it’s not a bot like you were referencing.

MacKenzie Price: Well, it is, but you’re not correct about that. The AI tutor is not providing the motivation levers. There’s no motivation that’s happening through the apps. The motivation is all through our guides, our human teachers. They are focused on motivation. And just to be really clear, the reason we’re having the success that we’re having and the academic results we’re having is not because of our ed tech. Our ed tech is fine, it’s whatever.

But there is no magical edtech product that just immediately motivates and makes a student, you know, lock in and be able to learn well. We haven’t built it. We haven’t seen it yet. The key for us is that we have freed up the time of our human adults to be able to focus on motivation. And so that could be everything from the idea that students earn Alpha Bucks for hitting their XP goals to individual rewards. I was just talking to one of our kindergarten guides the other day, and she said, you know, we have kids where, when they unlock a goal, they have a secret signal; they’ll, you know, scratch their nose. And that signals, oh, you hit a goal, let’s do a silent dance party. And it’ll be 15 seconds, you know, the guide is doing the silent dance party, and then they move on to the next thing. It can be individual motivation models. We had a student who, as a result of hitting her academic goals over a period of six weeks, earned time in a professional recording studio to record an original song that she had written and was singing. So that’s the whole key. And by the way, 90% of what creates a great learner is a motivated student.

10% is having the right level and pace, which is what our edtech tool does. What the AI tutor does, though, is it actually gives kids the ability to go on their dashboard each day and see, okay, I hit my rings, I filled my ring. Think of it almost like an Apple Watch, you know, with exercise rings. That’s what it is for each student: did you fill your ring? Which means, did you get your XPs in that subject? And then they can go into their learning dashboard and they can see at any time, here’s how much I hit. We even have a waste meter in the corner that says, you know, you’ve wasted 20% of your time by not engaging in the right way.
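The ring-and-waste-meter mechanic described here can be sketched in a few lines. This is an illustrative reconstruction, not Alpha’s actual dashboard code; the function name and field names are assumptions.

```python
# Illustrative sketch of the dashboard mechanic described above:
# per-subject "ring fill" against a daily XP goal, plus a rough
# waste share from engaged vs. total minutes. All names are assumed.

def daily_rings(xp_earned, xp_goal, minutes_engaged, minutes_total):
    """Return (ring fill by subject, share of time 'wasted')."""
    ring_fill = {
        subject: min(xp_earned.get(subject, 0) / goal, 1.0)
        for subject, goal in xp_goal.items()
    }
    waste_share = 1 - minutes_engaged / minutes_total if minutes_total else 0.0
    return ring_fill, waste_share

rings, waste = daily_rings(
    xp_earned={"math": 120, "reading": 45},
    xp_goal={"math": 100, "reading": 90},
    minutes_engaged=96,
    minutes_total=120,
)
# math ring full (capped at 1.0), reading ring half full, ~20% waste
```

The cap at 1.0 mirrors a fitness ring: exceeding the goal still shows a full ring rather than overflowing.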

Diane Tavenner: So the student doesn’t actually, like, engage with the AI tutor. It literally is just powering this dashboard then.

MacKenzie Price: Well, it’s powering the dashboard, and then it will pop up and say, you know, it’ll write something like, hey, watch the video explanation. You know, sometimes it’s, you know, going.

Diane Tavenner: It was like a nudge or something.

MacKenzie Price: One of the things that, yeah, we’ll see is that, you know, we’ll often say to students, often the fastest way forward is to slow down, slow down and read the explanation. So it does that. But here’s what it’s not doing. There’s not some little avatar, Dashy, that pops up and is like, hey, Johnny, you’re doing such a great job, two more questions. It’s not that kind of thing. The AI really is kind of undercover.

And it’s again, building these lesson plans and then analyzing and understanding how a kid is moving through that.

Diane Tavenner: Building the lesson plans that are in the apps or in the …

MacKenzie Price: Yeah, taking them to the right spot. So it’s able to say, okay, we’re going to take you.

Diane Tavenner: Oh, by lesson plan, you’re saying directing them to specific.

MacKenzie Price: Directing them directly to this, Math Academy. And we put up basically guardrails that don’t allow a kid to pop out of Math Academy and say, hey, instead of doing this concept, I’m going to go play over here. I’m going to go do this. And I think that’s a problem in traditional classrooms when people are using apps. They’re given their iPad or their Chromebook, they’re put on Khan Academy, and then they’ve got the ability to kind of bounce around. There’s one other topic that I think is also important, and this is actually a lesson we learned very early on: the idea of requiring students to do some work each day in each subject. Right.

And there’s a lot of alternative education systems that’ll say, hey, if a kid doesn’t really want to focus on math for a couple months, that’s okay. They want to pursue reading. We actually believe otherwise. And this was, I’ll never forget, the very first year we had a first grade student who absolutely loved math. Loved math. He was at 8th grade level math. And the problem was he needed his guide to read the word problems to him because he couldn’t read, and he hadn’t read in, like, months. And that was one of the early unlocks where we realized, okay, we have to require, you know, time in each subject each day that students are accomplishing, which, again, some alternative schools don’t do.

Diane Tavenner: Yeah. So it sounds like then, the motivation is highly related to this relationship that young people have, which we know is very powerful. And then just following the directives essentially of the guide and then the technology to do what you’re telling them to do and stay on track.

Confidence Unlocks Student Motivation

MacKenzie Price: Exactly. And then I think the next part of the motivation, kind of the deeper level of motivation, is, and you know, people often go, oh, is extrinsic motivation bad? And you guys know, there’s all the research that shows it’s not necessarily even that simple, you know, intrinsic versus extrinsic. But what we are seeing is that as students become more and more capable, you know, and build up their knowledge, they become more confident and they do get more motivated. They suddenly realize, like, wow, okay, I can be 99th percentile in, you know, math, in language, in science. I can do this, it’s not as hard. And so we find that kids’ identity really changes as they start to see that, wow, I’m capable of learning when I’m given the right level and the right pacing, and I get motivated to do that. And that is what I think is the really cool unlock that we enjoy seeing when students finally realize this. Like, wow, I can do this.

Diane Tavenner: Yeah, definitely. You said that one of the benefits of this approach is you’re freeing up the guide time to really do the more important things. And as I understand it, one of those activities they do is one-to-one meetings with the young people in this morning block. This was, and continues to be, I think, the most highly rated element of the Summit model: the mentoring model, with the one-to-one check-ins as a part of that. And over the years we started leveraging technology to enhance those check-ins. I’m curious if you’re using AI in any way to support the one-to-one check-ins and what that looks like.

MacKenzie Price: Yes, we are. So we actually mic up the guides during those one-to-one check-ins, and then we take those transcripts and we’re running them through for everything from what percentage of the time were you talking compared to the student. Right. If you’re talking too much, that’s a problem. How many questions were you asking, you know, versus stating? What are some of the things that are happening there? We also actually use that technology for some of our students as well. So an example of that: one of our students in Arizona struggles with a growth mindset, you know, and when he’s struggling in his academic work, he’s quick to say I’m dumb or I can’t do this or whatever. And so we put an AI mic on him, and then he and his guide go through daily and analyze: how are you speaking to yourself? Were you being kind to yourself? And what we found, amazingly, is that just him knowing he has this lanyard around his neck that’s listening helps him remember, hey, speak kindly to myself. I can incorporate these growth mindset strategies.
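The check-in analysis described here, talk-time share and question counts from a mic’d transcript, can be sketched roughly like this. It is an illustrative reconstruction under stated assumptions (a diarized transcript represented as speaker-utterance pairs), not Alpha’s actual pipeline.

```python
# Illustrative sketch (not the actual tooling): given a diarized
# transcript of a one-to-one check-in, compute the guide's share of
# words spoken and a crude count of the guide's questions.

def checkin_metrics(turns):
    """turns: list of (speaker, utterance) pairs, speaker in {"guide", "student"}."""
    guide_words = student_words = guide_questions = 0
    for speaker, text in turns:
        words = len(text.split())
        if speaker == "guide":
            guide_words += words
            # Count question marks as a crude proxy for questions asked.
            guide_questions += text.count("?")
        else:
            student_words += words
    total = guide_words + student_words
    return {
        "guide_talk_share": guide_words / total if total else 0.0,
        "guide_questions": guide_questions,
    }

transcript = [
    ("guide", "How did your math goal go this week?"),
    ("student", "I filled my ring three days out of five."),
    ("guide", "What got in the way on the other two days?"),
]
m = checkin_metrics(transcript)
# here the guide spoke a majority of the words and asked 2 questions
```

A real system would sit on top of speech-to-text with speaker diarization; this sketch only shows the metrics once such a transcript exists.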

So we’re able to do that. We have guides that wear these lanyards throughout the entire day so that they can understand and then get feedback on their coaching. And so, you know, that’s a great part of it. We’re using AI. Our organization is very much about being AI-first in everything we do. How can we always take everything to the next level and build that out? And then of course the other aspect of AI, you know, that comes across in our afternoon life skills workshops, is kids are learning how to use these tools that are going to help them be successful. So, you know, kids are starting to build out and develop these brain lifts and then build out an LLM. In fact, we actually just had a pretty exciting thing happen last week.

One of our students at our high school had built an LLM around safe teen dating advice, and she ran a research study with a University of Texas professor around basically how good the LLM she built was compared to ChatGPT and suburban moms, and they just submitted that research to Nature. So it’ll be really exciting: in the next couple of months we’ll hear if that gets accepted. And that should be a pretty cool thing. So that’s the other part of this: you’ve got to make sure kids are being equipped to learn how to take advantage of all these new tools that are constantly coming out.

Diane Tavenner: For sure, for sure. Let’s move to that afternoon block and unpack that a little, because I think I hear far less about the afternoon time, which is familiar to me, because also in the Summit model, you know, the self-directed learning time seemed to get all of the publicity and the play and whatnot. It was only two hours. It was only 30% of the young person’s grade, but it got like 90% of the attention. So let’s break the afternoon into the K-8 and the high school, because I think those two are different in your model. Talk about the K-8 afternoon, where, as I understand it, young people are learning life skills. Is this a project-based approach? Who’s planning this? Is it a curriculum? I think, as you just said, students are encouraged to use AI from their side.

But what I’m really interested in is how are guides and educators using technology and specifically AI for this afternoon block, the dashboard here. What’s going on there?

MacKenzie Price: Yeah, this afternoon block is really when our guides are shining in terms of being able to plan and connect and mentor our students. And that’s done a few different ways. So in K through 8, our students are participating in these life skills workshops that are developing leadership and teamwork, financial literacy and entrepreneurship, relationship building and socialization, public speaking and storytelling, and grit and hard work. And so every workshop that is created has to be able to pass two tests. One is, what is the life skill that is actually being taught, and how are we going to assess at the end of the six-week period whether that has happened? So, for example, you know, we’re in the week before the holiday break. We’ve got test-to-pass events happening at all of our schools around the country, where parents and people from the public can come in and see something the kids have been working on, and understand: did they learn this life skill? An example that we often talk about, because I think it really highlights the idea of how do you learn grit, how do you learn to stick with something when it’s hard: we have students who participate in grit triathlons. And that could be things like having to solve a Rubik’s Cube, juggling three items for 30 seconds and running a mile without stopping.

And when you can see that a kid, you know, a third grade student, has been able to understand, okay, there’s an algorithm and I keep practicing my Rubik’s Cube, and I start by juggling scarves and eventually I’m juggling balls, and I incorporate atomic habits to, you know, walk and run. At the end of six weeks, when these students are able to accomplish that goal, it shows grit. We also do a lot of physical workshops that build out things like grit, like facing fears. For example, we’ve got a rock climbing workshop, and for our kindergarteners, they’re climbing a 40-foot rock wall. And when you watch the difference over that six-week period, you’ve got a five year old who’s like, I don’t even think I can hold on to one of these, suddenly going 40 feet up. The only ones more amazed by that are their parents, right? Their parents are like, this is amazing. So a lot of physical workshops doing things, and then the guides will use AI tools as part of building out those workshops, being able to measure. One workshop that we do every year is very popular.

It’s a communication and, basically, uplifting-others workshop. And the test to pass for that workshop is that kids go into an escape room, you know, one of these rooms where they have to solve a bunch of different puzzles and logic things and all that. And we mic the students up, and we use AI to analyze what percentage of their language is considered uplifting and positive. You know, where are they doing that? We’ll do that in sports activities. Kids will get feedback on their public speaking. They’ll be using AI tools to build graphic novels, to build films, you know, all kinds of things that they’re working on that way. And so that’s a combination of group workshops. And then they also get individual time to pursue what we call kind of check-chart independent projects.
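A crude version of the "percentage of language considered uplifting" metric could look like the sketch below. The lexicon and the matching rule are invented for illustration; a production system would presumably use a trained classifier rather than a word list.

```python
# Illustrative sketch (assumed lexicon, not the actual AI analysis):
# estimate what share of a student's utterances contain "uplifting"
# language during a team activity like the escape room.

UPLIFTING = {"great", "nice", "awesome", "good job", "you can", "we got this"}

def uplifting_share(utterances):
    """utterances: list of strings spoken by one student."""
    if not utterances:
        return 0.0
    hits = sum(
        1 for u in utterances
        if any(phrase in u.lower() for phrase in UPLIFTING)
    )
    return hits / len(utterances)

sample = [
    "Great idea, try the red key!",
    "This lock is stuck.",
    "We got this, two puzzles left.",
]
share = uplifting_share(sample)
# two of the three utterances match the lexicon
```

Simple substring matching like this over-counts ("nice" inside "niceties") and misses paraphrases, which is exactly why a word-list baseline is only a starting point.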

Diane Tavenner: Ah, so it sounds like your guides are using AI, like an LLM, to help them plan those workshops. And then are you rubric grading or just checklist grading?

MacKenzie Price: We’re rubric grading as well. And so for each life skills workshop, we’re grading what the quality of the workshop is. And that’s everything from, you know, the kids’ assessment of did they love the workshop. You know, we’re constantly surveying parents and kids to make sure that what we’re delivering is right. And how are these guides doing on the thing that we’re calling it?

Diane Tavenner: And that feedback from the rubric, is that derived from the AI or is the guide doing that? And then is that also incorporated in their dashboard?

Iterating to Build Measurable Skills

MacKenzie Price: All a combination of both things. And I think in a lot of ways what we are constantly doing is iterating: how do we build upon a workshop, how do we make sure each session kind of comes together. In fact, you know, today, again, it’s the last week before the holiday break, we’ve got staff days every evening, you know, after school, as we kind of plan and go through what worked, what are we doing to kind of increase, you know, love of school, the learning 2x in 2 hours, and then development of life skills. So we’re working through a lot of these types of activities of, you know, how can we make these Alpha life skills, core soft skills, measurable? Right. How can we understand how to measure these skills versus just kind of saying, oh, you know, sure, they’re learning leadership qualities, you know, from something. What are the things that we can do to kind of build that out?

Diane Tavenner: Interesting. One of the conversations, big conversations, is how AI can and should change the role of the educator. And you all have purposely and publicly redefined the role of the teacher to be a guide. And I’ve been tracking, through this conversation, you know, what I think some of the shifts are in how you think about teacher versus guide and educator, and how AI is enabling that. So let me run this back by you and see if I got it right. So the guide’s not planning any sort of lectures or traditional lessons, and they’re not doing any assessment. They’re leaving that to the technology.

They are doing one-to-one check-ins, and they’re getting feedback from sort of AI inputs, from their recordings and things like that, about how they can improve. So that takes time, we know, in a teacher’s day, in an educator’s day, if you’re transcribing all of those things. And then they are planning the afternoon workshops. It does sound like they’re doing some of the assessment there. And they’re certainly, you know, working closely with the students on the motivation piece and engaging directly with them. And it does sound like that’s supplemented by AI. Did I get that right? Sort of the role of the guide, if you will.

MacKenzie Price: Yeah, you did get that right. Now, there’s one other aspect of the guide’s job, in the morning academic time, in the core time. You know, I think people have this misconception that, oh, you’ve got a group of kids that are just staring at computers with no adults in sight. Our guides are there and they’re engaged, but they’re not there to teach academics. So if a kid says, hey, I’m struggling with this, you’re not going to see one of our guides saying, okay, let me show you how to work through this problem, you’ve got to carry the one, let’s do a tutoring session on this. Instead.

They’re going to be basically asking students questions to help them understand if they have used their resources. So, hey, were you able to watch the video? Did you go into the resource library to find another answer? Did you check these kinds of things out? And so that’s where they’re really providing coaching around how to go about learning to learn. Here’s one, I don’t know if you’d call it an exception, but one thing I will say: for our younger students, our kindergarten, first and second, we have not found, to this point, a replacement for that one-to-one reading time. So we have reading specialists at all of our schools for our younger learners who are working with students on reading. And our students get one-to-one pull-out time, you know, to be practicing that reading. It’s something critical. We are seeing, you know, certainly some great progress and success around learning to read.

But, you know, you have to have that time reading out loud with a human. And so that’s the one thing I would say: in our younger levels, we do have certified reading specialists who are at those schools. And it’s critical.

Diane Tavenner: We didn’t talk about the high school afternoon time. And as I think you alluded to, and as I understand it, this is where young people are picking one project to work on for four years. And again, I don’t know if that’s a headline or if that’s accurate. I must say this is an element of the model that gives me a little bit of pause, and so I’d really love to get underneath a lot of the buzz. So what’s actually happening for high school students for those four hours, for four years?

MacKenzie Price: You know, so we have two tracks for our high school. We have what we call an honors track. And the idea of that honors track is basically kids who kind of, you know, want to be sort of Ivy League bound. They’ve got ambitions of going into a top 20 university. And so in that program we’re basically saying, okay, we’ll deliver 1550 SAT scores, you know, fives on at least a few hard AP courses, and what we call an Olympic-level Alpha X project. This is a project that is as impressive as being an Olympian. You know, what is it? So an example of that: one of our students just got accepted to Stanford this past week. She’s the student who’s also submitting her research to Nature.

If she’s accepted, she’ll be the youngest female ever and the only high school student in history, you know, to be able to do that. They work on something big. Now, during that time when they’re working on these Alpha X projects, there’s no question that you’ll have kids who might decide to change their project 10 times during their four-year experience. What they’re really developing is the skill of learning how to go deep into something and become an expert. And so we’ll do things like two-week-long sprints where it’s like, go learn everything you can learn about this subject. And at the end of that two weeks, you know, just as often as not, you’ll have kids come out and go, actually, it turns out I’m not interested in that, I want to go into something else. And the other thing is, these projects that kids work on aren’t necessarily what they say, oh, I’m going to do this for the rest of my life.

Right. I’m going to go build this out in college or something. But it’s a project that they’re kind of, you know, able to develop and go deep and become an expert on. Now, we also have a non-honors track at our school, and that non-honors track is for kids who say, you know, I really love the idea of getting time back to just go do things I’m interested in. So for example, you know, we’ve got a student who wants to get his pilot’s license and he loves the idea of flying planes. Now, does having your pilot’s license at age 15 get you into Stanford? You know, maybe not, but it gives you time to go develop these things. So a lot of our athletes who want to have time to pursue their sports or whatever. Now, that non-honors program basically is 1350 SAT, which is, you know, top 10%, fours and fives on APs, and time to go and develop the interests that they have. Honors students are spending about three hours a day on their core learning.

The non-honors track is about two hours. Kids are still taking AP courses, they’re still doing all those kinds of things.

Diane Tavenner: Sorry, you lost me for a second. Where’s the AP course? Is that in the afternoon or in the …

MacKenzie Price: No, that’s in the morning. The core academic time is students taking four years of English, four years of math, you know, foreign language, all that kind of stuff. So they’re doing that in the morning. Afternoons are for working on these Alpha X projects. And then we do a lot of workshops around life skills for all of our students. So that’s everything from rejection training to giving and receiving feedback, you know, leadership challenges. A lot of things that students are working on to kind of build out those skills. That’s what our high school program looks like.

Diane Tavenner: So in the high school afternoon, there is sort of still a framework curriculum. Maybe it’s not every day, all the days, but you do have some of these skills that you’re working on in some workshop with students.

Developing Projects with Real Impact

MacKenzie Price: Yeah, there’s absolutely a framework. And then the kids who are working on their Alpha X projects basically go through different levels, right? So, you know, as an example of kind of the highest level, where basically these kids are getting out and they’re launching real businesses or activities: one of our students, who’s a senior this year, is working on getting a musical launched on Broadway. So she actually spends, you know, five to seven days a month in New York City, you know, recording with producers, meeting with potential investors, doing those types of activities. So she’s kind of been released out into the wild, you know, in some ways, to go work on these projects. But the other thing they all have in common is every day our students are spending an hour working on their brain lift. So this idea of, whatever the interest they have, they’re staying current on research, what’s going on, and they’re using this brain lift to then build out whatever their LLM and GPT is based on this. They also work on things like creating a spiky point of view.

So an example of that, we have a student named Alex who is building a plushie doll that is basically a mental health coach. And his spiky point of view that he’s built is he believes AI can actually provide better counseling to a teenager than a human counselor. Now, that’s a very spiky point of view, right? Especially when you think of all of the dangers on this. But he’s built certain things in his system that he believes are making a successful AI mental health coach. And so the idea is building out these things and being able to learn how to become an expert on using AI to build this thing out. So we have another student who’s interested in creating. He’s a filmmaker and wants to create, you know, his ultimate goal is to create an Oscar winning, winning film.

And part of what he’s done is to create basically a spiky point of view around how filmmaking can be done. And he reached out to a bunch of different podcasts and got accepted and invited on three of them. Now, a lot of rejection training going on in there as well, where there’s a lot of podcasts who say no or don’t answer, you know, or whatever it is they do. But they’re learning all of these skills during this time, plus getting the traditional academics that, you know, students in a normal school are getting.

Diane Tavenner: Where would science labs fit into this model? Or, you know, projects that are in history where we know kids, you know, dates, facts, information is, is based, but you actually need to understand the big themes and trends. Where does that fit in your model?

MacKenzie Price: Well, if you take things like science labs: we don’t have science labs. Our students are taking AP Biology, AP Physics, AP Chemistry, but they are, you know, watching great YouTube videos that are exploring these topics instead. We haven’t found that there’s this critical piece of getting kids in a lab doing beaker experiments, you know, as part of what they’re doing. They can watch these things. Now, kids who are really excited about something that they’re working on, you know, in science, can go in and build something out.

So for example, we had a student who got really interested in cancer research and epigenetics, and she ended up going out and creating a documentary around cancer and epigenetics that’s been viewed over 5 million times. So we kind of think everything we do at these schools is taking an interest or a passion that a kid has and figuring out how to get them out into kind of real world experience with things and how they can build. We had a student who loves physics, really interested in science. He also went on to become a professional water skier, but he would take physics principles and then work on how he could improve his water skiing times and rope length, you know, incorporating physics principles. So there’s things they do there. Things like history, for example: you know, students are taking AP World and AP European and AP US History. So they’re doing all those things. They’re getting a lot of experience with writing, obviously, as they’re learning on apps; they’re coming out with, you know, fives on their APs and doing very well. And they’re having some connected time with each other where they’re basically going through some checkpoints at the same time, where they’re interacting. Last year, you know, basically in April, you heard a lot of singing, because kids had used AI tools to help them remember a bunch of their facts for AP World History, you know, basically in the same vein as Hamilton lyrics, and working through those things.

Diane Tavenner: Is that the College Board’s digital curriculum that they’re using for the AP courses? Yeah. And then that, like, joint collaborative time would be in the afternoon. Is that how it connects?

MacKenzie Price: Yeah.

Diane Tavenner: Got it. Awesome.

Michael Horn: This season of Class Disrupted is sponsored by Learner Studio, a nonprofit motivated by one question: what will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers and for civic thriving? Learner Studio is sponsoring this season on AI in education because in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about Learner Studio’s mission and the innovators who inspire them at www.learnerstudio.org.

Michael Horn: This has been super helpful, MacKenzie. Huge thanks. But before we let you go, we have this segment where we get away from the conversation around education generally, although not always. Just things we’ve been reading, watching, listening to outside of work, if you can. But if not, that’s cool too. So we’ll let you have the first say at it before Diane shares what’s been on her list.

MacKenzie Price: Well, I’m sure that I’m going to give you an answer that is not going to be impressive to any of your followers or listeners.

Michael Horn: I guarantee you most of my answers are unimpressive. So go ahead.

MacKenzie Price: My absolute favorite thing to do in the evening when I get time to relax is I love to take a bath and I have a huge television that is mounted in my bathroom in front of my bathtub that is non-negotiable. My husband and I just moved into an apartment a year ago and I was like where is the TV in front of the bathtub going to go? Like I will not move into an apartment that doesn’t have that option. And I got in the bath last night and I was so excited to watch the Taylor Swift Eras documentary. So I am halfway through the first episode. My girls and I, and actually my husband too, we totally bond over that. And then actually later in the evening my daughter’s home from college and we’re watching this show called All Her Fault. It’s like about a kidnapping and it’s the gal from Succession, you know, the redhead from Succession, she stars in it. And one of the guys from White Lotus season one.

So I do. We like those types of shows. We loved White Lotus. This All Her Fault. I just watched the Beast in Me. So I do, I sometimes can be known to binge some of these Netflix shows, but I do them in the format of about 35 minutes, which is how long my bathtub water stays hot for. And then I’m out of time.

Michael Horn: And then you’re out.

Diane Tavenner: There you go. Well, I’m totally cheating today. I’m gonna share a novel that I’m going to read over the holidays. My favorite living author, Ian McEwan, has a newish novel out called What We Can Know. And I’m literally counting down the days to the holidays and to being able to crack this one open and savor it. I’ll give you two sentences from the New York Times review that make me excited. Quote: it’s a piece of late-career showmanship from an old master (McEwan is 77). It gave me so much pleasure, I sometimes felt like laughing. I will report back.

Michael Horn: And you’ll have to report back, because I was going to say you just quoted the New York Times, which is an item for later. But yeah, all right, I’ll wrap with mine, which is, MacKenzie, to your point: we binge-watched The Four Seasons with Tina Fey and Steve Carell. It’s a Netflix show I hadn’t heard of. It’s like an eight-episode first season, and there will be a second season based on the cliffhanger at the end. And I would say it’s about three couples, sort of 50s age group is roughly where they are, going through trials and tribulations, and it is hysterical.

A lot of predictability, and yet still very funny as it went through. So we really enjoyed it, and we binge-watched it in two nights, I think.

MacKenzie Price: Oh, great. That might be our holiday activity too for some time.

Michael Horn: There you go adding to your …

MacKenzie Price: I love that. I love that.

Michael Horn: Awesome. Awesome. Well, MacKenzie, huge thanks and as always, huge thank you to you, all of you, for listening. Keep coming with your questions, comments and all the rest, and we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

AI Optimization’s Impact on Use of Time, Space and Resources in Schools (Feb. 18, 2026)

Imagine being able to build a master school schedule in 30 minutes.


On this episode of Class Disrupted, Paymon Rouhanifard, CEO of Timely, joins Diane Tavenner and Michael Horn to explore how AI-powered optimization is transforming a complex challenge in K-12 education: the master schedule. The conversation touches on the critical role that master schedules play in shaping student experiences, resource allocation and district priorities. Rouhanifard explains how Timely identified a pain point schools face with traditional scheduling methods and applied an AI-driven approach that saves hundreds of hours while enabling systemic change and better use of resources.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, this is Diane, and you’re about to listen to an interview that Michael and I had with my friend Paymon Rouhanifard, who is the CEO of Timely, which is a company that’s helping schools figure out how to do their master schedules in a way that’s aligned with their values and what they’re trying to do to support their young people. And I love this interview. I think it’s so fun for us to really talk with someone who deeply understands schools and how they work and the operations of them and what’s going on and who is really trying to add value using AI in a way that feels very concrete and specific. And I just think you’re really going to enjoy Paymon’s thoughtfulness and his deep understanding of education and this really specific application of how AI is being used in education. 

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you. And I'm truly excited for today's guest, someone we both know pretty well, who has been doing some very interesting work, some of the early innings of which I got to see up close because his company was incubated as part of Workshop Venture Partners, where I'm an advisor. And like Laurence Holt, he's been on my Substack before. So I'm excited for this conversation to dive a little bit deeper into what he's doing and how it interfaces with AI.

Diane Tavenner: I agree, Michael. I'm excited to have Paymon on our podcast. We met when Paymon was leading Camden and I was leading Summit. And it's interesting because, fortunately for me, it was in a learning space where we met, and I did a lot of learning from Paymon and with Paymon. Well, I'll speak for myself: I did a lot of learning, and I feel really grateful that he's here with us today. And so let me just tell those who don't know a little bit about who Paymon is. He's the co-founder and CEO of Timely, an education technology company that helps schools build better master schedules through AI optimization. Prior to that, he was the co-founder of Propel, which offers tuition-free health care job training, and he's currently the chair of its board of directors. And as I said, he was the superintendent of Camden City Schools in New Jersey, among other roles in public education, from teacher to administrator.

And so Paymon, welcome. We’re so happy to have you here.

Paymon Rouhanifard: It’s really great to be here. Thanks for having me.

Michael Horn: Well, so I’m excited. But let’s levelset with our audience and start at a high level and just help us understand exactly what Timely does and what problem it’s solving for school districts and school systems.

AI-Powered School Scheduling Support

Paymon Rouhanifard: Well, as Diane just mentioned, we help middle and high schools build their master schedule using AI optimization and dedicated support from a team of former educators who have built schedules before and also support with data integration. And I think to really understand our work, you have to understand the importance of the master schedule. And there’s sort of two parts to it. Part one is every school, including elementaries, although they have slightly less complicated schedules, but every school in the country has to build their master schedule every year, typically in the spring for the following fall semester. And it is an incredibly painful exercise at the school level where folks have just historically been using really clunky tools. And then the second part of it is the opportunity for systemic change and the connection to the central office to think about resource allocation more strategically, to think about priorities more strategically. And so there’s sort of those two components to it. But tactically, what we do is we help middle and high schools build their master schedule.

That is a painkiller at the school level. And again, can kind of enable key priorities at the central office.

Diane Tavenner: That’s awesome. Paymon, one of my, I’m going to disclose something weird here. I am like a fanatic about the master schedule. When I used to build the master schedule, I was like a lunatic around it. So I’m actually very nerdy and excited about what you do. And one of my concerns is that most people, when they talk about AI in education, the only image they have in their mind is literally a chatbot, you know, that’s mostly focused on the students or the teachers used in the classroom. And, you know, as Michael and I are shifting our conversation from sort of big picture AI to actual practitioners and the usage of AI in education, I really wanted to talk with you, and I’m glad we’re doing it first, because you’re working on the system of school, if you will, and your instance of AI is not a student directly interfacing with it, but has a massive impact on the student’s experience.

Because literally the master schedule is everything. I don't think people realize that. It is sort of the infrastructure that controls almost everything. And when you're in a district and you realize that, you realize all the power is in the master schedule. Right? And so, tell us: you said it's a pain point for schools, but paint that pain point a little bit more for us. Like, what problem were you setting out to solve for them? Yes, it's, like, laborious and kind of hard. But beyond that, how does solving this lead us in a direction that you believe in for schools?

Master Scheduling: The Complex Puzzle

Paymon Rouhanifard: You know, we often say that those who know about master scheduling really know, and Diane, I really appreciate that you've been in the guts of it in your prior lives. If you were to ever talk to an assistant principal at a middle or high school and you asked them about the master schedule, their eyes will widen and then they'll have a lot to say. Typically mostly horror stories about how hard it is and how they lock themselves in a room every spring and don't leave that room for weeks, and they're bruised and battered and they have a final master schedule. And the reason for that is the schedule is just a really complicated puzzle to put together: what courses you're going to offer; what courses students need to graduate, depending upon the graduation trajectory they're on; what credentials teachers have and what courses they'll be teaching; what rooms are available; what other constraints exist in terms of collective bargaining and consecutive periods taught. And course requests from students might be the most fundamental input to solving this equation. It is a lot of different variables. And folks are using tools such as Google Sheets, whiteboards, sticky notes.

We’ve seen giant Magna Tile boards with a lot of our district and charter partners, and we take pictures of them and save them for posterity. And so that’s, again, speaking at the school level, of what a painful exercise it is to put together a really complicated puzzle that is fundamentally a math problem to solve. It’s a mixed integer, linear math problem.

Diane Tavenner: Yeah.

Paymon Rouhanifard: And to your point for the systems level, that’s, I think, where it gets really interesting and I suspect a thread you may want to pull on.

Diane Tavenner: Definitely. Let's start a little bit with just understanding more. You just said it's a math problem, which is where we get into AI. I don't know that everyone realizes that AI is really mathematical in many ways, but help us understand where the AI is in Timely. And, you know, do people in the schools even realize that you're using AI?

AI Optimization Over Generative AI

Paymon Rouhanifard: Yeah, I would say because of the AI boom that we're in, a lot of folks understandably believe that we use generative AI, but we don't. We use AI optimization technology, which is under the broader banner of AI and machine learning. And the reason for that, implicit to the question you just asked, is that large language models, being predicated on words, are not really good at math. I think we've all seen stories of large language models struggling with basic math and hallucinating. And we're solving a really, really complicated math problem. And so we are training off of a local set of data, using AI optimization to do that heavy lifting, to ultimately solve that math problem school by school. And so we don't use a chatbot. Instead, we think about it as a series of inputs in terms of course data, student course requests, staff and room information, and then you layer on a number of constraints, where we're telling the AI optimization engine, "here are the things that we know have to be fixed. This teacher needs a prep in 8th period. This common planning period needs to be at the start of the day," whatever it may be. There's a million different examples of that. And once you enter those constraints, you push a button and then it solves that math problem for you.
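To make the "inputs plus constraints, then push a button" idea concrete, here is a toy sketch of the structure of the problem. The course, teacher, room and constraint names are hypothetical, and this brute-force search is only an illustration of the shape of the math problem, not Timely's actual engine, which Paymon describes as a mixed-integer optimization at far larger scale.

```python
from itertools import product

# Toy scheduling instance (hypothetical data, for illustration only).
SECTIONS = ["Algebra I", "Biology", "English 9"]
TEACHERS = {"Algebra I": "Ms. Lopez", "Biology": "Mr. Chen", "English 9": "Ms. Lopez"}
PERIODS = [1, 2, 3]
ROOMS = ["101", "102"]

def feasible(assignment):
    """assignment maps section -> (period, room); check all hard constraints."""
    for a, b in product(SECTIONS, SECTIONS):
        if a >= b:  # visit each unordered pair once
            continue
        (pa, ra), (pb, rb) = assignment[a], assignment[b]
        # A teacher can't teach two sections in the same period.
        if TEACHERS[a] == TEACHERS[b] and pa == pb:
            return False
        # A room holds at most one section per period.
        if ra == rb and pa == pb:
            return False
    # Sample fixed constraint: Ms. Lopez needs period 3 free as a prep.
    if any(TEACHERS[s] == "Ms. Lopez" and assignment[s][0] == 3 for s in SECTIONS):
        return False
    return True

def solve():
    """Exhaustively try every (period, room) slot for every section."""
    slots = list(product(PERIODS, ROOMS))
    for combo in product(slots, repeat=len(SECTIONS)):
        assignment = dict(zip(SECTIONS, combo))
        if feasible(assignment):
            return assignment
    return None  # over-constrained: no schedule exists

schedule = solve()
```

A real engine replaces the exhaustive loop with mixed-integer linear programming or constraint-programming techniques, which is what makes it possible to satisfy thousands of student course requests in minutes rather than hours.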

Diane Tavenner: And I’m assuming it’s also pretty quickly Paymon, because I remember, oh my gosh, back in the day, I mean this would have been in the like early, early 2000s. I mean there was like a, I had a computer program that did this, but literally I could only run it a few times because it would take, you know, sometimes 24 hours literally to run it. I would have to put my stuff in there and then hit a button and then go away for 24 hours and cross my fingers that something would come back. I’ve also done the Post-it Notes on the board too. So like, I don’t, if you haven’t done this, you don’t appreciate how insane this process really is. And oh, by the way, everyone’s mad at you when you’re done because you never did it right.

Paymon Rouhanifard: Yeah, you can’t, you can’t please everyone with the master schedule. And yeah, I mean we obviously like to track a lot of data, outcomes oriented organization. And what we see is on average, folks are spending hundreds of hours, hundreds of hours building that master schedule. And so, you know, there’s a process in terms of onboarding, ingesting data, setting those constraints, and that takes a little bit of time. And then you push the button. And on average we’re talking about, sometimes the schedule is built in 30 minutes, sometimes it takes a couple of hours.

Diane Tavenner: Yeah.

Paymon Rouhanifard: And in the grand scheme of things, you're saving hundreds of hours. And so at the school level, it really does create that sort of time efficiency.

Michael Horn: So I want you to double-click a little bit more on why this wasn't possible previously, because you mentioned Google Sheets, and Diane mentioned the software program that would take 24 hours to run. We know that there have been a few startups in the master schedule space, you know, maybe a decade ago; I think there were a couple that got funded and stuff like that. What is different about this moment where you could use this AI optimization that wasn't possible, say, five or 10 years earlier, that you're able to take a process that's hundreds of hours down to 30 minutes of output and then, I imagine, some iteration?

Improving Clunky School Scheduling Tools

Paymon Rouhanifard: Well, I would say there are two things happening here. Certainly the technology, as we all know, has gotten better over time, and even in the last two to three years, significantly so. But I would also add that we are a focused solution. When you think about the status quo and in what ways we're disrupting people: for the most part they're using these clunky tools because the solution that they purchase to solve their master schedule is the student information system. For the student information system, the scheduling module is one of many, many different things it does. And so if you talk to any superintendent, assistant superintendent or head of a charter school about their student information system, usually they tell you it's clunky, it's hard to use, it's a necessary evil, it's the repository of data, it's the source of truth. And then it has an attendance tracker and a grade book tool and a master scheduler. And what we've learned, we've learned through lived experience: I'm a former superintendent, and my co-founder, who's our chief technology officer, was a teacher in Boston Public Schools.

Pretty much everyone on our team has had school-based experience. We know that the status quo has not allowed folks to build schedules that, one, are easy to build and, two, are strategic and connect at the systems level. And so it's about creating a dedicated systemic solution that frankly could have been built sooner. But now, with better technology and a more dedicated approach to solving the problem, I think it's allowed us to gain some traction.

Michael Horn: It’s super interesting. I’d love to hear some stories about districts and charters and how they’re taking advantage of this, how they are allocating resources differently, perhaps to better optimize the use of time and space and the impact you’re seeing and numbers like what you know, how many schools are you serving and what are the sorts of stories that show how they can now rethink use of time, space, resources across the school when they get to play with the master schedule in a way that they hadn’t before.

Paymon Rouhanifard: I’ll start by just saying that when I think about the moment we’re in with AI and connecting it to priority moments of innovation and sort of mass adoption of technology. So I’m thinking about certainly post Covid and adoption of technology across schools in a significant way, the personalized learning movement before that. What you see is a lot of different solutions entering the marketplace. And I would argue that most of those solutions, and this is not a critique, but most of those solutions are at the individual level. They’re used by classroom teachers, used by students. Rarely do they connect across all schools in a systemic way. Rarely do they connect to the central office in a systemic way. And sometimes and oftentimes I should say that is the nature of innovation.

You need to have a very dedicated point solution and really figure that out, in the same way that we started. I think what makes scheduling unique is that it's not just about the painkiller at the school level and helping your AP and your counselor basically get their summer back and not have to be banging their heads against the wall; it's that the schedule should reflect your fundamental priorities as a school district. So when you zoom all the way out, 80 to 85% of your budget is your personnel. And the schedule governs how your personnel are interacting with students. And that fundamentally reflects the student and teacher experience, your academics, your budget and your staffing priorities. And so the schedule before Timely was always this black box that was created on a Magna Tile board in one school, in a Google Sheet in another school, in an Excel spreadsheet in your third school, and so on and so forth. And then they'd use the student information system to kind of do the last mile and put it in and call it a day. But never did the central office get the opportunity to connect those dots and to think about district-wide academic priorities.

What are our staffing and budget priorities, and how can we reflect that in the schedule that, again, governs 80 to 85% of your budget? And so that's, I think, what makes Timely really unique in this moment where we have a lot of point solutions that are serving individuals. In terms of where we are as a company, Michael, we started with a really small pilot serving a handful of schools three school years ago. The following year we served about 80. Last year we closed around 300. Right now we're up to a little over 400 schools across 17 states. So we're still a young organization, but we've seen a lot of momentum and we're really grateful for that.

Michael Horn: But I know you've got a couple of great case studies. Maybe just give us a couple examples of how schools have used this to allocate resources very differently, or things they were surprised by once they looked at it through your tool and all of a sudden said, holy cow, how can we change this?

Paymon Rouhanifard: I’ll give you two examples, one district and one charter. We worked with a district in West Texas, Lubbock Independent School District, which has about 25,000 students. And like many other urban and rural school districts, it has seen declining enrollment as their special education population and emerging bilingual population has increased in terms of a percentage of the total enrollment. So one way to think about that is overall budget declined, but the needs of students has increased. And so doing more with less is a very common refrain in district lands across the country. And so what Lubbock did, across 14 middle and high schools, through implementing Timely and building a scheduling process alongside us, they identified 37 vacant positions, teaching positions that they were planning to hire for, but realized they didn’t need to hire for them. And the reason for that is they identified staffing inefficiencies through the master schedule. And by the way, I felt this acutely when I was a superintendent, where I walk into one of our high schools and I walk by a class with six students and another one with 33 students.

A lot of variance, a lot of inefficiencies, because that schedule is so hard to build, and you skip a lot of those steps because those steps are just so hard and complicated. And so what Lubbock did was eliminate those 37 vacant positions, and three things are really important to call out. One, the average class size target was the same as the year before. Two, they didn't eliminate any course offerings, so student choice was not impacted. And three, no teachers were impacted, because these were vacancies. So strict inefficiencies that led to bottom-line savings. And they took those bottom-line savings and reinvested them into new academic priorities.

37 positions in West Texas dollars is about $2.2 million. On the east coast and west coast, it’d probably be close to $4 million. So really meaningful savings. The second example, charter management organization, Noble Schools in Chicago. Seventeen campuses, largest charter management organization in the city of Chicago. They’re solving a different problem. They felt that their staffing model was tight enough, resource allocation was less of a priority for them, but they needed to solve that pain point at the school level.
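For a back-of-the-envelope sense of the Lubbock figures above, a quick arithmetic check (my illustration, not data from Lubbock or Timely) shows what per-position cost those numbers imply:

```python
# Rough check of the savings math Paymon cites (illustrative only).
positions = 37
west_texas_savings = 2_200_000  # dollars
east_west_coast_savings = 4_000_000  # dollars, his coastal estimate

# Implied average fully loaded cost per vacant position.
avg_cost_west_texas = west_texas_savings / positions
avg_cost_coastal = east_west_coast_savings / positions

print(f"West Texas: ~${avg_cost_west_texas:,.0f} per position")  # roughly $59,459
print(f"Coasts:     ~${avg_cost_coastal:,.0f} per position")     # roughly $108,108
```

The implied averages, about $59k per position in West Texas versus about $108k on the coasts, are consistent with the regional teacher salary-plus-benefits gap he's gesturing at.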

And in particular, they had a big challenge with directors of operations being trained and supported, because there was a lot of burnout. It's a really hard job. Directors of operations for charters tend to be the equivalent of an assistant principal without academic responsibilities, so they're in charge of master scheduling and a whole array of other operational tasks. And so for them, they had a lot of new schedulers, new directors of operations, and this allowed them to mitigate that attrition risk and to create a more sustainable role. And I think what was really cool: 11 of the 17 schools had a new director of operations, and those 11 gave us a perfect 10 out of 10 NPS. So making a job easier, creating greater productivity, and certainly still giving Noble the opportunity to think about resource allocation more strategically, although that just wasn't as much of a priority for them.

Master Schedule as Innovation

Diane Tavenner: I love those examples because they feel very, very familiar to me. And I think anyone who's been in that role has had these experiences and would recognize what a big deal it is, what a gift you're giving. And I think in this moment in time where everyone's kind of enamored with the tech, they forget how hard it is to literally just run schools every day. This massive, complicated operational challenge. And like you said, the master schedule is an expression of your values and what you care about, in so many ways. And so I think what you're describing, and correct me if I'm wrong, Michael, because this is your area, is you really built a sustaining innovation. I mean, this is an innovation for how we do the most important thing that controls what all these people are going to do for a whole year, all day, every day. And so that's one framework we talk about a lot.

Another thing, a newer one Michael and I are kind of playing with, is this idea that most of our, well, I would say all of our, schools in some way, shape or form fit this original kind of industrial model of schooling. And we've talked for a long time about how to break out of that industrial model. I think some of us are hopeful that with the advent of AGI, we will kind of be able to invent that post-industrial model, but I don't think we've seen it yet. I'm wondering, how do you think, or do you think, about that kind of post-industrial model, Paymon? Like, you know, I think in that new model we probably don't conflate time with credit, and so we're much more probably in a competency-based progression. Does Timely move in that direction, take us there? Like, how do you think about the product and its evolution and where it might take us?

Michael Horn: And Paymon, while you gear up for that, I'll just geek out for one second, because I think it's interesting. It's a sustaining innovation for a school, but you're clearly disrupting the landscape of how we schedule today. So it's one of those things, right, where you're doing both depending on the paradigm or framework you're looking at it through, which is fascinating.

Paymon Rouhanifard: Diane, I love the question, and coming from you I'm always a little circumspect, because you study this, and obviously so do you, Michael. So I'm not sure if I'm going to have anything new to offer that you haven't already thought through. But I will say what gets me really excited about the work that we do is that ultimately we are a tool that can operationalize the hopes and dreams of a district, of a charter management organization, of an independent school. We don't have a view as to what their delivery model should look like. We don't have a view into what their strategic plan should be. If they ask us for advice, we'll certainly give it to them. But we want to operationalize those hopes and dreams, and to the extent that they're innovating, we certainly have a lot of partners that are pushing the envelope. I will say, and we can come back to this or we can leave it alone: given the moment we're in, not just with AI, but just where districts are, with declining enrollment and a lot of fiscal pressure,

I can’t say I’m seeing as much innovation as we did pre Covid.

Michael Horn: That’s interesting.

Paymon Rouhanifard: You know, having said that, we have partners that are trying to rethink the teaching profession and are trying to give a full day of professional development for teachers, which is not an easy thing to do in the construct of a traditional school district. And we're a tool that helps operationalize that. We have partners that are thinking about, oh gosh, first-year teachers: we see so much attrition, and it's really expensive and it's really disruptive. How can we, in the master schedule, build in a set of professional development supports: a mentor teacher who has a prep that coincides with the first-year teacher's class so each can observe the other, and then common planning time that is very intentional for that first year? These things are really hard to do using sticky notes and Google Sheets. And so we're helping operationalize where that innovation is happening.

And maybe those are more modest examples of innovation than, you know, competency-based progressions and kind of eliminating seat time. But ultimately Timely is vision-agnostic, strategy-agnostic. And that gets us really excited.

Diane Tavenner: Me too. Because I think that when people build something with a complete point of view, then you actually close down innovation. Right? You don't address the problems that exist. You don't let people really imagine what's possible and support them in that.

I can’t resist. I got to go back. Why do you think there’s not as much in it? Why are you not seeing as much innovation, what’s happening on the ground? And do you feel like it’s shifting at all?

Paymon Rouhanifard: I’m gonna come back to why I think it’s shifting. I just think in a lot of states. Well, across all states, we all know that the overall enrollment across all school types has been declining over the last five to seven years. And that’s a combination for a lot of factors, but the declining birth rate being a big one, of course. And so that leads to smaller budgets. And in urban and rural quarters in particular, you see a commensurate increase of the percentage of students with an IEP and percentage of students who require multilingual support. And so that fundamentally shifts the mindset of district leaders.

Diane Tavenner: Yeah.

Navigating Fiscal Pressures in Education

Paymon Rouhanifard: And it makes it hard to innovate when you're trying to do more with less, when you're at the base of Maslow's hierarchy and you're just trying to make ends meet in a lot of ways. And so what we see across the country is: how can we address this fiscal pressure while doing the least harm possible? And that certainly opens the door for Timely to be of real support, and we're incredibly proud of that. At the same time, when priority number one is avoiding teacher layoffs and making sure we deliver resources to the students who need them the most, it's kind of hard to get to the next series of priorities. And I think that's just the moment we're in until things start to level out.

What is exacerbating this, in a lot of states, and you all, I'm sure, know this frankly better than I do, is the expansion of vouchers and ESAs: additional fiscal pressures on top of the macro shifts that are happening. And so whether you're in Texas or Louisiana or Florida or Arizona, there are a lot of states passing these. They're innovations in their own right at the state level, but they create some fiscal pressure on districts, and I think that again makes innovation hard.

Diane Tavenner: I agree with you, certainly in the existing system, which, yeah, makes me sad. Well…

Paymon Rouhanifard: I’m sorry, I’m sorry I took it there.

Michael Horn: No, let’s switch, let’s switch gears because 鈥 

Diane Tavenner: I don’t know about you, but I, I just spent last week in several schools actually on the east coast, which is, you know, we’ve often talked about this East Coast, West Coast sort of difference. It’s always fun to be, be on the East Coast and notice the similarities and differences. And I’m feeling a little bit more optimistic than I have for the last five years. It has been rough, rough, rough times, as you know, and it does feel like there’s a little bit more, you know, sort of energy back in things. But, that’s totally anecdotal. So what are you optimistic about? You know, what do you see as possible? You know, where, where is the hope going forward?

Paymon Rouhanifard: Well, look, in spite of those macro conditions, we are really fortunate to partner with some incredible organizations who are figuring out how to navigate these conditions. And I think both things can be true: it's a tougher environment to innovate, and, what's that old saying, necessity is the mother of innovation? I think we're seeing a lot of interesting work happening across different parts of the country, and we're serving schools coast to coast. And in the moment we're in with AI, we've seen super interesting solutions, ones we don't necessarily partner with, inside of districts. So whether it's folks pushing foundational skills and literacy and building that into the master schedule through block instruction, or organizations like Amira and Ello better serving students whether in school or at home, we're seeing a lot on those fronts. And we're seeing, I would say, districts that are thinking much more long-term in nature, which frankly is refreshing. I don't have the data to back this up, but superintendents tend to churn pretty quickly, and I've seen a bit more longevity in those roles. And perhaps that's because the kind of traditional education reform playbook isn't being implemented as frequently.

But I think what that means is that folks are kind of more playing the long game and thinking much more intentionally about resource allocation, strategy, academic priorities. So there’s a lot to be hopeful for and we’re delighted to be working with a lot of different district and charter partners in spite of these tough conditions.

Mitigating AI Risks

Michael Horn: Continuity and longevity definitely allow you to do things that you wouldn't otherwise do if you're sort of thinking, oh gee, two years and a pile of dust sort of thing. But let me ask this question. You mentioned a couple AI tools in there as well that give you reasons for optimism. I'm curious about sort of the same premise, but around what you're seeing: the conversation is very concerned about AI and how it will have negative impacts. Where do you think that conversation is misplaced, and where do you think it's spot on, where we ought to be thinking about AI as a danger, if you will, to education?

Paymon Rouhanifard: Well, look, in terms of teacher anxiety, as far as the teachers I've spoken to who worry AI is going to take their jobs and fundamentally change the profession in ways that may not be comfortable, to me, I think that's misplaced. And, you know, I see solutions like Course Mojo, which is a dramatic boon to classroom facilitation and can really empower the teacher to better deliver instruction and to better support students' holistic needs. So that's where my head naturally goes: teachers using AI as a copilot and fundamentally being able to deliver instruction in a more effective manner, to differentiate it and really let the content delivery happen in a much more seamless way that puts less pressure on the teacher. The flip side of that, the other part of your question, Michael, is that we need to ensure there's coherence inside of classrooms, across classrooms and across systems. And I think that's always the challenge with education technology, going back to earlier waves of adoption of tools. Again, a lot of different point solutions; point solutions are necessary.

Timely is an example of a point solution that has the systemic connection. But when you're using a lot of disparate point solutions, you have to ensure there's integration and intentionality in bringing those solutions together. And so I think a lot about core curriculum: do these supplemental tools actually holistically and intentionally integrate with core curriculum, for example? And I think that's still a real risk that we're facing.

Diane Tavenner: Well, and just, I can’t, I have to just ask this because I really worry about the technical capabilities of schools and school districts to do the integration of all of these point systems. You pointed out, rightly, Paymon, that you know, the big giant system enterprise system that supposedly does everything, does most things terribly for us and doesn’t meet our needs. And these thoughtful point solutions are more and more especially developed by educators who really understand it much better. But do I have the skill set and the people in a school or a district to integrate all of those things? How, how are you finding the folks you’re working with and their ability to do that?

Paymon Rouhanifard: I think they’re struggling with this and it’s rare to find a district that has intentionally and thoughtfully integrated their ERP with their SIS with their HR data and so on and so forth. And, and frankly what you see is they’re, they’re kind of constantly switching out those systems and bringing in new providers that might be marginally better, but frankly I would argue are kind of do the same thing as before. And so I think it’s a real issue now with AI agents. Could data integration be much more productive and efficient in the future? I’m hopeful. It’s still a little bit early to say, but the guts of the system where those data sets come together to inform decision makers and to allow for these systems level changes, that’s still an ongoing challenge, but I think it just starts with the mindset of really optimizing for, and solving for coherence and thinking about core curriculum and supplemental solutions in a very intentional manner and, and on a parallel track trying to bring those actual data systems together. I’ve seen districts do this. It takes playing the long game and going back to Michael’s point, like maybe we’re not rocking the boat as much as we were before with standard based reform, which is like its own thing and comes with trade offs. But if there’s greater longevity for district leaders, this is an example of something they can actually take on to really bring those systems together and to do the work of building them.

Diane Tavenner: Awesome. You should interrupt me, Michael, because I could talk to Paymon all day.

Michael Horn: I was gonna say. Well, no, I feel like we’re just starting to have a bunch of revelations here, but this has been great. Should we switch to our final segment, Diane?

Diane Tavenner: Yeah, we’ll have to talk.

Michael Horn: Have you back on. That’s the answer.

Paymon Rouhanifard: All right. That’d be fun.

Diane Tavenner: Well, as you know, basically every episode, Michael and I try to turn away from work a little bit and share what we're reading, watching, and listening to. I'm going to fail miserably at that today, and we'd love to invite you to do the same.

Paymon Rouhanifard: So I’m reading two things. I just started reading them and I, and I have to admit, like early stage, kind of founder mode. I’m not making as much time for leisurely reading as I’d like to be, but I guess one book is work related and probably doesn’t even fit the question, but 鈥淧redictable Revenue鈥 and it kind of shows you like, in terms of startup mode. I mean, I’m at the foundation business hierarchy there too. The other book I’m reading is 鈥淭he Lion Women of Tehran,鈥 which is about friendship, two women, but it’s set in Iran, which is where I was born. And in the context of it being from the 1950s and, and into the 80s where there was a lot of political change happening in Iran and our family lived through a lot of that. And so in the 50s, there was a big political tug of war where they took control of oil away from Great Britain. Really sort of charismatic prime minister who led that, which led to an even greater U.S. involvement and then, and then the Islamic revolution in 79. So you kind of understand people’s lives in the story about this friendship as a lot of dramatic changes happening in the country.

Michael Horn: Fun fact, Diane, before you go, the author of that book lives in Lexington, Massachusetts, is that right?

Paymon Rouhanifard: Yeah. Yeah. Wow.

Diane Tavenner: Wow. Amazing. Wow. Incredible.

Michael Horn: Over to you, Diane.

Diane Tavenner: Well, thanks for sharing those, Paymon. I wrote them down; all of your recs are always good. So here's an interesting one. I'm going to admit I'm not technically reading this book, but it's being read in my house and it's constantly being discussed at family dinner night. It's called "The Scaling Era: An Oral History of AI, 2019-2025," by Dwarkesh Patel with Gavin Leech. For the insiders in the AI world, Dwarkesh has a podcast that they sort of all listen to. And this is this fascinating book; it's kind of beautiful and weird and funky.

It's like the recordings from the podcast, but reorganized: part AI encyclopedia and notes guide, part story and oral history. It's really interesting. You know me, I don't really read nonfiction cover to cover, so it's spots and conversations. Pairing that with: I did just finish the last episode of the "Last Invention" podcast, which I've already promoted here, but I'll say it again because I was only two episodes in when I first mentioned it. I think it's totally worth it for those who haven't gone in yet, to understand the moment of time we're living in and what's going on. I think it's really well done and valuable and great journalism, and, yeah, highly recommend.

Michael Horn: And Diane, when you're not, you know, working on , we'll have you take our podcast of seven seasons or whatever and create a book out of it as well, with all sorts of crazy excerpts. I also failed on the not-related-to-work front. I guess I alluded to this on an earlier show as well, so I'm sort of exactly where you are on this, Diane. But I finished up the draft manuscript from Julie Young, the founder of the Florida Virtual School, which is part memoir and part startup story about the creation of Florida Virtual School, and then her work at ASU Prep as well. And I'll say it was quite an energizing read. I know she's going to have more edits before the book actually is out, but I'm excited for it to be out, because I think for people who read it, it'll be a bit of a breath of fresh air, and it'll cause some grappling with some of the central messages and conclusions that she has. But I think it'll be really good for the field to go back, if you will, to the past a little bit and think about a thoughtful use of technology in education and how it looks a little bit different from some of our assumptions around that today.

So that's been on my mind. And I will just say, Paymon, this has been a hugely stimulating conversation. I have a couple of pages of notes of things that I want to follow up on. So huge thanks for joining us, and huge thanks for the work you're doing at Timely. And for all of you joining us and listening: as always, keep the questions coming, keep the comments coming. Diane and I have been energized by it, and it has led to us choosing our guests from your questions directly and thinking a lot about the comments that you've made to us. So huge thanks, as always.

And we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

Reflections on Whether AI is Actually Changing Schools – and Where (Thu, 05 Feb 2026)

Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic – and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this episode, Michael Horn and Diane Tavenner step away from their interviews to reflect one-on-one at the midpoint of their season on artificial intelligence in education. Diving into its evolving role in the classroom, they ask whether AI is truly transforming the system or simply being layered onto outdated structures. They explore a framework of three school models and discuss the challenges of meaningful innovation amid existing accountability systems and education policies. From these models, Horn and Tavenner analyze how one might expect transformational change to occur in K-12 schooling – through traditional schools incrementally changing and evolving over time or, as they argue, through fundamental migration away from the existing system.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good that you came to Boston and in the freezing cold weather, no less, to hang out a little bit with me here and have a conversation.

Diane Tavenner: It’s really fun to be in person. We haven’t done this for a long time and the timing worked out perfectly because we are in the midst of this super interesting season where we’re exploring AI and education. And we’ve had several touch points where I’m like, oh, my gosh, there’s so many things that are coming up for me that I want to talk with you about. And so we get to have a conversation, the two of us, this morning.

Michael Horn: I am looking forward to it. And I’m sure you’re going to say things. I’m going to say, wait a minute, I think I know what you mean, but double click on that. Tell us more. And so I’m excited to go deep on wherever you want to go because the conversations, they’ve both been illuminating, but they brought up more questions for me, as seems to be constantly the case with this topic.

AI Disrupting Education Processes

Diane Tavenner: Indeed. Indeed. Okay, well, let's dive in. I had the great pleasure of spending time with you in your class yesterday. Thank you again, so much fun. And one of the topics that came up, which I think turned out to be more provocative than I anticipated, was this idea I raised: a phrase I read almost constantly right now and hear everywhere is "AI is changing education."

And I don't believe that phrase is true or accurate. In fact, I believe AI is not changing education. And so I want to dig into that idea a little bit. I would argue that it's creating a lot of problems for folks in education who are in the traditional model of schools. But I don't think it's changing education yet. What do you think about that?

Michael Horn: I largely agree. I've been thinking about this, but on a different wavelength, because I've been seeing, over X and from the various pundits, a lot of conversation right now about banning cell phones in schools, as you know. And there's a lot of conversation about not just cell phones but screens, period, you know, Google Classroom, all the rest, because it creates access to all these other things; ban it all, sort of thing. And then you see the occasional commentator saying, did anyone ever believe otherwise at this point?

Diane Tavenner: Right.

Michael Horn: And I had this moment because I think I’m seen often as the tech guy in education. But if you read Disrupting Class, what we actually say is that just layering tech over the existing system is not going to do anything.

Diane Tavenner: Right. I think we’re going to get to that idea in a moment.

Michael Horn: So I guess my instinct is, I agree with you. I think we're layering a lot of AI over existing processes, and it's breaking, frankly, a lot of education. So the one push I might have on you is that it may be creating the impetus to ask some bigger questions. And I'm not going down the road of "just because the world is AI, therefore this should be AI," but legitimately, you know, we have current assignments that you can now hack through AI. That's called cheating. And all of a sudden everyone goes into a tailspin.

Well, let's ask some questions about the assignments and the work itself, is sort of my take from that. So I think it might be an interesting push. But I agree: most of what AI is doing right now is layering over existing processes. Some of them, I suspect, it's making more efficient, great. Some of them, I think, it's exacerbating problems that already existed. Is that what you have in mind, or.

Diane Tavenner: That is what I have in mind. And you brought up one of the biggest conversations, which is about cheating. Right now we're seeing all these distortions and strange behaviors, and blue books returning. And I'm sure the company that makes those is happy about that. But, you know, they might be, they're.

Michael Horn: Still around or they have to resuscitate. We should look that up.

Diane Tavenner: Yeah. When I think about it, what's happening with this idea is that everyone knows that they're supposed to have an AI policy and strategy now, but most people don't. And so this is confusing. AI in education right now is very kind of one-off: individual people pulling it in. It's not coherent; it's not a strategy. We see it in lesson planning and assignment making, which is related to, to your point, why are we even teaching what we're teaching? And if you can cheat on it, then what are we trying to do? And then it goes down the line to a lot of fear that I think it's injecting, everything from these very high-profile cases we're seeing of suicide potentially induced by the AI, to big, widespread data privacy concerns. All of that to say, I'm hopeful. I believe the technology itself, if deployed well, can actually change education. But I think humans are going to have to do that redesign and that deployment in a really strategic, thoughtful way for it to change.

Otherwise, I just think it’s plaguing us with problems.

Michael Horn: Yeah, I think that's right. And systems, structures, models matter, and processes, you know; they're sort of automating or playing off the existing ones. We may have a small disagreement on one thing; I'm curious about this. We don't have many disagreements, so I'm gonna lean in if we do. The blue book comment aside, I can imagine that there are things we want to do in the classroom that have no AI at all involved with them, because some foundational knowledge or skill that a student can hack using AI outside of the classroom is something that they actually should still work on in an analog way, to create automaticity.

Diane Tavenner: OK.

Michael Horn: I don't know if that's blue books or what form factor, I'll take the point there. But I suspect if we break things down, there are still some foundational things we would want students to have to wrestle with that might not involve AI and be offline, if that makes sense. And then my take would be: okay, but don't stop there. Now, what are we going to use AI to create, as opposed to consume with AI?

Diane Tavenner: I think that's right. I really loved the conversation we just had with Laurence, where he brought up some really interesting examples, to your point, of young people literally working together and in dialogue, and then he talked about how AI could be supportive and enhance that. But to your point, the actual skill of having that conversation with another human is not about AI, so I completely agree with that. My concern is when people are taking very old assignments and.

Michael Horn: And just dusting them off without any thought. Yeah. And I also think this changes the older you go. I could be wrong about this, and this is, I'm sure, overly simplistic, but I think for a younger student, and, you know, I've got kiddos still in elementary school, so I'm still thinking a lot about that, that part of the landscape looks different than it does for the older student in high school and college, where it's perhaps more problematic when you're just dusting off that assignment for that student.

Diane Tavenner: Right.

Michael Horn: But I do think, you know, developing number sense and automaticity with those things offline, before you introduce the calculator and AI and so forth, makes a heck of a lot of sense for a younger student. And so, as always with these conversations in education, I think we make a statement and think it applies everywhere, and there is nuance there.

Clarifying AI’s Role in Education

Diane Tavenner: That's exactly where I'd like to go next, because I think the dialogue around AI and education is complicated right now. I hear a lot of people talking past each other and over each other, because I think we're using these very broad, sweeping, general terms, for example, "AI in education." I was with a really great group of people a couple weeks ago, and fortunately some really smart people noticed this talking past and talking over and called it out. And literally we went around the room and asked, what do you mean by AI in education? Within seconds we surfaced: oh, well, using LLMs like GPT and Claude and Gemini for instructional or operational support; using AI-powered education apps like Khanmigo, ClassDojo, MagicSchool; AI policy development; AI literacy lessons for students. People are literally using the phrases "AI strategy" and "AI in education" to mean all those things and more. And I'm finding that it's very complicated to try to have meaningful dialogue when there isn't a definition right now, when we don't have specificity yet.

I mean, I think some people don’t even know what AI is.

Michael Horn: Yeah, you’re probably right.

Diane Tavenner: Yeah, yeah.

Michael Horn: And it's probably extremely fearful in those quarters. And the social media analogy is rampant right now as a result, probably because we're not defining or breaking it down. I mean, do you really not want AI to help an administrator better communicate or schedule? Really? That seems crazy, for example, on that end of it.

Diane Tavenner: And my sense is that what jumps to most people's minds when they think about AI in education, and we've sort of railed against this from the beginning, is literally how a student is engaging with it, either in the classroom or at home. And most people have in their mind some version of some chatbot, generally speaking, which is incredibly narrow and limited, I think. You just gave a good example: we could literally never bring it directly into the classroom with students, and there would still be a million different uses for it in just running something as complicated as a school and a school system. So, yeah, I guess this is just my plea for us collectively to start developing a more specific vocabulary, more intentionality about what we mean. Let's stop saying we're doing AI.

Oh my gosh, everyone’s doing AI. What does that mean? And being really specific about it. And I think for me, I just want to flag as we go through the rest of this season because we’re going to have some really interesting conversations next. I’m going to push us to be really specific about what people are literally doing with AI. What does that mean?

Michael Horn: Yeah, and the conversation with Laurence, I think, opened us up to that, because it started to get into very specific use cases. It occurs to me this problem has always existed in education since I've been in the field. Right? We talk past each other. I remember, you know, there are project-based learning adherents, to like an extreme degree, and they'll say everything ought to be learned through projects. And then you say, well, okay, the kid learning to read, though, in first grade? They're like, oh no, no, no, that kid should get phonics and direct instruction and blah, blah, blah. And you're like, okay, so there's nuance, but we have to break apart novice versus expert.

What's the topic? What's the goal? Right? And skill versus knowledge, as you know, gets conflated all the time. We don't have precision. So I think it's a good plea you're making, which is: let's be more specific. What's the objective? What's the learner coming in with? If that's the level at which we're talking.

Diane Tavenner: OK, all right.

Michael Horn: Where are we going next?

Diane Tavenner: To one of my favorite topics, which is school models.

Michael Horn: Okay. Yep.

Diane Tavenner: So I've been reflecting on a number of conversations I've been having, a bunch of stuff I've been reading, dialogue that I know is happening. There's a variety of people trying to think about the future and what it looks like with AI. I think none of these ideas are set yet; they're all kind of rough, but they're starting to fall into a pattern where people are talking about three different models, if you will, of schools. And I want to come back to what a model is in a moment.

But there's this idea. I think generally people agree that we have an industrial-model school at this point, and have had for quite a long time. We've talked about this ad nauseam. So let's call Model 1 the current industrial model. With the emergence of AI, Model 1 stays an industrial model, but AI gets used in some of the ways we just talked about. You keep all your existing structures of grade levels and schedules and teaching roles, but you have AI-enabled tools, where you're using them to help grade student work, or you're using them to lesson plan and instructionally plan, and you're doing some adaptive practice and feedback.

You know, I think that's the stuff people are probably more familiar with, because they see it. So that's Model 1, still in the industrial world. I'm going to jump to Model 3 before I talk about 2, because 2 confuses me a little bit. Model 3, let's call that native AI education. I think most people I know would argue that this has not been invented yet. It doesn't exist yet as a model.

Michael Horn: Do we know what it means?

Diane Tavenner: I'm not sure I agree with the way people have started to describe it. So here's where I am on this one: I don't think we know what it looks like yet. I think we're failing in our imagination right now about what's possible. I think it's a moment to go into the proverbial garage and do some real designing. But let's call that the post-industrial model. I don't like to call it the AI model, because of the definitional problems we just discussed, but let's just call it whatever the next school model, the full model, would be.

Michael Horn: OK.

Diane Tavenner: So then there's Model 2, and this one gets kind of squeezed in the middle. I think some people are calling it AI-integrated education. Basically, the emerging definition I've heard is that it's where you modify selected structures where the benefits justify the disruption. So, for example, you have much more interdisciplinary curriculum, you have competency-based progression in certain places, you have flexibility in existing schedules, in blocks or things like that. You might start seeing some time out of the building. But you're still, I would argue, existing in the industrial-model kind of box, if you will, and using an integrated AI approach to kind of hack some of those things.

Okay, so let me pause there before I start asking my question, and see if those resonate, if you've heard about them.

Michael Horn: Yeah, no, I haven't thought about it this way, so I'm noodling as you're saying it; this is real time. I guess I'm curious about models like a Montessori, like a classical education or the new versions of classical education we're seeing in microschools. Or, you know, I don't think Waldorf fits into your typology, but where would you slot it? Those are models too.

Diane Tavenner: They are.

Michael Horn: How do they slot into the schematic?

Diane Tavenner: Yeah. Well, let's just take Montessori as an example. In some ways it's still industrial. Most Montessori schools still exist Monday through Friday, kind of between 8 and 3-ish. They still have a teacher, you know, one-to-kind-of-many classes. They've sort of released or relaxed age-grade bands, although I think society kind of imposes those on them. So, you know, there's some sort of gravitational pull.

Michael Horn: I mean, you know my frustrations.

Diane Tavenner: I do know your frustrations. So I still think Montessori, maybe Montessori would be kind of a two.

Competency-Based Learning

Michael Horn: That's what I was wondering. It's not AI-enabled, but it uses the technology of the 1910s, or whatever it was, to have broken out of certain structures. And so it's a very competency-based math sequence, very competency-based on the learning-to-read part of it, and probably less so on everything else, is your point. And there's still some sort of "you were born in the year of the Scorpion, and therefore you're going to learn this on this date with everyone else" element to it, I think is what you're saying.

Diane Tavenner: I think that's right. And one of the reasons I wanted to talk to you about this framing is that I've been trying to think about what sits in the Model 2 category. I mean, it feels very easy for me to identify almost every school as a Model 1, and many of them are starting to bring in these AI tools, if you will.

Michael Horn: Yeah.

Diane Tavenner: But they're still clearly industrial models. It's pretty easy for me to say I don't think we've seen a Model 3 yet with the infusion of AI. And then I think about, for example, what we did at Summit and Summit Learning.

Michael Horn: Yeah.

Diane Tavenner: I think at the high school level that might be a model 2 without AI yet.

Michael Horn: Right.

Diane Tavenner: Where, again, we were sort of pushing the boundaries of that industrial framework of a model to try to reimagine or re-engineer portions of what was happening, with expeditions, for example, which kind of break the traditional five-period, six-period day but don't really break the calendar, if you will, or the 8-to-3 kind of situation. So what do you think about that?

Michael Horn: That's interesting. I know we could probably geek out all day and create a taxonomy, so I won't do that to our listeners. But I am thinking you've seen almost different shots on goal. So I think of Florida Virtual School as an example; I'm reading Julie Young's draft memoir right now, and I'm not sure I'm supposed to say this. It breaks certain elements of that, but it's still course-based.

Diane Tavenner: Right, right. There you go.

Michael Horn: So those two things are interesting. And then I start to wonder: everyone's talking about Alpha Schools. We're gonna have an episode on it, so stay tuned; maybe we don't get into it here. But things like that, where do they slot into your framework? Or I think about Acton Academy, which probably falls into 2, is my guess. So this is, I guess, what I'm trying to start to sort through as you frame this.

Diane Tavenner: It's why I wanted to bring it up today, because we are about to shift to start talking with people who are either trying to redesign whole models or portions of them. And I think it will be helpful for us, for me for sure, to have this kind of framing in mind.

Michael Horn: So you can pull it back and say: we're talking with an entrepreneur, okay, you're working in the Model 1 context. You're working in 2, in 3, maybe the frontier there.

Diane Tavenner: Exactly.

Michael Horn: OK.

AI Tools

Diane Tavenner: And I think there's a couple of reasons why this is important. The first is back to that talking past and over each other. One of the things I've noticed is there are a lot of people who are gravitating to the AI-enabled tools that will definitely improve Model 1, the industrial model, if you will. And they're very passionate about that. They have really strong arguments, like: there are kids in schools today who need things to be better, and so we should be deploying these tools as best we can to do that. Then there's a whole other group of people, smaller, who are obsessing about designing Model 3, a post-industrial model. I don't think anyone who's been listening will be confused about where my passions and interests lie.

So my attention definitely goes to that question; my energy is in that direction. And I really caught myself, because I can be dismissive of that first group, and I think that is really problematic for me to do. Well, here's my question.

Michael Horn: Yeah.

Diane Tavenner: Do you think, if those models are true in the way we've sort of laid them out, that the theory of action or change is that you progress from 1 to 2 to 3? Because some people believe that.

Michael Horn: I strongly don’t think so.

Diane Tavenner: I don’t either. Okay, good. Say more because you’re the expert.

Michael Horn: Yeah, well, so my energy is also in 3, as you know, and no one listening will be confused about that. But I think it is prudent, from a systems perspective, thinking about the country, that 80% of the dollars and energy are going into number one. From a sound-strategy perspective, that makes a ton of sense. Right? It's where most of the students are.

It’s like classic sustaining innovation. If I’m running a company and I see the new thing coming that I think is going to upset the apple cart, I don’t push stop on what we’re doing today.

Diane Tavenner: Right.

Michael Horn: I start to test and learn what we talked about on the fringes. And then like, I start to move things out there. Okay. So that’s where I go to the statement that I don’t see any cases where number one morphs into number three or we learn stuff from number three. And I had a guest in the class say, how do we pull it back into number one? I’ve never seen that work. You’ve never seen that number three replaces number one

Diane Tavenner: So then it has to be effectively designed from scratch, grown from scratch. It’s not, you know, evolving. No. Okay. Well, some people think it’s gonna.

Michael Horn: No, I know. And I think it's totally rational to be putting bets and have a portfolio strategy across all three buckets. And I think you can learn lessons between them, absolutely. I mean, we know a lot about cognitive science from number one. We also don't know a lot, I think. Take growth mindset, for example.

Right? My read of the literature is that it's incredibly powerful, and that if anything in the environment undermines the message of growth mindset, it pulls the kid back into the fixed-mindset view and undermines all of that intervention. And basically every structure in number one does that.

Diane Tavenner: Right.

Michael Horn: So we can have our lesson on growth mindset. I don't think that's the best way to do it, but we can have our lesson on growth mindset, and we might see a temporary bump on some sort of assessment. And then immediately you get the C grade in the class, and you've been labeled, because you can't take the feedback and do anything with it. You're not even reading the feedback. And you no longer think that way.

Diane Tavenner: Yeah, well, and this is the point of growth mindset not being permanent. You don't just either have one or not.

Michael Horn: Right.

Diane Tavenner: It's a continuous state that you're in, and you can fluctuate in and out of that state regularly. Okay, so. Well, that's an interesting conversation to have with folks who believe the theory of change is that progression, versus what we just discussed.

Michael Horn: And I guess stay with it one more second, because I remember when we came out with Disrupting Class, a lot of people would push us and say, well, we're talking about systems change; what are you talking about? And I think we were talking about systems change too. But my theory of systems change is system replacement.

Diane Tavenner: Well, there you go.

Michael Horn: And I think it’s really hard in the US for all the reasons we know. And one of the reasons I’m in some ways more optimistic than I have been is I actually see a path for that change, that replace or disruption of systems that I haven’t seen because.

Diane Tavenner: The technology is so.

Michael Horn: Well, and the ESA policies.

Diane Tavenner: Oh, and ESAs.

Customized Education Choices Rising

Michael Horn: Right. And so we see a level of entrepreneurship, of choice, and, I would argue now, a family increasingly, if you're in Arizona, Florida, Arkansas, wherever, it's not just the free public school or I pay money. It's like, oh, if I just default to the free public school, I'm actually forgoing $8,000 to $13,000 that I could be spending on my kid's education in a way that's customized for what they need and what they have shown interest in, et cetera. That's a very different decision set, where all of a sudden it's actually expensive to default to the free option.

Diane Tavenner: Well, and to your point, it might take a little bit of time, but it really changes people’s, you know, mindsets around everything.

Michael Horn: And I was shocked. I have to look deeper into this, but Ron Matus at Step Up For Students in Florida sent me a report they did. He said the number of learners in Florida who are now doing a la carte learning, so they don't have a primary school five days a week, it's a billion-dollar market going through that. And I was like, I have to sit with that.

Right. Still. And I haven’t fully digested it because that’s, that seems like a lot. But he, but it basically, if that’s true, over the course of a decade or so, whatever the choice landscape in Florida has been, people went from, okay, I have education, savings accounts, I choose a school.

Diane Tavenner: Right.

Michael Horn: To your point, with technology and a lot of entrepreneurship and a change in the landscape, to all of a sudden saying I can unbundle and do a whole set of things with this, that’s a, that’s faster than I would have expected.

Diane Tavenner: That is faster. Oh, I’d be so curious.

Michael Horn: I want to dig in all sorts of things now.

Diane Tavenner: Let’s do that at some point. Well, and what it suggests is that individual families are essentially crafting their own personal model. Now is it AI native?

Michael Horn: Probably not.

Diane Tavenner: Probably not yet. But I bet they’re starting to use some of, you know, the AI enabled tools as part of that. Yeah.

Michael Horn: And they’re probably making also some of these trade offs in terms of like when is it analog because they control the home environment. When is AI a tool to create something? They’re probably making a bunch of these nuanced choices on the ground that like you couldn’t dictate from a central planning curriculum standards perspective.

Diane Tavenner: Right. Although that might be a feature of whatever the new Model 3 is. I mean, my hope is that it is that it is personalized to that degree within the context.

Michael Horn: Yeah, great point.

Diane Tavenner: Yeah.

Michael Horn: And so now we’ve just blown both of our minds.

Diane Tavenner: I want to go back to Model 2 for a minute because I had this really fascinating conversation with your, you know, former colleague and collaborator Julia Freeland Fisher. And she said, huh, I wonder if this model two is akin to what happened when the steam powered ship was sort of invented and there was this period of time where the new steam powered ships had to be outfitted with sails because the new technology was so unreliable. And she suggested that maybe model two was that. And what the interesting point she made is she said those were the most expensive models because you had to have both technologies on them. And this hybrid version is really expensive. So I, what do you think of that?

Michael Horn: 100%. I agree. I, I hadn’t framed it immediately into that typology, but that’s almost every industry, when you see disruption, you see the old players take the new technology, right. Like there’s sort of a line, oh, they ignore the new technology. Not true. They layer it on the existing structure. Right. And the sailing ships are the perfect example.

I think the first steamships to navigate the US was like 1819 or something like. Or 1803 and then 1819, the first transatlantic ship, the USS Savannah. And they had sails and they had steam bolted on. And I think, I'm going to get the numbers wrong, but only like 80 hours out of the 600 or whatever it took to cross were powered by steam. Basically every time the wind went the wrong way, they fired it up and kept going. Right. And so it's a classic sustaining innovation on the old paradigm.

Diane Tavenner: OK. But it’s still. Those models do not get us to model 3.

Michael Horn: They don’t. Yeah. It’s, you know, the story is that it was a 100 year disruption.

Diane Tavenner: Yeah.

Michael Horn: Where still ultimately the steamship native companies, shipbuilders ultimately upended the sail ship. And it was around 1900 I think.

Diane Tavenner: And it’s a different model ship.

Michael Horn: It’s a completely different model. Right. You don’t have the same components. You can do things differently in terms of construction because you’re not outfitting around an aerodynamic sail. Right. Like a totally different set of things you can do. So.

Diane Tavenner: OK, I have a question. Now, you said you felt comfortable with the field sort of spending 80% of its resources on Model 1 improvements, leveraging AI. Is there a risk that we overinvest in Model 1 and undermine the emergence of Model 3 because we kind of keep this old industrial model going, breathe new life into it, and there isn't a sense of urgency around creating Model 3? Yeah.

Michael Horn: Two thoughts. Clay used to always say this. The best experts in a field, like you’re a very strange anomaly. The best, deepest experts in a field are almost always consumed with the toughest problems in, we’re going to call it Model 1 at the edge of the existing paradigm.

Diane Tavenner: Interesting.

Innovation Beyond Traditional Expertise

Michael Horn: And it’s these people who are almost less expert in some way or for some reason have taken their expertise and brought it out that invent the future. But like it’s very hard to persuade the people who are dealing with the hardest, most intractable problems in the first paradigm to be persuaded to design out there. It’s why I think like, you know, when you and I met for the first time and you actually liked Disrupting Class, that was like a bit of a revelation because like we couldn’t get all these people to sort of like actually engage with it. Right. And so. Or, or they thought they were engaging with it but missing the point. Right. And so I don’t know where that goes.

Except, like, in some ways, I’m not surprised that that’s the current moment we’re in. I think the danger is if those individuals then block off our avenues to pursue three, I’m okay with them being consumed with one. I think it’s great. There are a lot of underserved kids there that need better education. And I think if they use that as a justification to block off three, through policy change, through blocking entrepreneurship, through blocking families making these choices, that would be deeply concerning.

Diane Tavenner: So glad we’re having this conversation. There’s two places where I have fear about that and.

Michael Horn: Well, you’ve lived it.

Diane Tavenner: I did, yes. Continue to, it’s my life. And there’s two places that I just want to raise here. And at the risk of how, you know, these are sort of controversial and they’re very nuanced. I often am misunderstood, so I don’t talk about them out loud very often.

Michael Horn: But thanks for doing it here.

Diane Tavenner: Here we go. So the first is the big assessment and accountability system. And you know that my belief is that that structure, which is well intended and people are deeply passionate and invested in making sure that we have real data and know what’s going on. I just spent time with a parent advocate who’s like, those tests are the only receipts we have of what’s happening with our kids. Right.

Michael Horn: There’s a great article recently around how people are just shocked because the tests have gone away and they’ve been relying on grades, which are even more worthless measures. Yeah.

Diane Tavenner: Right. And so there's a lot of energy going to, how do we bring those back? How do we reestablish them? And my belief, and my lived experience, and most people who believe in them don't like hearing this, is that the existence of that accountability structure, I truly believe, deeply dampened innovation and the move toward what now would be Model 3. And I'm super disinterested in hearing about waivers and all these things. No, it really has an impact.

Michael Horn: Let’s get into how, because I’ve moved toward you a lot on this one. But in one standpoint, it’s like, well, it’s just focused on outcomes, frees up the inputs. You get there however you want. Like, how does it actually restrict the innovation? And is that a. And why is that a bad thing?

Diane Tavenner: Yeah, I think that it’s. Well, let me share a quote that I hear very often.

Michael Horn: OK.

Diane Tavenner: Which is, look, I’m not opposed to measuring different things but we don’t have those measurements yet. And so until we do, give me reading and math. And you know, I’m going to judge schools on reading and math, basically, which is effectively what we test in this country. And first of all, I think the problem is we actually do have those other assessments and they are crowded out. They aren’t accepted as, you know, mainstream, valid, reliable. No one is moving towards adopting them because it’s all about reading and math. And so I think it is really, you know, you measure what you value, you value what you measure. And there isn’t.

The system is not saying, no, it's completely unacceptable that we're literally measuring our entire system on these two. Important, yeah, very important, please do not misinterpret me. People always accuse you of, you don't want kids to read.

Michael Horn: Well, by the way. But I'm curious what you think of this. This is a classic case where I think defining the age span is important, because I am strongly in favor of not losing the measures to families. Note how I said it, by the way, but measures to families on, can your kid learn how to read, get those skills through, hopefully, third grade. But I'm actually willing to live with some variance in the age.

Michael Horn: All the reading tests after that are really knowledge tests.

Diane Tavenner: Correct.

Michael Horn: And so I would be much more comfortable, frankly, with every school picking like the. Or student, hey, you just did a deep dive on X. Go show your competency in X. I think that’d be a much more interesting. It’d be super jagged, students showing all sorts of deep dives on a variety of things and so forth. I think that’d be way more interesting. Math, I think, is a little different.

Diane Tavenner: Yes.

Michael Horn: And I don’t know where it stops. Probably around algebra, but. Yeah.

Diane Tavenner: Well, you just said a key point that really bothers me the most, which is the accountability and testing framework that we've had in this country is not about informing parents. And it's not actionable data. It's not timely data. It's not what we would call honest, actionable, timely feedback.

Michael Horn: No. And in fact, it’s negative reinforcement cycles.

Diane Tavenner: Exactly. And so let's just take reading as an example. The oldest assessment technology is a running record. I mean, schools could literally choose to assess every single kid that way and put resources towards that. It might not even be that many more minutes than they already spend on state tests.

Michael Horn: By the way, AI can really do that now.

Diane Tavenner: Well, and I'm not even getting into…

Michael Horn: What technology can do.

Diane Tavenner: So why, why these old assessments. Right. And so anyway, I’m deeply concerned that there’s so much good intent there and so much potential.

Michael Horn: But you’re arguing that it’s crowding out a ton of these other measures that either are there or could be developed more robustly.

Diane Tavenner: Right. And in the same way that I can be sort of dismissive of efforts around Model one, I think a lot of folks focused on today and now in kids in school are very hand wavy and very dismissive of the impact this has on the potential for innovation. So I’m, you know,

Michael Horn: Super interesting. Yeah. Okay.

Diane Tavenner: The second one is

Michael Horn: You're taking a breath, you're giving me a look, for those that can't see. We're not on video this time.

Diane Tavenner: No, we’re not.

Michael Horn: Yeah, go ahead. Where are you going?

Diane Tavenner: Special education.

Michael Horn: Oh, okay.

Diane Tavenner: And I want to say up front, my belief is, are we, by the.

Michael Horn: Are we at the 50th anniversary of special ed, of IDEA, at the federal level?

Diane Tavenner: We might be.

Michael Horn: I think we are, yeah.

Reimagining Education for Every Child

Diane Tavenner: Okay. Yeah. The intention is right. So many amazing people working on behalf of kids here, and most people who've spent so much time in schools like I have, with families, know it's a system that is about compliance more than it is about children. I don't believe it gets young people what they need. And I think that has a really challenging impact on our ability to educate all of our children. And this is, in my view, one of the biggest promises of a post-industrial model: that truly every child gets a personalized education.

Michael Horn: Because everyone's now getting an ILP, which is a good thing. Exactly right.

Diane Tavenner: Exactly, exactly. And my worry is that in both the assessment case and special education, that new models, model threes, will be judged and held accountable to the current accountability systems and the law, which completely compromises their ability to design completely new and better approaches.

Michael Horn: Yeah. And my colleague, or I guess former colleague at the Christensen Institute, Tom Arnett, has written a lot about this one, about how when you apply the standards that were for the old system to the new, you hamstring and often stunt it completely. I think that's very fair. My pushback historically has been, yeah, but the existing system is all input driven and then it has outcomes layered over. If we strip out the inputs, which, by the way, people are trying to put back on for the attempts at Model 3 right now as well. Right. Like accreditation, really.

Michael Horn: I think you’re pointing out even though these output measures, I don’t even think they’re outcome measures, but output measures have been layered on, I do see where they could pull model three back in some unfortunate ways for design. And I think those are to me, that’s where the fears are really. It’s. It’s less the effort question in dollars and more the are we hamstringing it to actually just look like the existing thing we already have in slightly modified?

Diane Tavenner: Right. I’ve certainly learned from you the most, you know, how disruption happens is that people take it outside of the existing system. They have different expectations. You know, they look at it fundamentally differently. And so maybe this is the importance of ESAs. And I mean, as a person deeply invested in public schools in America, I would be very sad if we’re going to push all the innovation out into the private sector because we can’t welcome it into the public sector.

Michael Horn: Yeah.

Diane Tavenner: And maybe that’s what we’re gonna see.

Michael Horn: Yeah. I’ve always felt like the public officials ought to be responsible not for the institutions, but for the constituents. Right. And so the models may change. And by the way, look, in Florida, you have districts now launching their own microschools and creating certain services a la carte. And like, like they’re spinning off autonomously. Let’s see where it goes.

Michael Horn: Right. I mean, I don’t think we know the final thing yet. And the conversation I was having with one of my students yesterday as well was, you know, no one’s cracked yet, I think, in these. So they’re not really model three attempts because they’re not AI native. But let’s just call like this sort of emerging ecosystem. We haven’t seen a lot of high school models.

Diane Tavenner: Nope.

Michael Horn: And I think part of it is because disruption starts as primitive, able to solve simple problems, not the most complex. Identity formation becomes much more important in high school. Right. And all these rituals that we may roll our eyes at around Friday Night Lights or prom or whatever else, they’re part of this identity formation and asking who am I in relation to others? And these small, you know, I think, you know, Tyler Thigpen, Forest School, Acton Academy, he’s done a good job of creating rituals, but most high school attempts have not yet built that. And so I kind of wonder, is the upmarket, if you will, solving for all of those things with very different traditions that don’t look like Friday Night Lights, but are actually more meaningful for the current time around identity formation?

Diane Tavenner: Totally. Well, and now you’re getting at the heart of what I’m trying to contribute to with Futre, which is how do we support some of that positive identity formation and search for who I am and the life I want to lead, both in the digital world and then connect that to real world experience.

Michael Horn: Well, I think it’s interesting though, that your market is the traditional industrial Model one, largely. And so I’m, I mean, I’m curious how you think about that.

Diane Tavenner: I’m living in a bipolar world. Yeah,

Michael Horn: Yeah, yeah. Okay, okay, okay, okay. Well, I. You've built it with a modular interface, as I understand it. Right. So it can exist in both, I think, is part of your answer. And I imagine you'd say a native Model 3 would actually answer a lot of the future questions as part of the design of the model itself.

Building Towards Model 3 Framework

Diane Tavenner: I think so. And I do think, you know, yes, I hope that what we’re building can live in both worlds and is one of, you know, the early ideas or components of what a Model 3 will look like. And I certainly will be engaging with folks on pushing that area, so hopefully we’ll talk more about that. I think where this is all leading for me is the next part of our season. So we’re gonna talk to a bunch of different people and I’m gonna be really. I’m gonna be in the back of my mind thinking, all right, well, where do you sit in this imperfect framework, this developing frame? But, but sort of, where is your effort sitting in that? Are you literally a whole school model? Are you an element to a model? Are you, you know, an AI enabled tool? Are you really trying to push the boundaries of designing for Model 3? Are you an interesting model two? And what do those look like? So.

Michael Horn: Yeah, well, and that’ll be interesting because I think as I look at the guests ahead, we have a lot of folks in Model 1 who are working with that system. And I’ve been wondering, given the hypothesis that we have fleshed out over the last couple of seasons of AI, like how that fits with the things that we’re interested in. And this is good. I think we’ve given a good framework on the importance, frankly, of all three of those elements and the work that they need to be doing and the dangers of crossing over perhaps, assumptions from the worlds across the different models.

Diane Tavenner: Perhaps. Awesome.

Michael Horn: This got interesting. A little spicy.

Diane Tavenner: A little bit spicy. Well, super useful for me and helpful for me to think about things. Any last things on your mind?

Michael Horn: I have one last thing. Hopefully we won’t get cut out of the studio, which is, I thought a lot about what is the world into which people are going and how does that map back to what is still core and what is not core and so forth. And I just want to float an idea by you and have you attack it.

Diane Tavenner: Great.

Michael Horn: The reflection I've had is we know there's a considerable amount of cognitive science that suggests we learn best through story, through narrative arc, and we don't actually deliver most learning or offer learning opportunities that way. And so I guess I've been wondering as we think through, you know, we had the back and forth of, do they need to memorize state capitals? And we both said, probably not. But I do think they should know that there's a thing as a state capital. And so my thought about it is almost like Montessori has the, I'm gonna mess it up, the Great Lessons or something like that. Right. And it's a narrative arc. But I can almost imagine narrative interactive arcs where you're like sort of, okay, how did the country's governance evolve over time? And these thin layers that would build a lot of common reservoir of knowledge. And I think I'm largely talking K-5, maybe K-8, that that could be a big part.

And like in, in the various disciplines, if you will. Right. Civics, a variety of deep dives in history, et cetera, et cetera, science. I think it should be active. I think it should be multimodal. It’s not clear to me. It’s the teacher delivering the story.

Diane Tavenner: Say what you mean by multimodal, because a lot of people are using that term and I don’t think many people know what it means.

Michael Horn: Yeah, yeah. So I guess I see it as being like, you can imagine some of these lessons being video based through an AI. You can imagine an auditory sound. Right. You can imagine interactive where you're actually answering questions both verbally and written as you're working through something. You can imagine, like the state capital one, so you have a lesson around how state capitals evolved in state government.

Diane Tavenner: I mean, it could be VR, like literally immersive.

Michael Horn: Right, Exactly. And then you could almost imagine then like you pop out and like, my kids still draw maps. I actually think that’s really valuable. But I don’t think that they then have to drill memorizing every feature, but they don’t know what question to ask Gemini or ChatGPT without like sort of that thin knowledge base. Right. And that’s sort of where I’m wondering if you’re. We evolved to something like that that recognizes the importance of some knowledge.

Diane Tavenner: Yes.

Michael Horn: We could have mastery assessments where we thought it was really important.

Diane Tavenner: Yes.

Michael Horn: We don’t have to have it for everything, frankly, it’s just exposure is probably good enough, especially if it’s interactive. I don’t know. What do you think of that idea? What are the flaws? And sorry. And then creating the space then for like, hey, you’re interested in this? Okay, here’s your project. Go deep, right? Like, and that’s where the deep explorations of learning how to learn and developing the skills would really be.

Diane Tavenner: This feels very fun to me to think about this. And these are the types of thoughts I’m constantly playing with and that I think should influence the design of Model 3. I love that you brought up this idea of memorizing the 50 state capitals because I think maybe we are misunderstood when we both say we. We don’t necessarily think kids should memorize the 50 capitals. That’s not because we don’t love America, believe in America, think that they shouldn’t. I think what we’re both more interested in is literally having them have like a deep story about each of the capitals and really internalizing. I mean, I will tell you, we get to travel a lot. Do you, do you like how I frame that? We get to travel a lot.

And when I travel, I love this country so much. It’s so fascinating. There’s so much.

Michael Horn: It’s so much fun to dive in, right? And take the, like you’re in, you’re in wherever and you go to the Alamo or whatever it is. And like, it’s so much fun.

Deep Learning Over Memorization

Diane Tavenner: It’s so curiosity driven. And so what if young kids didn’t memorize 50 capitals? But what if they went deep on a couple of them, like in a story based way, in an immersive way, and they got the idea of state capitals and what they mean and the importance. They got very cool stories about, you know, a few of them at that age. And then they got a lifetime of like, oh, I could, there’s so many more I can learn. And there’s so many interesting stories about them. And they’re not just a name on the page and, you know, on a flat map, but they’re real places that have real significance and they’re different from each other and because they have such access to knowledge now, if they really need to go look it up, they can go look it up..

Michael Horn: They can do the deep dive. Right? And I think the knowledge conversation, I’m a big believer in the importance of a fundamental knowledge base and the depth at which those occur. I think we don’t have a nuanced conversation around.

Diane Tavenner: Right. And I also am okay with it, I’m gonna call it the Swiss cheese of knowledge.

Michael Horn: Yeah, so am I.

Diane Tavenner: That you don’t have. Every fourth grader in America does not need to know the same facts.

Michael Horn: Yeah.

Diane Tavenner: It’s okay if we learn them at different points and different times and that there’s, you know, sort of regional differences around that. I’m much more committed to everyone having a common set of really important skills, at least at a baseline level. And then ideally spike lots of people spiking in the different skills in different places because we need all those.

Michael Horn: But when you say the skills, you’re thinking that it’s been developed through them working in different domains and areas repeatedly in deep dives. Right. And so

Diane Tavenner: Because you need content to practice skills.

Michael Horn: Exactly right. And you create that integration. I think a lot of times in school it goes the other way where like, oh, we learn how to think critically about what.

Diane Tavenner: Exactly.

Michael Horn: And so again, these crosswalks extremes, I think are right. Yeah. Anyway,

Diane Tavenner: Yeah. And so, you know, and this is why we both like a project based environment because it’s the integration of the two and there’s such power in what AI can do now where you can really do personalized learning on, in the content to bring to those, you know, engaging, collaborative, communal type, project based experiences. So I mean, I love what you’re saying in the direction you’re going. It’s very nuanced as you know, it’s.

Michael Horn: We should have some more fun later on and. But I just wanted to float the general idea because I had this moment in our conversation with Alex where I was like, at what level are we thinking about difference and what does stay the same? And I think part of my reflection has been there’s actually a fair amount that stays the same, but how we’ve done it probably changes pretty radically.

Diane Tavenner: Indeed.

We've been recording pretty frequently, and I know we're both feeling a little stretched on thinking about new books and things we're reading. We've maybe exhausted our list, so I thought maybe we'd take a break from that list just for today. Thank you. And replace it with, this will make this episode a little less evergreen, but for those who are listening, we're actually recording this right before the week of Thanksgiving, and I thought I would end with some gratitude.

Michael Horn: Oh, I like it.

Diane Tavenner: So one of the fun moments of yesterday’s engagement with your class and then the office hours afterwards was there for so many young, amazing people who so many of their questions were very personal yesterday about, you know, how to be a mom and lead and how mentorship and all of. And, you know, my relationship with my husband over the years. And I’m so. I’m appreciative that they were thinking about that. And one of the things that came up was just our friendship. And I think you know this. But I am so grateful for our friendship, and it is truly one of, for me, the big, you know, if there are any highlights coming out of COVID the fact that we decided to do this, it gives us time together. It’s just so much fun, and I’m so grateful.

Michael Horn: You know, I’m a crier, so I’m trying not to right now. Thank you. I feel the same way. And it’s one of those things where I feel like, how lucky am I that we get to have this conversation? Even though I moved away from the Bay Area over a decade ago, which is wild, 12 years, but. Yeah. And I think it’s. So when this comes out, it’ll be after the new year, I think, and so forth. But I always tell my students, because, as you saw, like, 55 or so percent are not from the U.S.

I say take the time because how cool is it to have a day when you get to say thanks? So thank you as well. Yeah. And thank you all for joining us through the sentimental moment. But also on Class Disrupted. And just keep your questions and curiosity coming. We suspect there’ll be things you disagree with that we said here, and we can’t wait to learn from you. So thank you, as always, and we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

Google DeepMind's Learnings in Developing an AI Tutor /article/google-deepminds-learnings-in-developing-an-ai-tutor/ Fri, 16 Jan 2026 13:30:00 +0000 /?post_type=article&p=1027151 Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this episode of Class Disrupted, Irina Jurenka, the research lead for AI in education at Google DeepMind, joins Michael Horn and Diane Tavenner to discuss the development and impact of AI tutors in learning. The conversation dives into how generative AI, specifically the Gemini model, is adapting to support pedagogical principles and foster more effective learning experiences. Jurenka shares insights from her team's foundational research, the evolution of AI models over the past three years, and the challenges of aligning AI tutoring with learning sciences. She reflects on how these innovations may shape the next generation, with hope for a thoughtful blending of technology with the irreplaceable role of human teachers.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, this is Diane, and you’re about to hear a conversation that Michael and I had with Irina Jurenka from Google DeepMind. She’s the AI research lead for education there, and I think you’re going to love this conversation. It was fascinating for us to talk with someone who is literally working on the large language models from the education perspective, and at Google, no less, one of the most ubiquitous ed tech products in the world at this point, and her perspective on where AI is going, where her work is going, how it’s going to be, how she imagines it’s going to transform schools or not transform schools, and what’s important. Turns out to be a really interesting dialogue. I think you’re going to love it.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good to see you.

Diane Tavenner: It’s good to see you, Michael. I’m really excited for the conversation we’re going to have today. I find that while almost everyone is talking about AI, almost no one seems to know what they’re actually talking about, especially in the circles that I think we sometimes run in. And so I’ve always found that technology is a bit of a black box to many educators, and I think AI is exacerbating that. But today we get to talk with someone who works on and in the black box, if you will, and understands its intersection with learning. She just understands that just about as well as anyone I know. And so bringing both of them together is Irina Jurenka, and she’s joining us on the show today. Welcome, Irina.

Irina Jurenka: Thank you.

Diane Tavenner: Irina is the research lead for AI in education at Google DeepMind, and we’ll unpack all that in a minute to help people understand what that means there. She’s exploring how generative AI can truly enhance teaching and learning. And it’s not just by providing answers, but also by helping people learn more effectively and equitably. She recently led a landmark study called Towards Responsible Development of Generative AI for Education, which looks at what it takes to design AI tutors that are actually good teachers. Before DeepMind, Irina earned her doctorate in computational neuroscience at Oxford, studying how the brain processes speech and learning. Her work beautifully bridges neuroscience, machine learning and education, all in the service of a simple but powerful goal, helping every learner reach their potential. We’re so excited to be in dialogue here with you, Irina, welcome.

Irina Jurenka: Thank you. I’m really excited to be here.

AI for Equitable Education

Diane Tavenner: I thought we would just start with some really basic things to help people understand what you do. And so let me start with asking, is it fair to say that you’re both a learning scientist and a technologist? Is that how you think of yourself and will you explain to us what a research engineer does or is and help us to understand sort of your team and what you do?

Irina Jurenka: Of course. So I actually don’t think of myself as a learning scientist. I would say maybe I’m a beginner learning scientist. I’m definitely just starting to learn about this field, but I’m very lucky to be working in a company where we do have learning scientists on the team, and we also work very closely with teachers. So we actually just hired a teacher on the team and there is another teacher who is consulting us, and we work closely with the academic field as well. So Kim Collinger and others are advising us, and we’re very privileged to be in a position to have such amazing advisors. My role in education is relatively recent. I only started this project around three years ago.

Diane Tavenner: You know, we hear the term research engineer and you’re a research lead. What does that mean? You know, I think a lot of us are accustomed to the terms, you know, software engineer, but in the age of AI now we hear this term research engineer. So I’m wondering if you can help us understand.

Irina Jurenka: Of course. So I work at Google DeepMind, right. And DeepMind has always been effectively an academic lab. So when I joined 10 years ago, it was a very small group, it was incredibly academic. So I joined as a research scientist and essentially my job was to do foundational AI research and publish papers. That’s kind of where we’re coming from. And now we are kind of much more integrated within Google, but we continue on the same mission. So what DeepMind brings to Google is this research expertise.

So on my team we have scientists and engineers, but really the line between them is blurred. And our job is to really think about the fundamental scientific problems around language models and, in our case, at the intersection with education, where we need to do this foundational scientific work to understand what the big problems are, how we find tractable solutions, and also work out the solutions to these kinds of big scientific problems.

Diane Tavenner: So I guess one question I know, Michael, that has been coming up for you and some of the conversations you’ve been having is do you engage with or interact with or directly influence the products at Google? So many of us in education are so familiar with so many Google products and what is the intersection of your work and for example, Google Classroom or many of the other products that we in the education field use?

Irina Jurenka: So I find it very exciting to be in a company as big as Google where there are so many amazing teams doing incredible work. We really focus on the research because that’s the value that, you know, my team can bring. We do work closely, obviously, with the products because they build on top of the foundational models, in our case, Gemini. We talk, we advise, we help explain kind of what Gemini is capable of, how to best elicit the kind of more pedagogical capabilities out of it. But of course the teams are amazing, and they mostly work on the products within their own teams.

Diane Tavenner: Got it. That makes sense. It’s been just about three years since most of us in the world were first really introduced to AI via the first release of, or not the first, the one that we’re familiar with of ChatGPT, and you shared with me in previous conversations that the arc of your team’s work over those three years has been really interesting. You just referenced the three years again. Tell us what you’ve been working on and what’s emerging from that.

Irina Jurenka: Yeah, we actually started the project about six months before the ChatGPT moment. So, yeah, things have definitely changed for us. Some things stayed the same. So from the beginning, we saw that the biggest impact we could make, or Gen AI, the newly emerging Gen AI we thought could make, was through AI tutoring. I can go into details about why we think it’s the most impactful thing, but maybe we can address that later. But when we started, language models were very different. Yeah, so I remember doing our first demo about six months after we started, and it was very hard to keep the AI on track.

Advancements in Guided Learning

Irina Jurenka: So we had to practice a lot, find the right kind of queries to ask. And even then, you know, we always had to be on the edge of our seats just to make sure that it didn’t go off the rails when doing a demo. And of course, now the AI is so much more powerful. You know, we have launched products like Guided Learning on the Gemini app, which millions of users are already engaging with, and it’s mostly staying on track. You know, we haven’t seen any major problems so far. The technology itself has just changed so much, and we kind of had to keep up with these things. So when we first started, a lot of our work was trying to deal with very rough language models and make them do something useful in learning. And you know how the stakes are so much higher in learning than in other use cases.

So we really had to think, how do we control this kind of unruly beast beneath us? And now, of course, you know, a lot of that work essentially had to be binned because it was no longer necessary. And we really concentrate on how we bring the layer of pedagogy and adherence to learning science principles into Gemini, to make sure that it really works towards increasing learning outcomes rather than negating them.

Michael Horn: Irina, I’d love to jump in there because, first, I think it’s fascinating that you guys do this much foundational research, because we always hear that that’s sort of the domain of the universities. But here’s DeepMind, and then Google after the acquisition, right? Investing in over a decade of foundational research. There’s nothing near term about that. And I’m interested in this work on the tutoring because the critics, I guess I would say, of the AI tutors take sort of one of two approaches. Either, oh, who’s actually going to use that? Why won’t they just default to Gemini straight on? Right. And two, even when they do use it, it’s maybe overly procedural, is the critique I hear a lot.

And so I’m sort of curious, what are you learning about the actual usage in the wild? What are the guardrails that you’ve thought are important? What’s been surprising against those critiques that are everywhere these days?

Irina Jurenka: Yes, thank you for asking this. We are definitely very aware of the negative perceptions and maybe negative use cases of chatbots in learning. I was just reading an article from the Guardian earlier today where they surveyed 8-to-18-year-olds in the UK, and what was interesting, I think just over 60% of respondents said that they perceived AI as being a negative addition to their learning journeys. And it’s for all the reasons that we’re already aware of. AI is just too keen to give away answers. It takes away the cognitive load, or rather, it takes away the productive struggle. It leads to kind of cognitive offloading of tasks.

We know that that’s not helpful for learning. So we kind of saw this trend when we started this project because I think at the end of the day, AI is optimized to be an assistant, right? So it’s successful when it takes away the burden from you as the user. And we know that in learning the opposite is true. You have to engage, you have to put in the struggle to actually see a difference to your learning outcomes. So what we realize is that if we don’t do the foundational work and the research to make sure that AI can deal with these two very different use cases, the kind of capabilities of AI in learning won’t just emerge from all the other work.

Michael Horn: It’s so interesting. So stay with it then, like as you’ve been putting it in this very specific use case right around the tutor. I am curious, like why did you choose tutoring given that it is so different from the other LLMs. Right. Sort of that assistant purpose. And how are you constraining it to make sure that it’s the most useful tutor. Right. That a student could have access to as opposed to maybe its natural instincts based on its foundation?

Irina Jurenka: So we chose the tutoring use case because, the way we call it, it’s kind of learning or education complete. What that means is that in order to tutor well, you kind of need to know all the different types of subtasks or capabilities that are important for education. So you need to be able to plan a lesson, you need to be able to ask good questions and provide good feedback and check the students’ work, among many other kinds of things about metacognition and active engagement. So if we really manage to figure out the tutoring use case, then the resulting underlying model, Gemini, can then be used for all these other tasks. It’s a great way to have one single goal to optimize for that can then result in broader benefits for learning.

Michael Horn: Super interesting. Let me ask this question then. How does it integrate with Google Classroom? Because you all have this incredible install base effectively right across schools. I think probably in terms of K-12 schools, you’ll correct me if I’m wrong, but I think the largest install base of sort of learning management system instances. So how does this tutor that you’ve built, how is it integrating with Google Classroom to actually directly serve students? And what are the guardrails you’re putting around that as well?

Safety and Educational Focus

Irina Jurenka: So in terms of guardrails, I just want to say that we really take this very seriously. There’s a wide range of safety and guardrails work happening across Google DeepMind and Google at all levels, from the model to the products. And Gemini in itself has a lot of safety and kind of trust and safety work going on there. What our team actually does is bring an educational and learning-specific angle into this Gemini model. As an example, when we tried to optimize the model for tutoring, we realized that a good tutor really engages the learner. They ask a lot of questions. So we brought this bias towards asking more questions to the model, but that resulted in an unintended consequence, in the sense that not only does the tutor want to ask questions, it also wants to encourage questions from the student. And then a student might ask a question that’s actually harmful. So a student could say something really toxic, ask a question about that, and what the tutor would do, before we did the work to mitigate it, was say, oh, that’s an amazing question. I’m really glad you’re thinking about this. And then it would kind of bring it back and say, actually, maybe there are other things to consider here. But that initial statement was just not helpful. So we then had to go in and bring in extra supervised data and take that unintended behavior out of the model to make sure that it’s actually safe. This additional layer of work is really important. And of course then there are the product layers and other ways to mitigate safety issues.

Diane Tavenner: That’s super interesting. I so appreciate the example. And I was gonna ask you about, you know, what is this? What are you seeing now, three years in? You know, you talked about at the beginning you had these sort of rough models and they would kind of, it sounded to me like get distracted and kind of, you know, go off. But, three years in, it sounds like you’re learning a lot of things and so you’re iterating. So what is it looking like now? And how are you feeling about the learning that’s happening when young people are engaging or people are engaging with the products now? And then maybe we can talk about where you think it’s going as well.

Gemini: Guided Learning Experience

Irina Jurenka: The work that we’ve done is on the Gemini model side. What we hope comes out of it is that Gemini is useful for learning products, both for Google but also for external parties who build ed tech, who build on top of Google. For the internal products, our team in particular really worked in collaboration with the Gemini app to bring the guided learning experience to users. We really wanted to bring an easy way for anyone out there to get kind of this more pedagogical behavior out of the models without having to engineer a very complex prompt. And so with guided learning, it’s really a one-click way to get the model to act more like a tutor, to guide you through the information rather than just give it to you as a wall of text. And we worked with learning science experts to make sure that this experience really adheres to the five learning science principles that we have identified as important. Again, our hope is that this actually helps students internalize the information much better. And we are working very closely to try to measure the efficacy of how well that’s actually coming together.

But what I want to give you is a personal anecdote on how I ended up trying guided learning. I was actually at Stanford a few months ago and I saw this statue of the Burghers of Calais. I’d never heard that story before, so I was curious to learn more. So first I kind of just pulled out my phone and used Gemini to ask about this historical event. And it just gave me kind of the standard answer, a longish response. Think of it as like a Wikipedia kind of type answer. So I read through it, it was interesting, but I realized I’m actually personally really bad with history, in fact. So I realized that that information went kind of into my head and immediately left, and I didn’t remember anything. About 10 minutes later, I was trying to tell the story to somebody else, and I realized I didn’t remember anything.

So I again pulled out Gemini, but this time I switched on guided learning to see how different the experience would be. The difference is, like, guided learning doesn’t just give you the answer. It kind of engages you in a dialogue. It brings you in over maybe five to ten turns of conversation. It kind of walked me through the same information, but this time I realized that I actually remembered it like a week later. I could still remember the facts. I remember the interesting things it brought in. It kind of brought in the connection to the War of the Roses, which the first article didn’t bring in, just because of how I selected the options of where my curiosity led me. To me, it was very visceral, how I tried to learn the same thing from Gemini, the vanilla experience versus guided learning, and one of them actually made me remember better without me even trying.

Diane Tavenner: Interesting.

Michael Horn: That’s really cool. One really quick question on that. Like, what are the five learning science principles that you guys have prioritized to create that sort of experience? Just so we can enumerate it?

Irina Jurenka: So the five learning science principles that we’ve identified are to inspire active learning, to manage cognitive load, to adapt to the learner, to stimulate curiosity and to deepen metacognition. And we realize that this is not a comprehensive list. There are other important areas of learning science that we are considering bringing to the forefront of what we’re optimizing for. But these are the first and the most important five learning science principles that we have been working towards so far.
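[Editor’s note: Taken together, the five principles read like a behavioral specification for a tutoring mode. As a purely hypothetical sketch of how such a spec might be handed to a chat model, the principles could be encoded as a system prompt. The function name, prompt wording, and structure below are illustrative assumptions, not Google’s actual implementation.]

```python
# Hypothetical sketch only: encodes the five learning science principles
# named in the conversation as a tutor-style system prompt. All names and
# wording here are illustrative, not DeepMind's implementation.

PRINCIPLES = [
    "inspire active learning: have the learner attempt each step before revealing it",
    "manage cognitive load: introduce one new idea at a time, in short turns",
    "adapt to the learner: adjust difficulty based on the learner's responses",
    "stimulate curiosity: follow the questions and connections the learner raises",
    "deepen metacognition: ask the learner to reflect on how they reached an answer",
]

def build_tutor_prompt(topic: str) -> str:
    """Assemble a guided-learning-style system prompt for a given topic."""
    rules = "\n".join(f"- {p}" for p in PRINCIPLES)
    return (
        f"You are a tutor helping a learner understand: {topic}.\n"
        "Do not answer with a single wall of text. Instead:\n"
        f"{rules}\n"
        "Guide the learner over roughly five to ten short conversational turns."
    )

prompt = build_tutor_prompt("the Burghers of Calais")
print(prompt)
```

[The point of the sketch is simply that the "one click" Irina describes can be modeled as swapping in a system prompt like this, rather than asking every user to engineer it by hand.]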

Diane Tavenner: Irina, one of the things that I like about talking to you is that you talk about pedagogy, and you said up front, you know, you had this hypothesis about tutors being sort of the way to go. I’m curious about that because we’ve also talked about how there are other kinds of ways to learn. And so I’m curious if you guys are exploring other ways and how you think about that and why tutors and, yeah, anything you can share around that?

Irina Jurenka: Yes. I’m actually curious to hear from you, given your experience, what you think would be exciting other ways of learning for us to consider. The reason why we started with AI tutoring is because, I guess, this is where the strength of current GenAI models lies. It’s kind of a text, chat-based interface that we’re all familiar with. So we thought, okay, how can we leverage what’s already mature to make a difference in education? But we also realized that new capabilities in AI are emerging and maturing. For example, we have these demos of a live experience where it’s kind of video and audio, and you essentially can just talk to AI in the same way as you would talk to your human teacher. We are also thinking about how to bring that to users in an interesting learning experience.

But yeah, I would be very curious to hear from you what you think would be a good thing.

Learning: Content vs Skill Development

Diane Tavenner: Well, I mean, I think when I think about it, and it’s hard to really parse how different this might be from a tutor, but I think about this type of learning more in terms of the factual content, the vocabulary, what I would call the content knowledge you need for learning. And then I often, you know, kind of crudely, separate that from skill development. So how do you actually communicate effectively or write effectively or analyze problems? And, you know, I historically have taken a project-based or problem-based approach to that. So you start with kind of a big problem that you want to solve or a big question that you have, and then you engage in a project that gets you an outcome or a product. And so that was pretty long-winded. But maybe the most immediate would just be, and maybe a tutor can do this, really helping to teach someone to write effectively or communicate effectively. I think right now at least I, and I think other people, are using it to just take what I’ve written and write it better. But I’m not sure that it’s really teaching me yet, giving me that guided practice and that feedback and whatnot.

So that might be the more near term version that I’m thinking about.

Irina Jurenka: Yeah. So first, for the skills acquisition, we really hope that a guided learning type experience could actually help with that. Take your example of helping you rewrite a piece of text: with guided learning, it won’t just rewrite it for you, it will guide you towards how to rewrite it so that you do it. It will ask you to think about certain things, ask you certain questions. So hopefully a student can learn from just that experience. Another thing I mentioned earlier is that metacognitive abilities are important to us, to make sure that the tutor optimizes those things as well. That’s kind of another layer where hopefully a student will be able to take a step back and understand, OK, how did I get to this? How did I rewrite this? What was important? How did I think about it? So that next time they can actually, almost like, guide themselves and won’t need the tutor anymore.

Diane Tavenner: That’s so interesting. Last season, Michael and I interviewed the woman who leads the Harvard Writing Center, and what you just described was her concern about what was not happening and what would be missed. And so it’s interesting, the evolution, I think. I don’t know, Michael, if you’re...

Michael Horn: If you’re tracking the same thing. Yeah, I mean, I think it’s interesting, right? And it’s all a question of, and this may be where you want to go, Diane, like, how do we put this in the hands of teachers and students, right, in productive ways so that they’re not just jumping to the shortcut, but actually engaging in the difficult learning that you all are creating these experiences for?

Diane Tavenner: I think that might be a good place for us to go and sort of, you know, bring this conversation, at least for now, to a conclusion. So, Irina, you are a new mother. And I know that when I became a mom, it changed how I viewed my work as an educator in ways that I couldn’t even have imagined. And so, you know, I’m curious what, if anything, has changed for you in that regard. But even more so, what do you imagine your child’s education will be like? You know, when you think about the next 5, 10, 15, 20 years, will it look the same? Will it look different? What do you want for it? What do you hope for it? You know, how do you think about that?

AI, Change, and Human Connections

Irina Jurenka: That is a great question. And I have this at the top of my mind. I think we are in a very unique situation, and we’re living through a very interesting period of time where the pace of change is so fast. I think even for us working in this industry, it’s kind of head-spinning, and it’s even hard for us to catch up with all the progress. It’s very hard to predict where AI will be in five to 10 years, what the role of education will be. We are actively thinking about this. I think what’s becoming clear is the importance of human connections and of making sure that our next generation grows up as complete humans, so that they’re not just automatons who, you know, provide prompts to AI and just live in this AI-driven world, but a world where AI really is still a tool that helps human flourishing and helps improve and increase human connections. So I think for my child, I would want him to still go to school and to still have experiences learning how to communicate with his peers, how to talk to his teachers and be inspired by his teachers.

I hope that AI can be something that helps him maybe learn faster and learn more and kind of really personalize his learning so that when he’s really passionate about something, he can go off and go deeper with AI and maybe be able to do these projects that are not supported at school, but he can do at home with his peers. And AI can serve as kind of this facilitator and help them again achieve more interesting outcomes with their projects. But at the end of the day, I think I want him to have the breadth of experiences and knowledge and just learn how to be a good human.

Diane Tavenner: That’s a beautiful place, I think, to wrap. 

Michael Horn: This season of Class Disrupted is sponsored by LearnerStudio, a nonprofit motivated by one question: What will young people need to be inspired and prepared to flourish in the age of AI, as individuals, in careers, and for civic thriving? LearnerStudio is sponsoring this season on AI in education because in this critical moment, we need more than just hype. We need authentic conversations asking the right questions from a place of real curiosity and learning. You can learn more about LearnerStudio’s mission and the innovators who inspire them at www.learnerstudio.org.

Michael Horn: I was going to say we have this section, Irina, where we wrap up, where we share something that we’ve been reading, book-wise, or watching on TV or in movies, or podcasts, whatever. And so because we didn’t prep you beforehand, we’ll let Diane go first with hers, and then we’d love to hear what’s on your bedside table or in your ear or something like that, if you wouldn’t mind sharing.

Diane Tavenner: So, this is kind of a funny one. I’m listening to/reading a book called The Five Types of Wealth: A Transformative Guide to Design Your Dream Life by Sahil Bloom. And if you’re wondering about the types of wealth, according to Sahil, they are time, social, mental, physical and financial. And I’m actually reading this with a group of other people who are sort of in our, depending on who you talk to, last half or last quarter of life. And we’re exploring this question this year of how do I do my, quote, best work in these chapters? So this is one of many things that we’re using as a prompt to sort of create a rubric for ourselves, if you will, and self-evaluate.

And I’m reading it now in prep for our next get-together, so, fascinating. I’m not sure I’d give it a wholehearted recommendation, but, you know, it includes a lot of ideas that I think exist in a lot of other places, and it’s a good reminder.

Michael Horn: Fair enough, fair enough. Well, it’ll get marked down either way and we’ll track it. Irina, what about you?

Irina Jurenka: So I will be honest, I am struggling to find time to read given that I have a one-year-old, but I actually did manage to get through a book recently, and it was Neal Stephenson’s novel The Diamond Age. I’ve been recommended it many, many times given the work I’m doing. So I finally managed to read it. And just if you haven’t heard about it, it’s about this world of the future where somebody designs essentially an AI tutor. So it’s kind of this book that is given to a young girl, and the book essentially teaches her everything throughout her life. And I think what’s interesting, my takeaway from this, was that there were three kinds of maybe original versions of the book that were given to three girls. And then they made, I guess, a copy that was given to everyone, which wasn’t as good. And the difference was that in the three original versions there was a human who was essentially voicing out the text to the girls.

And in the other versions it was like 100% AI. And what was interesting is that the human behind the book, even though they were just voicing the text that the AI was producing, made a difference, according to the book. Those three girls, especially the main character, who had this consistent one person who was guiding her throughout her whole life, actually built a connection with that person and grew up to be a much more successful, kind of, much better individual than anyone else. And it’s this importance of still having a human in the loop.

Michael Horn: Very cool. I love it. I’ve heard a ton about that book, so I think I need to add it to my list now. I’ll just say I’m going to shock Diane here, because we always make fun of me for not being current on stuff, but not only did I watch seasons one and two of The Diplomat over the summer; season three came out and I’m already done, so I’m ahead. And so I am going to stay in my Netflix binging, I guess, at the moment, but I’m feeling rather impressed with myself, and that I got my Google sweatshirt from back when I lived in Silicon Valley on for this recording.

So with that, Irina, I think Diane and I could both talk to you all day and just like learn from this. So really appreciate you joining and scratching the surface with us of all the things going on at DeepMind. And for all of you tuning in, we’ll see you next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

What Does AI Readiness Mean for Schools? /article/what-does-ai-readiness-mean-for-schools/ Thu, 11 Dec 2025 20:36:22 +0000 /?post_type=article&p=1025561 Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

Michael and Diane sit down with Alex Kotran, founder and CEO of the AI Education Project (AIEDU), to dive into what true “AI readiness” means for today’s students, educators and schools. They explore the difference between basic AI literacy and the broader, more dynamic goal of preparing young people to thrive in a world fundamentally changed by technology. The conversation ranges from the challenges schools face in adapting assessments and teaching practices for the age of AI to the uncertainties surrounding the future of work. The episode asks key questions about the role of education, the need for adaptable skills, and how we can collectively steer the education system toward a future where all students can benefit from the rise of AI.

Listen to the episode below. A full transcript follows.

*Correction: At 17:40, Michael attributes an idea to Andy Rotherham. The idea should have been attributed to Andy Smarick.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you as always. Looking forward to this conversation today.

AI Education and Literacy Insights

Diane Tavenner: Me, too. You know what I’m noticing, first of all, I’m loving that we’re doing a whole season on AI because I felt like the short one was really crowded. And now we get to be very expansive in our exploration, which is fun. And that means we’ve opened ourselves up. And so there’s so much going on behind the scenes of us constantly pinging each other and reading things and sending things and trying to make sense of all the noise. And just this morning, you opened it up super big. And so it works out perfectly with our guest today. So I’m very excited to be here.

Michael Horn: No, I think that’s right. And we’re having similar feelings as we go through the series. And I’m really excited for today’s guest, because I think, you know, there are a lot of headlines right now around executive actions with regards to AI or, you know, different countries making quote, unquote, bold moves, whether it’s South Korea or Singapore or China, and how much they’re using AI in education or not. We’re going to learn a lot more today, I suspect, from our guest, and he’s going to help put it all in context, hopefully, because we’ve got Alex Kotran, excuse me, joining us. He’s the founder and CEO of the AI Education Project, or AIEDU. And AIEDU is a nonprofit that is designed to make sure that every single student, not just a select few, understands and can benefit from the rise of artificial intelligence. Alex is working to build a national movement to bring AI literacy and readiness into K-12 classrooms and help educators and students explore what AI means for their lives, their work, and their futures.

And so with all that, I’m really excited because, as I said, I think he’s going to shed a little bit of light on these topics for us today. I’m sure we’re only going to get to scratch the surface with him because he knows so much, but he’s really got his finger on the pulse of the currents at play with AI and education, and perhaps he can help us separate some of the hype from reality, or at least identify the very real questions that we ought to be asking. So, Alex, with all that said, no pressure, but welcome. We’re excited to have you.

Alex Kotran: I’ll do my best.

Michael Horn: Sounds good. Well, let’s start maybe just your personal story right into this work and what motivates you around this topic in particular, to spend your time on it.

Alex Kotran: I’ve been in the AI space for about 10 years. But you know, besides being sort of proximate to all these conversations about AI, you know, I don’t have a background in software, computer science. I don’t think I have ever written a line of code. I mean, my dad was a software engineer. He teaches CS now. No background in technology or CS, no background in education. And so I actually had funders ask me this when I first launched AIEDU: like, well, why are you here? What’s your role in all of this? You know, my background is really in political organizing. I started my career working on a presidential campaign, went and worked for the White House for the Obama administration, doing outreach for the Affordable Care Act and other stuff like Ebola and Medicare, and then found myself in D.C.

And after I just kind of got burned out on politics, for reasons people probably don’t need to hear and can completely understand. And so it wasn’t that I was so smart, like, oh, I knew AI was the next thing. I just was like, I really want to move to San Francisco. I had visited the city, like, twice and just fell in love, and sort of fell into tech and an AI company that was working in cleantech. So I was doing AI work before it was really cool, back in 2015, 2016. And then I ended up getting what at the time was kind of a really random job. I had a lot of mentors who were like, I don’t know, Alex, AI is just a fringe, you know, emerging technology, kind of like 3D printing and VR and XR and the metaverse. Is that really what you should do? And I was just like, nah, I just want to learn.

It seemed really interesting. And that’s why I joined this AI company, essentially working for the family office of the CEO. It was sort of a hybrid family office, corporate job, doing CSR, corporate social responsibility, in the legal sector. This was the first company to build AI tools for use in the law. And so I was charged with, how do we advance the governance of AI and sort of the safe and ethical use of AI and the rule of law? I basically had a blank canvas and ended up building the world’s first AI literacy program for judges. I worked with the National Judicial College, Stanford, and NYU Law, and trained thousands of judges around the world, in partnership, by the way, with nonprofits like The Future Society and organizations like UNESCO. And my parents are educators, and they’re immigrants as well.

And so they always ask me about my job, and were really trying to convince me to go back, to go to law school or get a PhD or something. And I was like, well, no, you know, I don’t need to go to law school. I’m actually training judges. They’re coming to learn from me about this thing called AI. And my mom was like, oh, that sounds so interesting. You know, you should come to my school and teach my kids about AI. She teaches high school math in Akron, Ohio. And I was just like, surely your kids are learning about AI.

That was, you know, my assumption: that we’re, at a minimum, talking to the future workers about the future of work. I just assumed that, you know, judges, who tend to be older, kind of need to be caught up. And after I started looking around to see if there was curriculum that I could share with my mom’s school, I found that there really wasn’t anything. And that was back in 2018/2019, so way before ChatGPT. And thus AIEDU was born, when I realized, OK, this doesn’t exist. This actually seems like a really big problem, because even as early as 2018, frankly, as early as 2013, people in the know, technologists, people in Silicon Valley, labor economists, were sounding the alarms that AI, you know, automation, is going to replace tens of millions of jobs.

This is going to be one of the huge disruptors. You had the World Economic Forum talking about the Fourth Industrial Revolution. Really, this wasn’t much of a secret. It was just, you know, esoteric, in the realm of certain nerdy, wonky circles. There just wasn’t a bridge between the people who were meeting at the AI conferences and the people in education. And I would really say our work now is still anchored in this question of, how do you make sure that there is a bridge between the cutting edge of technology and the leadership and decision makers who are trying to chart a course, not over the next two years, which is sort of how a lot of Silicon Valley is thinking, in that very immediate reward system where they’re just looking at the next fundraise. In education, you’re thinking about the next 10 years. These are huge tanker ships that we’re trying to navigate now, and we’re entering.

I think this is such a trope, but we are really entering uncharted waters. And so steering that supertanker is hard. And, I suppose, to really belabor the metaphor, maybe AIEDU is sort of the nimble tugboat, you know, that’s trying to nudge everybody along and guide folks into the future. And that demands answering this core question of the future of work, which hopefully we’ll get some more time to talk about.

Michael Horn: Yeah, I want to move there in a moment, but first, I don’t know that all of our audience will be caught up with, you know, this macro environment, where we sit right now in terms of national policy and executive actions as they pertain to AI and education. They’ve probably heard about it, but don’t know what it actually means, if anything. So maybe set the scene around where we are today nationally on these actions. Is any of it actually meaningful or impactful, or is it maybe more lip service around the necessity of having the conversation rather than moving the ball? Just sort of set the stage for us, where we are right now.

Alex Kotran: It’s really hard to say. I mean, there’s been a lot of action at the federal level and at state levels, and schools have implemented AI strategies. The education space is inundated with discussion and initiatives and working groups and bills and, you know, pushes for AI in education. I think the challenge now is, we really haven’t agreed on: to what end? Are we talking about using AI to advance education as a tool? Like, can AI allow us to personalize learning and address learning gaps and help teachers save time? Or are we talking about the future of work and how we make sure kids are ready to thrive? And there are some who say, well, we just need to get them really good at using tools. Which is a conversation I literally had earlier today, where there was a college-to-career nonprofit and they were like, well, we’re trying to figure out what tools help kids learn, because we want them to be able to get jobs.

I think, like, AIEDU, our work is actually, we don’t build tools. We don’t even have a software engineer on our team, which we’re trying to fix; if there’s a funder out there who would like to help fund an engineer, we’d love to have one. But our work is really systems change. Because if you zoom out, and this is, I think, where I do have a skill set, and again, it’s a bit niche.

The education system is not one thing. It’s sort of like an organism, the same way that redwood trees are organisms. They’re kind of all connected at the root structure, but you’re actually looking at a forest that looks very different, you know, that’s not centralized. Every state kind of has its own strategy, and frankly, every district. In many cases, you’re talking about government-scale procurement, discussion, bureaucracy involved.

Advancing AI Readiness in Education

Alex Kotran: So if you’re trying to do systems change, this is really a project of, how do you move a really heterogeneous group of humans, different audiences and stakeholders with different motivations and different priorities? And so our work is all about, OK, setting a North Star for everybody, which is defining where we’re actually trying to go. And we use the word AI readiness, not AI literacy, because what we care about is kind of irrespective of whether kids are really good at using AI: are they thriving in the world? And then, how do you get there? Most of our budget goes to delivering that work, you know, doing actual services, where we’re basically building the human capital and the content. So: training teachers, building curriculum, adapting existing curriculum, more so than building new curriculum, but integrating learning experiences into core subjects that build the skills that students are going to need. And those skills, by the way, are not just AI literacy, but durable skills like problem solving and communication, and core content knowledge. Frankly, being able to read and write and do math, we think, is actually still really important, if not more important. And then sort of the third pillar to our work is really catalyzing the ecosystem.

Because the only way to do this is by building a movement, right? Sure, there’s an opportunity for someone to build a successful nonprofit that’s delivering services today. But if you actually want to change the world and really solve this problem on the timescale required, you have to somehow rally the entire ecosystem. There are, like, a million K12 nonprofits. We need all of them. This is an all-hands-on-deck moment. And so our organization is really obsessed with, how do we stay small and almost operate as the Intel Inside, to empower the existing nonprofits so that they don’t have to all pivot and become AI organizations? Because there just aren’t enough AI experts to go around. If every school and every nonprofit wanted to hire an AI transformation officer.

Like, there just wouldn’t be enough people for them to hire.

Diane Tavenner: Yeah, they’re still trying to hire a good tech lead in schools. We’re definitely not getting an AI expert in every school soon. So you’re speaking my language, you know, sort of change management, vision, Leadership 101, etc. This is not necessarily the place we were thinking we’d go in this conversation, but I think it’d be fun to go really deep for a moment on something that I think is related to your North Star comment. What does school look like in the age of AI, when kids are flourishing, when young people are flourishing and successfully launching? I think that’s what the North Star has to describe.

And you just started naming a whole bunch of things that are still important in school, which feel very familiar to me. They’re all parts of the schools that I’ve built and designed and whatnot. And so maybe we’ll then build back up to policy and whatnot. But what does it look like if we succeed? If there is this national movement and we’re successful, and we have schools, or whatever they are, that are enabling young people to flourish, what do you think that looks like?

Alex Kotran: Yeah, this is the question of our day. Right. Just to go back to the state of play: it’s very clear that we are in the age of AI, right? This is no longer some future state. And frankly, ignore all the talk about AI bubbles, because it kind of doesn’t matter. There’s always a bubble. There was a bubble when we had railroads.

There was a bubble in the oil boom. There was a bubble with the Internet. There probably will be some kind of a bubble with AI, but that’s kind of part and parcel with transformational technologies. Nobody who’s really spent time digging into these technologies believes that AI is not going to be totally proliferated throughout our work and society in, like, 10 years, which, again, is the timeframe that we’re thinking about. The key question, though, is: what does it mean to thrive? There’s more to it than just getting a job, but I think most people would admit that having a job is really important. So maybe we start there, and we can also talk about the social-emotional components, being resilient to some of the onslaught of synthetic media and AI companions and other stuff. One of, if not the most important thing is, how do you get a job and be able to support yourself? And that question is really unanswered right now.

Uncertainty in AI and Future Jobs

Alex Kotran: And so everybody in the education system is trying to figure out, well, what is our strategy? But we don’t know where we’re going. We really do not know what the jobs of the future are. You hear platitudes like, well, it’s not that AI is going to take your job, it’s that somebody using AI is going to take your job. Which is kind of a dumb thing to say, because it’s correct. Basically, either AI is going to do all the jobs, which may actually happen, some people say sooner than later, but I just assume it’s going to be a long, long time, if we ever get there. And until we get there, that means there are humans doing jobs and AI and technology doing other aspects of work. So what are the humans doing is really the important question. Not just, are they using AI, but how are they using AI? How aren’t they using AI? Until we get more fidelity about what the future of work looks like, what are the skills you should be teaching? Because, you know, I think a lot about cell phones.

Go back to 2005 and you can imagine a conversation, and all of this is completely true, right? In 2005, it would be correct to say that you will not be able to get a job if you don’t know how to use a cell phone. You will be using a cell phone every single day, whether you’re a plumber or a mathematician or an engineer or an astrophysicist. And yet I think most of us would agree that we shouldn’t have totally pivoted education to focus on cell phone literacy, because nobody’s going to hire you because you know how to use a phone. And AI is probably going to get there to some degree. I mean, it’s already sort of there, right? Sure, there are people who will charge you money to teach you prompt engineering, but you could also just open up Gemini and say, help me write a prompt, here’s what I want to do, and it will basically tell you how to do it.

Diane Tavenner: You might not be old enough to remember this, but I was a teacher when everyone thought it was a really good idea to teach keyboarding in school, like a class. What we discovered is that if you just have people using technology, they learn how to use the keyboard. Right? It happens in the natural course of things, and you don’t need a class for it. So what I hear you saying is that your approach is not about some finite set of information or skills that we’re going to teach kids. It’s about what it looks like to have them ready for the world that, honestly, is here today, and then keeps evolving and changing over the next 10 years. And so, where do we even go with that, Michael?

Michael Horn: I mean, part of me wonders, Alex, rather than naming the things that remain relevant, maybe the conversation to have is: what’s less relevant, in your view, based on what the world of work and society is going to look like?

What’s the stuff that we do today that, you know, will feel quaint, right? That we should be pruning?

Diane Tavenner: Yeah, cursive handwriting. That is still hotly debated, by the way.

Alex Kotran: Although, you know, then you get Deerfield Prep and they’re going back to pen and paper.

Michael Horn: Right. So that’s kind of where I’m curious. What practices would you lean into? What would you pull away from? Because that’s part of the debate as well. Our friend Andy Rotherham, I believe at the time we’re recording this, just had a post around how it’s time for a pause on AI in all schools. Not sure that’s possible, for a variety of reasons. But what would you pull back on? What would you lean into? What would you stop doing that’s in schools today, as you think about that readiness for the world that will be here in, we’re all guessing, but 10 years from now?

Alex Kotran: What to pull back on? I mean, look, take-home essays are dead. Don’t assign take-home essays; the detectors are imperfect. And as a teacher, do you really want to be, you know, a cyber forensics specialist? That’s not the right use of your time. And also, you’re using AI, so there’s a weird dissonance of, like, empowering teachers with AI, but then needing to prevent kids from using it. But that’s the low-hanging fruit: OK, don’t assign take-home essays.

The way to abstract that is: students are taking, you can call it cheating, let’s just call it shortcuts. What we do need to do is figure out, OK, how is AI being used as a shortcut? Because whether or not you ban it in schools, kids are going to use it out of school. And so teachers need to figure out how to create assessments and homework and projects designed such that you can’t just use AI as a shortcut. And this is a whole separate conversation, but just to give one example: having students demonstrate learning by coming into the class and presenting, and, importantly, having to answer questions in real time about a topic. You can use all the AI you want, but if you’re going to be on the spot and you don’t understand whatever the thing is you’re presenting about, and you’re being asked questions, that’s the kind of thing where, sure, use all the AI if it’s helpful.

But ultimately, you just need to learn the thing. The more important question is, I don’t know if school changes as much as people might think. I think it does change. I think there’s a lot that we know needs to change that is kind of irrespective of AI. We need learning to be more engaging. We need more project-based learning. We need to shift away from pure content-knowledge memorization. But that’s not necessarily new or novel because of AI.

I think it is more urgent than ever before.

Michael Horn: I’m curious, because I do think this is also hotly debated, right? In terms of the role of knowledge in being able to develop skills and things of that nature. So I’m just sort of curious: what’s the thin layer of knowledge you think we need to have? Or, to use Steven Pinker’s phrase, common knowledge, right?

And what’s the stuff we don’t have to know? Like, we don’t have to memorize state capitals, right? Maybe.

Diane Tavenner: No. Yeah, I don’t think we need to memorize the state capital, because, yeah, but keep going.

Michael Horn: Yeah, I’m curious, right, as we think about it, because we do have this powerful assistant serving us now, and we think about what that means for work. I guess I’m just curious what that really means in terms of that balance. Like, is all knowledge learned through the project? Is it a lot of just-in-time learning, perhaps, which is more motivating? I’m curious how you think about that.

Alex Kotran: I think this needs to be backed by research, right? Sure, it probably is right that you don’t need to memorize all the state capitals. But then you start to get to a place where it’s like, OK, well, do you even need to learn math? Because AI is really good at math. And I think math is actually a good analog, because I don’t really use math very much, or I use relatively simplistic math day to day. But I think it was really valuable for me to have spent the time building computational thinking skills and logic. And also, math was really hard for me, and it was challenging, and the process of learning a new, abstract, hard thing, I do use that skill. Even some of the rote memorization stuff. You know, my brother went to med school, and they spent a lot of time just memorizing, like, every tiny aspect of the human body.

They have to learn it. I actually think doctors are a great way to double-click on this, because doctors go through all of that and understand the body, go through all of the rote process of literally taking thousand-question tests where they have to know random things about blood vessels. And even if they’re never going to deal with that specific aspect of the human body, doctors kind of build this generalized set of knowledge, and then they also spend all this time interacting with real-world cases, and you start to build instincts based on that. And you talk to hospitals about, oh, what about AI to help with diagnosis? And one of the things I hear a lot is, well, we’re worried about doctors losing the capacity to be a check on the AI. Because ultimately, we hear a lot about the human in the loop, and the human in the loop is only relevant if they understand the thing that they’re looped into. So, yeah, I don’t know. Maybe we.

Diane Tavenner: Yeah, you’re onto something. You’re spurring something for me that I actually think is the new thing to do, and that we haven’t been doing and aren’t talking about. Let me see if I can describe it as I’m understanding it unfold in the way you’re talking about it. So I had a reaction to the idea of memorizing the state capitals, because memorizing them is pretty old school, right? It calls back to a time where you weren’t always going to be able to go get your encyclopedia off the shelf and look up the capitals. You had to have that working knowledge in your mind, if you will, to have any sense of geography and, you know, whatever you might be doing. And it was pretty binary.

It really wasn’t easy to access knowledge like that, so you really did have to memorize these things. Math multiplication tables get cited often for fluency in thinking and whatnot. So I don’t think that goes away. But it’s different, because we have such easy access to AI, and so there isn’t this dependency on you being the only source of that knowledge, where otherwise you’re not going to be able to go get it. But that doesn’t take away the need to have that working understanding of the world, and of so many things, in order to do the heavier-lifting thinking that we’re talking about and the big skills. And I don’t think there’s a lot of research on that in-between piece: how do you teach for that level of knowledge acquisition and internalization, and how do you then have a more seamless integration with the use of that knowledge in the age of AI, when it’s so easily accessible? That feels like a really interesting frontier to me. It doesn’t look exactly the same as what we’ve been doing, but it isn’t totally in a different world either.

It is responsive to and reflective of the technology we have and how it will get used now.

Rethinking Assessments and Learning Strategies

Alex Kotran: Yeah, it’s a helpful push, because what I’m not saying is that everything in school is fine. I don’t think I’ve ever talked to a superintendent who would say, oh, I’m feeling good about our assessment strategy. We’ve known that. Because really, what you’re describing is assessments: what are we assessing in terms of knowledge? That becomes the driver and incentive structure for teachers. Because, to your point, are you spending five weeks just memorizing capitals? Or are you spending two weeks, and then saying, OK, now that you’ve learned that, I want you to actually apply that knowledge and come up with a political campaign for governor of, you know, a state that you learned about. Tell us about your campaign platform, and how it’s connected to what you learned about the geography of that state. So it’s adapting, integrating project-based learning and more engaging and relevant learning experiences. And then the mix and the balance of what’s happening in the classroom is the challenging thing, because the assessments will inform that, but the assessments are also downstream: it’s not just about getting the assessments right, it’s why are we assessing these things? And so you very quickly get to, well, what is the future of work? Because, yeah, you probably don’t need to learn the Dewey Decimal System anymore.

Even though being able to navigate knowledge is maybe one of the most important things, and certainly something I use every day.

Diane Tavenner: One of the things we tend to do in US education, Alex, is be so US-centric that we forget other people on the planet might be grappling with some of these things. I know you track a lot of what happens around the globe. What can we look at as models or interesting experiments or explorations? Everything from big system-change work, and I know we have different systems across the world, so that’s different, it’s not groundswell, it’s top-down, but anything from policy and big systems all the way down to who might be doing interesting things in the classroom. Where are you looking for inspiration or models across the globe?

Alex Kotran: I mean, South Korea is a really interesting case study. You mentioned South Korea at the beginning, during the intro. They were just in headlines because they had done this big push; they were going to roll out personalized learning nationwide. And then they announced that they were rolling back, or sort of slowing down or pausing, the strategy. I forget if it was a rollback or a pause, but they were basically like, wait, this isn’t working. And what they found is that they hadn’t made the requisite investment in teacher capacity. And that was clear.

And so part of the reason I’m tracking that is because I don’t know that there’s very much for us to learn from what any school is doing right now, beyond, there’s a lot for us to learn in the sense of, how do we empower teachers to run with this stuff? Because they are doing that. I think there’s a lot to learn from a mechanical standpoint of implementation strategies. But I don’t know that anybody has figured this out, because nobody can yet describe what the future of work looks like. And I know this because the AI companies can’t even describe what the future of work looks like. You had Dario Amodei at Anthropic seven months ago saying that in six months, 90% of code is going to be written by AI. Which is not the case. Not even close.

Diane Tavenner: And Amazon’s going to lay off 30,000 white-collar workers this week.

Alex Kotran: Which they did, yes. But is that really because of AI, or is that because of overhiring from low interest rates? So until we answer this question of what the future of work is, and really, to put it in educational terms, the way to ask what is the future of work is: how are you going to add value to the labor market? David Autor has this example which I think is really important: the crosswalk coordinator versus the air traffic controller. We pay the air traffic controller four times as much, because any one of us could go be a crosswalk coordinator, like, today; just give us a vest and a stop sign. I assume you’re not moonlighting as an air traffic controller. I’m certainly not.

It would take us, I don’t know what the process is, but I think years to acquire the expertise. So there is this barrier of expertise to do certain things. And what AI will do is lower the barriers to entry for certain types of expertise, things like writing, things like math. And in those environments where AI is increasingly going to be automating certain types of expertise, then for people to still get wages that are good, or to be employed, they have to be adding something additional. And so the question of what the humans are adding, again, we get to stuff like durable skills, we get to stuff like the human in the loop. But I think it’s much more nuanced than that. And the reason I know that is because there’s the MIT study.

I think it was a survey, but let’s call it a study; I think they called it a study. So there’s a study from MIT that found that 95% of business AI implementations have not been successful. So really, what we’re seeing is, yes, AI is blowing up, but for the most part, most organizations have not actually cracked the code on how to unlock productivity. And so I think there’s actually quite a lot of business change management and organizational change that’s coming. Trying to hone in on what that looks like is maybe the key, because that will take 10 years. If you look at computers: computers could have revolutionized businesses long before they ended up getting adopted. It took decades, actually, for spreadsheets and things like that to become ubiquitous.

And Excel is a great example. I was just talking to this expert from the mobile industry who was saying that the interesting thing about spreadsheets was that they didn’t just automate, because there were people who would literally hand-write ledgers before Excel, and obviously that work got automated. The other thing that spreadsheets did was create a new category of work, the business analyst. Because before spreadsheets, really the only way to get that information was to call somebody and compile it manually. Now you had a new way to look at information, which unlocked a new function that didn’t exist before. And that meant businesses now have teams of people doing layers of analysis that they didn’t realize they could do before. And so

Diane Tavenner: What you’re saying is sparking two things for me. And again, we could talk probably all day, but we don’t have all day, so sadly, I think this might be bringing us to a close here for the moment. But I’m curious what both of you think on this, because you brought up air traffic controllers. In my new life and work, I’m very obsessed with careers and how people get into them, and I’ve done deep dives on air traffic controllers. And my macro point here is going to be this.

I do wonder if this moment of AI is also just exposing existing challenges and problems and bringing them to the forefront. Because let me be clear: training air traffic controllers in the US was a massive problem before AI came around, before any of this happened. It’s a really messed-up system. It is so constrained. It’s not set up for success. It’s such a disaster and a mess, and it’s such a critical role that we have. And it’s probably going to change with AI. So you’ve just got all these things going on.

And I’m wondering, Michael, from your perspective, is that what happens in these moments of disruption? Is that all predictable, and how do we get out of it? And then, Alex, you were talking about, I was having a conversation this morning about this idea that all these companies are no longer hiring those entry-level analysts, or they’re hiring far fewer of them. And my wondering, which no one can seem to answer yet, is: great, where’s your manager coming from? Because if you don’t employ any people at that level, and they haven’t learned the business and learned things, what, do you think they’re just sitting on the sidelines for seven, eight years and then ready to slide into, you know, the roles that you are keeping? So are these just problems that already existed and are now being exposed? What’s going on? What do you all think?

Job Market Trends and AI

Alex Kotran: So, first of all, we really don’t know. I’m not convinced that the reason there’s high unemployment among college grads is because of AI. I think there was overhiring because of low interest rates. I think that companies are trying to free up cash flow to pay for the inference costs of these tools. And I think in general there are going to be boom-and-bust cycles in hiring, and we’ve been in a really good period of high employment for a long time. What is clear is that you hear something different if you talk to earlier-stage companies. I was talking to a friend of mine at Cursor, which is one of the big vibe coding companies, blowing up, worth lots and lots of money. And I asked him about it, because I keep hearing that companies aren’t hiring entry-level engineers anymore, since you’re better off having someone with experience.

And he’s like, all of our engineers are in their early 20s. Huh, OK, that’s interesting. Well, yeah, because actually it’s a lot faster and easier to train somebody who’s an AI native who learned software engineering while vibe coding. But he’s like, we’re a small organization that’s basically building out our structure as we go, so we don’t have to operate within the old confines. I think incumbent organizations are going to face this, because they have the existing hierarchy. Ultimately you’re looking for people who are really fast learners, who can learn new technology, who are adaptable, and who are good at doing hard stuff. If you’re a small organization, you’re probably better off just hiring young people who have those instincts.

If you’re a large organization, what you might do is lay off some of the really slow movers and then retain and promote the people who are already in place and have those characteristics. And then to your point about training the next generation: law firms are thinking about this a lot, because maybe you could automate all the entry-level associates, but you do need a pipeline. But then you get to: do you need middle managers? If the business models are less hierarchical, because you just don’t need all those layers, then maybe you don’t worry so much about middle management. I think what companies are going to realize is they actually need more systems thinkers and technology-native employees who are integrated into the other verticals of knowledge work outside of tech. So if you think about marketing, business, customer success, nonprofit fundraising, policy analysts, all of these teams that generally have people from the humanities, I think companies are going to say, OK, how do we actually get people who can do some vibe coding and have a little bit of CS chops to build out much more efficient and productive ways for these teams to operate? But nobody knows. Nobody knows.

I don’t know. Michael?

Michael Horn: I love this point, Alex, where you’re ending, and frankly I like the humility in a lot of the guests we’ve had around this, the honesty that we’re all guessing a little bit at this future and looking at different signals as we do. My quick take off this, my version of it, I guess: you mentioned David Autor at MIT earlier, Alex. Part of his contention is that AI actually levels expertise between jobs that we’ve paid a lot for and jobs that we haven’t. So as opposed to a technology that increases inequality, this may be a technology that actually decreases inequality. And I guess it goes to my second thing, Diane, around the question you asked, and air traffic control training is a great example.

But fundamentally, the organizations and processes we have in place have a very scarcity mindset. I suspect they’re going to fight change, and we’re going to need new disruptive organizations, similar to what Alex was just saying, that look very different, to come in. And it gets at what everyone says about technology: the short-term predictions are huge and tend to disappoint, while the long-term change is bigger than we can imagine. And I kind of wonder what that long-term change is. Alex, earlier this season we had Reed Hastings, and he has a very abundant-society mindset where the robots plus AI plus probably quantum computing are doing a lot of the things. Or is it, frankly, sort of what you or I think Paul LeBlanc would argue, which is that a lot of these things require trust and we want people? Yes, you can build an AI that does fundraising for you, but do I really trust both sides of that equation? I’d rather interact with someone.

Right. There’s a lot of social capital that greases these wheels in society, ultimately. And I guess that’s a bit of the question. And Diane, part of me thinks of Carlota Perez, who’s written about technology revolutions. She says that there will be some very uncomfortable parts of this, a bit of upheaval. Part of me keeps wondering: if we can grease the wheels for new orgs to come in organically, can we avoid some of that upheaval, because they’ll more naturally move to paying people for these jobs?

And right now, I’m not sure we have that mindset in place. That’s a bit of my question.

Diane Tavenner: More questions than answers. More questions than answers. Really. This has been, wow, really provocative.

Michael Horn: Yeah. We could go on for a while, but let’s leave the conversation here for the moment. Alex, a segment we always have on the show as we wrap up is things we’re reading, watching, or listening to, either inside work or, as we try, outside of work. Podcasts, TV shows, movies, books, whatever it might be. What’s on your night table or in your ear or in front of your eyes right now that you might share with us?

Alex Kotran: I’m reading a book about salt. It’s called Salt.

Michael Horn: This came out a few years ago. Yeah. Yeah. My wife read it.

Alex Kotran: Yeah, I’m actually reading it for the second time. But it’s interesting, because salt is something that now you take for granted. But there was a time when wars were fought over it, and it spurred entire new sorts of technologies, new mining techniques among them. Salt was a big component of why we even built the Erie Canal; it’s actually nicknamed the ditch that salt built.

Technology’s Interconnected Conversation

Alex Kotran: And I just find it fascinating that technology is so interconnected. Not to bring it back to work, I know this is supposed to be outside, but I only read nonfiction, so it’s going to be connected in some way. I’m just fascinated that there are these layers behind the scenes that we sometimes take for granted that can actually be quietly monumental. I think what’s cool about this moment with technology is that everybody’s a part of this conversation. Before, it was much more cloistered. And I think that’s just good. Yes, there’s a lot of noise and hype and snake oil and all that stuff, but in general we are better off having folks like you asking people questions, driving conversation about this, and not just leaving it to a small group of experts to dictate.

Diane Tavenner: So I think this is cheating, because I’ve done this one before, but I’m gonna cheat anyway. As you know, Michael, because you hear me talk about it a lot, the one news source I religiously read is called Tangle News. It’s a newsletter and now a podcast. It’s grown like crazy since I first started listening. I love it. It’s like a startup.

When I started reading, it was under 50,000 subscribers or something; now it’s up to half a million. The executive editor is Isaac Saul, and, I’m going to say this about a news person, I trust him, which I think is just a miracle. And I’m bringing it up this week because he wrote a piece last Friday that, honestly, I had to break up over a couple of days because it was really brutal to read. It’s just a very honest accounting of where we are in this moment, the best piece I’ve read or heard about it. And then on Monday, he did another piece in their usual format: what’s the left saying, what’s the right saying, what’s his take, what are the dissenting opinions? I just love the format. I love what they’re doing.

I was getting ready to write them a thank you note slash love letter, which I do periodically. And I thought I’d just say it on here.

Michael Horn: I was gonna say now you can just excerpt this and send them a video clip.

Diane Tavenner: So I hope, I hope people will check it out. I love, love, love the work they’re doing, and I think you will too.

Michael Horn: I’m gonna go historical fiction. Diane, I’m like, surprising you multiple weeks in a row here, I think. Right? Yeah. Because, Alex, I’m like you. I’m normally just nonfiction all the time, but I don’t know. Tracy said you have to read this book, Brother’s Keeper by Julie Lee.

It’s historical fiction about a family’s migration from North Korea to South Korea during the Korean War. It is a tearjerker. I was crying, literally sobbing, as I was reading last night. And Tracy was like, you OK? And I was like, I think I won’t get through the book. But I did, and it’s fantastic.

So we’ll leave it there. But, Alex, huge thanks. You spurred a great conversation. Looking forward to picking up a bunch of these strands as we continue. And for all you listening again, keep the comments, questions coming. It’s spurring us to think through different aspects of this and invite other guests who have good answers or at least the right questions and signals we ought to be paying attention to. So we’ll see you next time on Class Disrupted.

Netflix’s Reed Hastings on the Impact of AI on Schools /article/netflixs-reed-hastings-on-the-impact-of-ai-on-schools/ Thu, 20 Nov 2025 17:30:00 +0000 /?post_type=article&p=1023701

Diane Tavenner and Michael Horn dive into the impact of AI on schools with guest Reed Hastings, founder of Netflix and dedicated education advocate. The conversation explores Hastings’ pragmatic optimism about AI’s potential to individualize learning, reshape the roles of teachers, and revolutionize assessment practices. Hastings shares his belief that while AI will transform many aspects of education, it’s crucial for schools to nurture citizenship, social-emotional skills and a foundation of knowledge independent of technology. The episode also touches on future models for schools, equity in an AI-driven future, and practical examples of how AI is currently enhancing reading and math instruction.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, this is Diane, and I appreciate you joining Michael and I for season seven of Class Disrupted, as we’re doing a deep dive into education in the age of AI. I think you might really enjoy this episode where we got a chance to talk with Reed Hastings, founder and longtime CEO of Netflix and a longtime education supporter. We talked to him about what he’s seeing, hearing, and thinking about AI and education. And I think coming out of the conversation, I would call Reed a pragmatic AI optimist. I think that’s the best description I have for him. So much of our conversation is about what is possible today, as well as where we’re heading in the future and what that might mean for the work we’re all doing and how we’re trying to think about it. I find Reed’s thinking to be both clarifying and provocative at the same time, which makes for a really fun conversation. I really hope you will enjoy it.

Diane Tavenner: And please keep sending questions, thoughts, inspirations for where we go as this season unfolds. Thanks so much. 

Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you as always, I’m looking forward to today’s conversation in particular.

Diane Tavenner: I think this one’s going to be really fun. As I was prepping for this, I asked Chat to do a little bit of an intro for Reed Hastings. And, I know, that’s what I said to myself. I said, you know, I can definitely write this, but it’s out of habit now.

And nonetheless, there were some interesting things in there. So, for those of you who don’t know, our guest today is Reed Hastings, and according to Chat, you are a tech entrepreneur, investor, and educator with a long track record of innovation and social impact. That all tracked for me. It went on to basically describe two parallel careers, which also tracks for me: on the business side, most notably co-founder, longtime CEO, and most recently, executive chair at Netflix; and on the other side, where I have spent my time with you, deeply involved in education, ranging from chairing the California State Board of Ed to a number of education nonprofits, supporting tons of educational entrepreneurs and charter schools and educational efforts. Here was the fun part for me. I think a lot of people don’t know that you started out in the Peace Corps and that you actually taught for two years in Africa. And the pieces that I forgot were that you taught math, that your undergrad degree is in math, and that you have a master’s in computer science.

And I forgot that because I know you’re technical, but that’s not where we spend our time, we mostly spend our time on leadership and people development and growth. And so that was kind of a fun reminder. All of that to say as we start this season, really exploring the intersection of AI and education, we knew we had to talk to you, so welcome Reed. We’re grateful to have you here.

Reed Hastings: How fun. Michael and Diane. Well, Chat got it mostly right, but actually I taught maths, so.

Michael Horn: That’s right, because in Africa they would say maths.

Reed Hastings: Yeah, teaching O level. It’s pronounced in the plural. I never did figure out why. And then relevant to this conversation, in the mid-80s, I got a master’s degree from Stanford in AI. And now the AI of the time turned out to not work at all. So it’s a mixed utility, but it certainly has been a long term passion.

Michael Horn: Well, it’s part of that arc, I guess, right, of AI dating back decades and decades, or at least the thinking and study and hypothesizing about it. And now the moment’s really arriving. And so I’d actually love to start there with you, with a really big, wide-angle view on this, because you know a lot, you do a lot, you talk to a lot of people. What excites you at the moment about this new age of AI in education?

AI’s Impact: Exciting and Uncertain

Reed Hastings: Well, there’s a lot, I think, both to be excited and concerned about. To the degree that the bulldozer mechanized labor and digging, we’re seeing the potential of AI doing that for thinking. It doesn’t do it for feeling, for a lot of what we think of as core humanity, our feelings. But in terms of thinking, it’s on track to be better than us at basically all aspects of thinking. And then there’s a debate about whether that’s two years or 20 years, but that’s sort of in the noise compared to it happening. So we’re living in pretty dramatic times, which is exciting. But it’s not clear that it will turn out for the benefit of humans. We’ll see.

Michael Horn: Yeah, and maybe that was sort of the second thing on my mind, the worries piece of it. So maybe just give us the quick narrative in your head: what are the signposts you’d be looking for to tell whether this is going in the positive direction or the negative direction? What are the things you’re paying attention to that help us understand where this is headed?

Reed Hastings: Well, if you look at industrialization, you know, which brought, you know, mass products and mass wealth. There were a lot of ups and downs. So I’m pretty sure you’re not going to be able to judge it in the near term as to like, which vector did it take? It’ll sort of take both in some senses. There’ll be parts of it that are really good and parts of it that are worrisome. But how it plays out over 100 years is extremely hard to tell. And I don’t think there’s much, you know, because of the global competition. It’s not like slowing it down is really a possibility. So instead we have to lean in and then try to channel it to make it as positive as we can.

Diane Tavenner: Along those lines, we are most concerned with and care about education. And so, you know, you’ve spent decades working on education from multiple angles. I thought it would be helpful because I think this time calls all of us to rethink. What do we think the purpose of schools is or what are they here to do? And has it or will your vision of that change with AI?

Education’s Dual Mission and Technology

Reed Hastings: Well, big picture, the vision of schools is to create both great citizens for the society and then give the individual great opportunity. So it’s sort of always had this dual mission. I don’t think that that changes. I think probably what changes is more and more software based teaching that’s individualized and infinitely patient and gracious. And 15 years ago I bought Dreambox Learning and invested a lot of money into helping Dreambox grow. And the CEO, Jessie Woolley-Wilson gets most of the credit. But I bring it up just because it’s evidence of sort of being interested in how software could improve learning for a long time. And that’s now one product.

But mostly students are going to ChatGPT instead of specialty applications. And so whether that’s Khan Academy, of which I’m a board member, or others, people are learning that the AI chat is a very broad and useful tutor. If you need some help in physics, that’s the first place you go. If you need to plan travel, if you want to ask a boy out, it’s wide-ranging counseling. It’s already there for younger people, and they’re using it in huge numbers.

Diane Tavenner: Yeah, it’s interesting, and we’ll get more into some specifics there. But how do you think about it with your other hats on, in terms of where our K-12 graduates are going, the world they’re entering, from a workforce and career opportunity perspective? I’m not sure anyone actually knows what’s going to happen, but there are certain things happening now. What do you see coming on that front? Does that shift or help us rethink anything we might be doing at the high school level, the college level?

Reed Hastings: Well, let’s see. AI has two broad effects. One I’ll call “why bother”: why are we training kids in biology and in writing an essay? It’s sort of like slide rule skills or square root skills. So that’s one theory. And then the other is “how exciting”: we’re going to be able to use AI so kids learn twice as much, so by age 16 they’re at a college level, and you can be incredibly ambitious about what it’s possible to learn. So those are the two forces. And you can think about chess a little bit.

So computers have been better than all humans at chess for 20 years now. And so you would have thought, well, what’s the point of playing? You can never be better than the computer. And yet the number of chess players, and most significantly the quality of, say, the average 10-year-old, 15-year-old, or 20-year-old playing chess, is much higher. And that’s because AI has been training them. Now, as a 14-year-old, you can play against these great AI chess tutors, so you get much more scale to the teaching of chess and much more practice. And yet we’re still excited to see two humans compete. It’s like robotics is exciting.

But, you know, I think we’re going to watch humans play basketball, not robots, partially because we are human. So think of it this way: the good scenario is AI produces such a bounty that our societies become very wealthy, people work less, but they actually learn more, which is sort of the chess path. They know an incredible amount about a wide range of things humans do, and they’re learning for the pleasure of it as opposed to the economics. There’s no real economics in chess, and yet people are playing more and playing better. So maybe that’s our future for biology and history and all kinds of other things.

Diane Tavenner: Well, I was thinking, at least in my experience over the last decades, you’ve generally focused on educational policy and governance, new school models, and leadership, maybe with Dreambox as a notable exception, and less on pedagogy and the practice in schools. But recently I’ve experienced you moving more in that direction. First of all, am I getting that right? And second, what’s driving that? What do you think is most promising? Because how we actually do school seems to me to be kind of sparking your interest.

Rethinking Schools and Teachers’ Roles

Reed Hastings: Yeah, I would say it’s accurate to say I’ve mostly been a governance person: how do we create organizations where teachers can thrive and build the public schools of their dreams? A little more entrepreneurial approach, with choice and markets essentially being the fundamental driver of enabling innovation. I’m still a big believer in all of that. But I think there’s an opportunity for some schools to rethink the schooling model. For economic reasons, we’ve had 300 years of 20 to 50 kids in a classroom and a teacher, the sage on a stage, who gets up and imparts wisdom. I think it’s going to be better to have individualized software, but it’s almost always going to be in a school setting. Parents are working, so homeschooling will be 3 to 10%; the vast majority of people want the custodial function of schools as well. But in schools, I think the teacher’s role is going to move more toward a social worker focusing on social emotional learning and discussion, and the mere imparting of facts, what was the history of the Roman Empire or how to do fractions, those kinds of things will be software.

And so for teachers it’s a huge change in their self-image, which has always included social emotional learning and discussion, but was still based upon sage on a stage. And that’s a pretty deeply embedded paradigm. So we’re trying to figure out what the schools of the future look like, where most of the fact-base building is software. And of course the software advantage is that it’s one-to-one, so it’s really focused at the level the kid is at, and from that you get much more learning and engagement. But then, recognizing that parents and kids want more out of school than learning the facts, there’s this incredible role to focus on around social emotional development, how to work with other human beings, that I think teachers will be able to take on. I would say that’s going to be a multi-decade change as the software gets better. So it’s trying to see what some schools can do to really pioneer that, while at the same time the software has to continue to get better.

Michael Horn: Sorry to cut in there, but it’s interesting, because implicit in that are some pretty big structural changes to schooling operations, teacher identity and roles, processes, things of that nature. You’ve sort of been a student, if you will, of how schools do and don’t work, and of the systems themselves. I’m curious how you see that change management playing out over the next two, five, ten years. We know schools are often very good at blocking change at the classroom door, if you will. So I’m curious: do you think this is an entrepreneurial pathway? Do you see change in the schools? Is this a both-and? How does this come about, in your view?

Reed Hastings: Yeah, I mean, broadly, from a governance standpoint, we have a set of local monopolies, public schools, that provide services, and monopolies can do terrible things. And so to control them, we pass regulations. The only thing worse than the regulated monopoly that we have would be an unregulated monopoly. But as long as you’ve got that monopoly structure, you need a lot of regulations. They work in the short term to ensure that kids get opportunity, but they become very rigid. And so it’s extremely hard for the regulated monopoly public school system to adopt significant changes, and thus we see the stability of the model over 200 years.

So it’s going to be quite a change to get the regulated system to open up and allow a lot of change. Hopefully the unregulated, or less regulated, side, which is private schools and charter public schools, will have some running room to prove out how much better individualized instruction is for the student, and how the teacher’s role really becomes very exciting: talking through things in small groups and large groups, leading discussion, and then really getting to know the kids on the social emotional learning side and helping them work well with other human beings. So I think it’s a time of invention. And most charter schools (I’m on the board of KIPP and have been for 20 years) are like, let’s do the classic model better. Let’s work hard. Let’s have classes on Saturday. Let’s have a longer school day. OK? And probably Eva Moskowitz and Success Academy represent the pinnacle of that, which is an unbelievably excellent classic school model.

And I think we need to keep those going. And there’ll be another set of entrepreneurs that figure out how to do a school where the effective class size and the student-teacher ratio might be higher. You could basically do 40 to 1, but with a lot of software have the results of a one-to-one teacher model, and that’s what will make the economics of paying for the technology work. And even at 40 to 1, the teacher could spend an hour a week with each kid on social emotional learning. It’s probably not divided up exactly that way, but it could be: an hour per week of personal coaching, helping them grow as a human being. So again, there’s lots of opportunity. And then of course Alpha School is pioneering a variant of it where the payoff is two-hour schooling. Their fundamental insight is that school is a lot more than learning facts: we’ll spend two hours doing that, and then the kids get to do all the other school stuff the rest of the day.

In some places that’s mostly sports, and in some places it’s all kinds of other things, but I’ll call it enriching activity beyond the classic curriculum. So they’ve taken a very fresh and interesting tack, which is the shortest school day possible from the classic learning standpoint, and that’s the prize and incentive. They serve a high-end demographic, so it’s hard to say what the replication of that will be, but it’s very provocative and interesting as an example of a big-picture innovation: the two-hour classic curriculum learning day, and then four more hours of learning public speaking or community development or how to use AI. So that’s a great sign of the innovation ahead for us in K-12.

AI, Learning, and Evolving Careers

Michael Horn: So just staying on that for a moment, because I keep coming back in my head to your chess point around AI and how it produced a legion of players who are perhaps more intrinsically interested in pushing themselves in chess and so forth. Part of that is also an on-demand learning element: I am learning just in time, almost right as it comes. There’s relevance in my life, maybe in the passion projects I’m pursuing, and I have a thirst for more knowledge to up my skill set or what I’m able to create and build, whatever it might be. And I’m curious how you see that against the backdrop of how careers might change as well. How much do you think you need basic narratives around history and science and human progression, just so people have, and Diane, you probably have a view on this too, frankly, a shared progression or set of stories around the society we’re in, maybe narrow and thin, but an overview, and then a lot of on-demand, as-you-need-it exploration based on these projects and periods of passion that can actually drive some of the AI knowledge building, if you will? I’m curious, Reed, if you have a take on that.

And Diane, you may end up having to as well.

Reed Hastings: You see, there’s a lot wrapped up in your question about future careers and jobs. Let’s take the case that AI gets stronger and stronger, to the degree that a given society abandons schooling and just says it’s a waste of time, and citizens learn everything as they need it from AI. The danger is that if the AI tells them that the world’s going to end, or the AI tells them that such and such is the best leader, we’re creating a lot of sheep as human beings in that society that abandoned education. So if you think of the historic role of education as one part creating a citizen and one part creating an employee or economic actor, the economic actor part will become less relevant, because the jobs are different, that kind of thing. But schools creating a human narrative, a narrative of the country, a caring for your fellow citizens, and some stable fact base, so that the citizen is not totally reliant on AI for what’s true, will be very important, I think.

So again, the role of creating future citizens I think becomes more important relative to the historic role of school. And then the economic actor part is tough, because what are we going to do better than the AIs of 20 years from now? It’s really hard to see what those roles are. And we’re in the first wave of AI now, where it’s trapped in the phone and in the laptop, so it can only do some things. It can design you a house and its architecture, and it can write a contract, so lawyers are under threat, but there’s a lot of real-world activity it can’t do. The low-cost humanoid android robots that we’re likely to get are sort of the second wave of AI. At first they’re really great because they do things around the house for you, sort of a Roomba vacuum on steroids, because they walk around and cook scrambled eggs and clean the house and do all those things. But then they’re in the Starbucks, and then they’re flying the planes, and then it’s basically every job. So again, it’s hard to see how AI in combination with android robotics becomes something other than really replacing our economic functions.

And that could make for, well, almost surely will make for, a very rich society. How those riches are spread throughout the society is unclear, because that's a political process. And will our political process distribute those amazing gains of this new technology in a way that's cohesive, so we avoid the French Revolution? That's an open question. Because if the inequality gets too extreme, you know, you get a French Revolution situation. So we'll have to see the political processes of distributing the great gains that AI will provide. And they will provide gains. You'll be able to get much better medical care much cheaper: the diagnosis, the intervention, all those things. So just think, if the AI doctors are really good and you can get in easily and see one. We have all these expertises in medicine and specialties because no human brain can be great at both brain and foot disease.

I mean, you know, podiatry. So that's why it's carved up into all these areas. But AI will be great at all of them. So when you consult with an AI doctor, you won't have to wait to go see the specialist. I mean, that alone will save you enormous amounts of time and give you better outcomes.

Diane Tavenner: Our last conversation was with Tom Lee, who’s the founder of One Medical and Galileo, who’s like doing exactly what you’re talking about.

Reed Hastings: Exactly. So again, AI is such a big factor that its impact on K-12 is kind of like 5% of the total picture. Now, it's the 5% that the three of us are really going to try to land and do well. But you have to think, I think of AI as a once-in-a-thousand, ten-thousand, hundred-thousand-year change to society. We're pioneering this thing, and for the next 50 years we'll be in the middle of it.

Diane Tavenner: Yeah, it's going to be a maybe turbulent but fun ride for the next 50 years. I think the only thing I would add, Michael, is I keep going back to this: I think it's more important than ever for humans to know themselves and for us to do that work on who we are and what we think and what we care about. And so I see that as the opportunity and the need. Reed, a couple more questions to take this from big to something specific. You've been thinking a lot about assessment and the potential for AI in assessment and how we use assessment in schools. And so I'm curious for you to unpack that a little bit. Like, what are the challenges you've identified around assessment, and the opportunities, and how can you imagine AI, or see it right now, impacting how we think about it? Assessment is a huge part of education.

Reed Hastings: Yeah. I mean, it's one tactical part of what AI can do. So, you know, there are a number of problems in current assessment: in terms of cost, in terms of balancing formative and summative, in terms of building confidence in the parents and the citizens who don't take the tests. And the way we do things is the tests have to stay pretty secure, so then we can't really share copies and you can't take them multiple times. And, you know, what you'd like to have is something where the AI was interviewing you as a student and assessing your knowledge like a human would: Okay, let's talk about historical antecedents to this, or, depending on the level.

Open Learning Assessment Vision

Reed Hastings: Let's talk about this biology program, and, you know, basically probe and clarify things and question, and then come up with a ranking, which is a little like what happens in chess. You know, it's a narrower domain, but you get a chess score, and then as you get better and age up, your score increases. And you can think of that as, you know, ultimately we'd like to have an assessment system that was open and free, and you could go to, you know, whatdoiknow.org, OK, and everyone agrees that's the standard. And whatdoiknow.org does a broad range of assessment, and you could assess yourself every week if you want, and schools would assess you every now and then. Parents could assess themselves and their kid — and politicians. And it's all open and free as to what kids know. And maybe there's even a chess strand built into that, and you get your chess score, you know, which is how good you are.

You know, you play nine games and you sort of see how good your chess level is. So I think that will really disrupt the current assessment industry. And eventually some states will want to save money, and they'll say, okay, instead of spending all this contract money, what if we just use whatdoiknow.org, and, you know, that'll be a proxy. So I think because of cost savings that will come in and be quite practical. So, you know, there's, I think, a whole bunch of companies working on assessment, and then the trick for them is: is that going to be the winning strategy, or do you just wait for ChatGPT 7? And in ChatGPT 7 you say, assess my knowledge on a scale of one to a thousand for math, or for overall. And it just does it, you know.

You know, you've got a couple of different approaches to how that's going to come about. But think of it this way: computer-based assessment will lead to much more understanding and accuracy and guidance, and have a big impact on current testing, in terms of parents having more confidence and it being helpful, i.e., formative as well as summative. And so just incredible amounts of positive change in that area, paralleling the different approaches for teaching in the software.
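Reed's chess analogy maps onto the Elo rating system. Purely as an illustrative sketch (the K-factor of 32 and the 400-point scale below are the conventional chess constants, not anything from an actual assessment product), the update after each assessed item might look like:

```python
def expected_score(rating, item_difficulty):
    """Elo-style probability that the learner handles this item correctly."""
    return 1 / (1 + 10 ** ((item_difficulty - rating) / 400))

def update_rating(rating, item_difficulty, correct, k=32):
    """Nudge the learner's score toward the evidence from one assessed item."""
    actual = 1.0 if correct else 0.0
    return rating + k * (actual - expected_score(rating, item_difficulty))

# A learner rated 1000 who answers a difficulty-1200 item correctly
# gains more than one who answers a difficulty-800 item correctly.
harder = update_rating(1000, 1200, True)
easier = update_rating(1000, 800, True)
```

Because each item only nudges the score, no single sitting is high-stakes, which is part of what makes the open, assess-yourself-every-week model Reed describes plausible.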

Michael Horn: Reed, just a couple more questions as we start to wrap up here. One of them: you talked about DreamBox Learning, math and so forth. The flip side of this is we've seen in recent years a big push toward teaching reading in alignment with the evidence around what's called the science of reading. And there seems to be a growing number of folks right now who are using AI in meaningful ways with regard to teaching reading specifically. I'm curious what you're most excited about on that front and what's grabbed your imagination or attention.

AI Advancing Language and Learning

Reed Hastings: Well, I'm tracking two companies, Amira and Ello, in that space, and what they're getting good at is phonemic processing. So, you know, the four-to-six-year-old is a struggling reader, and the AI listens and is able to process the sounds and then help the student sound out the words. And think of it this way: sounding out the words is the science of reading, sort of, you know, grounded in that phonemic translation. But AI will also be used to help people speak English, whether they grew up with it or not. So I would say, for speaking English, i.e., knowing vocabulary and sentence construction: you know, at the high end in the U.S., people hire Chinese or Spanish nannies so their kids learn a second language. And so you can think of AI as just like the nanny that will teach you, especially, you know, when you learn language so easily biologically, because you've got an AI tutor helping you with reading, helping you with, well, for that matter, math, et cetera. So in the first phase you'll see a bunch of AI companies do, like, reading apps or a math app, and then it will just be a learning app, right? And the market will consolidate, and parents will say, okay, what do I want to do? And then the open question is, will those app companies continue to add enough value versus using Claude or ChatGPT or Gemini directly.

You may in the future be able to say to Gemini, teach my kid to read, or I want to read. And like, you don't need all these separate apps, right? It's just the general thing; it's one of the 19 things it does. It can also teach you physics, and it can teach your kid reading. But right now they're moving so fast that only a few companies are focused on the particular types of phonemes and the typical reading problems that different kids have. So I'll call that a specialized audio processing challenge with specialized training, one the big companies, moving so fast, aren't really trying to focus on. So I think for a while, for five or 10 years, there's a market for independent products. So that'll be very exciting.

Diane Tavenner: Along those lines, Michael and I, we're all lifelong learners, and you always have amazing recommendations. Reed, what have you been reading, watching or listening to that we should know about, that's capturing your imagination lately?

Reed Hastings: Interesting. I'm listening to Tony Fauci's memoir, and, you know, he's 80-something years old now, retired, and he's worked his life in public health. Some part of the citizenry reveres him, and some part hates him. And, you know, he's become sort of a test for many things. And so it's a fascinating life: he was just a kid trying to become a doctor, trying to become a health professional, you know, to serve the world as best he could. And yet he was thrust into this amazing stew, both of HIV for a very long time and then of COVID. I'm only partway through the memoir. But I love that kind of very honest reflection, because I think he is being honest in his reflections, and he reads it in his own voice. So that's always cool.

Diane Tavenner: That’s awesome. That’s awesome.

Michael Horn: What’s on your list, Diane? You got to share now.

Diane Tavenner: Well, I got one this weekend. I'm curious if you guys have listened to this yet. It's pretty new. It's a podcast; I listened to the first three episodes. It's called The Last Invention, and it's the story of basically this 70-year quest to get to this moment where we are in AI, the AI revolution, if you want to call it that. It starts with a slightly sensational opening, but I think that's what you have to do in podcasts these days. But the history is engaging.

It feels well told, feels relevant to me and provides a lot of useful context. And as you know, one of my kiddos is deep in this, and it's helping me understand a lot of things that he says and what he's read, in a way that a layperson can understand. So I'm enjoying it.

Michael Horn: Very cool. My other takeaway from that is that we need a more sensationalistic hook on our podcast. I'll go shamelessly pandering here just because, Reed, you're our guest, but I'll say K-Pop Demon Hunters. We watched it during the two-day limited theatrical release, so several weeks before we recorded this. But the reason I bring it up is actually twofold. One, it is still very present in my kids' lives every single day, right? To the point where, Reed, I have twin girls and they fight on the way to wherever I'm driving them.

And my answer now is to turn on a song like Golden from K-Pop Demon Hunters just to get peace for five minutes in the car. And it's incredibly effective. The second reason: my wife is Korean American, and she tells the story that when she was in grade school her teacher said, here's a map, you know, fill out where different countries are. And the teacher mislabeled Korea, had it in completely the wrong place. And my wife had this big argument with her. And now the unexpected twists and turns of global cultural influence have Korea squarely in the pop limelight, even as it has its own demographic challenges right now. So it's just a fascinating twist and turn. That's been shamelessly pandering here, but it felt like a good one, and maybe a little bit more lighthearted, Diane, from me.

Reed Hastings: And that's a great one of human connection that was minimal AI creation, you know, all humans. And for every great hit we have like that, we have three or four that don't hit, and we're still not sure why. And it's great to see K-Pop Demon Hunters cross over so, you know, 65-year-olds like me can watch it multiple times and kind of get a little more out of it each time. So it's got a Star Wars or Shrek kind of multi-layered aspect that really makes it part of the cultural landscape. And it is amazing that because the Internet is so global, Netflix can be so global. And so we can recruit and develop the best talent, whether that's in Korea, you know, or Poland or Brazil or Hollywood or Kansas. So it's been great, the sort of Internet explosion of creativity that it participates in.

Michael Horn: It's phenomenal and a good way to speak to our feelings, which you led off with, Reed. So huge thanks for joining us on this episode of Class Disrupted. And for all of you joining us, we'll see you next time. Keep the comments, keep the questions coming. It is driving a lot of our thinking. We know that. And we'll see you all next time on Class Disrupted.

This episode is sponsored by LearnerStudio.

From Education to Anthropic: What Impact Will AI Have on Learning? /article/from-education-to-anthropic-what-impact-will-ai-have-on-learning/ Wed, 05 Nov 2025 17:30:00 +0000

Class Disrupted is an education podcast featuring author Michael Horn and Futre's Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing.

Hosts Michael Horn and Diane Tavenner sit down with Neerav Kingsland, a longtime education leader now working at AI safety and research firm Anthropic, to explore the evolving intersection of artificial intelligence and education. Neerav shares his journey from working in New Orleans' public school reform to his current role at a large AI company. The conversation covers the promise of AI tutors and teacher support tools, the key role of application "wrappers" for safe and effective student interaction with AI, and the need for humility and caution, especially with young learners. The episode also delves into the broader societal impacts of AI, the future evolution of schools, and the increasing importance of experimentation and risk-taking for students navigating an uncertain, tech-driven landscape.




Listen to the episode below. A full transcript follows.

Michael Horn: Hi, it's Michael. What you're about to hear is a conversation that Diane Tavenner and I had with Neerav Kingsland, longtime person in the education world who's now at Anthropic, one of the major companies behind the large language models — of course, Claude being theirs. And I had several takeaways from this conversation, but I just wanted to highlight a few for you. First was Neerav's humility in constantly saying we don't know the answer to the full impact of AI on education, let alone society, and just how honest that felt. Second, I was struck by how much he sees AI tutors as being a major use case for the technology, and he referenced things like Amira or Ello as perhaps examples of where this could be going. Third, teacher support was something he named, whether it be for efficiency gains or to help with facilitation and the like. Fourth, I was struck by how he repeatedly emphasized the importance of caution when it comes to young children interacting directly with AI, particularly the large language models themselves, and his belief as a result that wrappers, essentially applications, if you will, application layers, will be a critical part of how young people interact with AI, both to build in more content expertise, more scaffolding, but also perhaps the protection from AI itself.

And then finally, the last thing I'll leave you with was when we asked him what, perhaps, would be most valued in the years ahead for schools, he said something that is perhaps undervalued today, and that is risk-taking. And that's something that certainly landed for me. So I hope you enjoy this conversation with Neerav Kingsland, and we'll talk to you soon on Class Disrupted.

AI’s Role in Education Trends

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you, and I'm excited to get into this conversation that we've been teasing our audience with since the opening episode around AI. And then we had a few weeks to get our guests lined up. And I think, as today's conversation will show, it has been well worth the wait, I suspect. But there are a lot of developments, obviously, in AI, with large companies constantly making exciting updates, rolling out new applications and features and the like. And so you and I have been constantly updating our own thinking, emailing back and forth a lot, and I think today is going to be really exciting to continue to update our thinking.

Diane Tavenner: Yeah, I agree. I have conversations regularly with people who listen, who say, you know, this is the dialogue we want to have about AI and education. And honestly, I can't think of a better person I'd like to be talking about this topic with. Our guest today is Neerav Kingsland. And Neerav is someone Michael and I have both known for many, many years. And the reason why is he worked in New Orleans in the post-Katrina days, helping to build the nation's first public school system there where over 80%, 90% of the students attend charter schools. He served as the CEO of New Schools for New Orleans, then in a variety of philanthropic roles with the Arnold Foundation and Reed Hastings, and as a managing partner at the City Fund. And then Neerav made this big jump a few years ago and joined Anthropic, which is of course one of the handful of leading foundational AI companies, known for its large language model Claude, and he leads strategy there. So with education and AI both covered, Neerav, it was hard for us to imagine someone better positioned to open this season and talk to us about the big picture of AI and education. And so welcome. We're really happy to have you here.

Neerav Kingsland: So thrilled to be here. Thanks, Diane.

Michael Horn: No, well, so, Neerav, I want to start with this because I’d love to just understand your pathway from education to Anthropic. And I’ll say up front, Diane may already know some of this, but I don’t. On your LinkedIn, it looks like you effectively left education and moved hook, line and sinker, if you will, into one of the leaders in AI. So I would love to just understand what is, you know, what led to the move. What does your day job look like these days? Is education still present in it?

Just help us understand the pathway.

Neerav Kingsland: Yeah, totally. So I had been following and reading about AI since my time in New Orleans. The book that really hooked me was The Singularity Is Near, the Ray Kurzweil book, which is 25 years old now, but pretty prescient. I think he predicted AGI in like 2033 or something. And here we are. And so I think that opened my eyes to the possibility. I wasn't technical enough to know how right he might be, but it was kind of big if true. After you read a book like that, then, you know, as a layperson, I just kept on reading, listening to podcasts, blogs and so forth. And then it was really when GPT-2 came out, so kind of, you know, maybe '19.

Michael Horn: You were earlier than us.

Neerav Kingsland: Yeah, only because I was, like, trying to write poetry with it, and I was like, oh my gosh, this is pretty good. Like, we might be knocking on the door. And so, you know, I just started thinking, like, these ideas and this technology could be the biggest thing to ever happen to humanity. And we might be getting pretty close. And so I started thinking very seriously about a career change there, and the transition was a little more gradual. I reached out to Open Philanthropy. I knew the leader there, a guy named Holden who ran that foundation (that's Dustin Moskovitz's foundation), and just asked if there was anything I could do. I knew they did a lot of AI safety work, and in a cool way, they had a lot of young founders, and I, at that point, was a little older and had scaled nonprofit and philanthropic work.

So I became an executive coach, just kind of an advisor to some AI safety founders, and did that on the side for about a year and a half. So I got to know the field, got to know a lot of amazing people, and eventually paths crossed with the Anthropic folks. And, I was wowed by their mission and the team, and so joined about three years ago now. It was before ChatGPT, so it was really a small research org when I joined. And then, you know, the rest is history.

Michael Horn: It’s such an interesting trajectory. It’s such a cool example, frankly, of putting yourself in the middle of something. Right. To make that sort of a switch. How does it connect? Like, does it feel like you’re leaving education in some ways, or does this feel like some other way of framing it in terms of, you know, your own purpose, life, work, the arc of the things that you’ve done in terms of impact on humanity? I just love to get that insight.

Neerav Kingsland: Yeah, I'm still very involved in education. I'm on the board at City Fund. There's a new leader there, Marlon Marshall, who's absolutely fantastic, so I stay connected through that. And then my first couple of years at Anthropic, we were mostly just trying to stay alive. And I didn't have much to contribute on research, so I was doing business: sales, BD, fundraising. I did that for about two and a half years. So I went from an education nonprofit to, like, SaaS salesperson for two or three years, which was great. I learned a lot, and, you know, it's very important, obviously, for a company to succeed. And then about a year ago, our CEO, Dario, wrote this piece called Machines of Loving Grace, which I'd highly recommend, and it set forth kind of a positive vision for AI and society.

And at that point, we were a little more stable on revenue, and so I and a couple of others kind of raised our hands to go create an org within Anthropic called Mission Labs. And so that's actually where I sit now, where we incubate projects that can help AI do good in the world. And so I've done some education work, helped get our life sciences, kind of drug discovery, work going. I'm working on cyber defense now. I can go into more detail on any of that. But through that, I just feel insanely fortunate to sit both at Anthropic and in a part of the org whose mission is to incubate projects to do good with AI.

Michael Horn: That's fascinating. That's really neat, and great of Anthropic to create a division that's focused on all those questions as this emerges. And we'll make sure to link to that letter in the show notes, because I think that's important context for the audience. Just one more question before you jump in, Diane. But I'm curious. We're getting all these hot takes right now: that AI is going to radically transform education; that AI is going to be the worst thing to ever hit education, or maybe incremental at best; to, you know, it actually obliterates the purpose of education itself in some pretty significant ways.

Give us sort of your headline of where you sit on that continuum, and then you can provide the nuance. I just gave you the headlines to navigate.

Neerav Kingsland: Yeah, maybe fortunately, maybe unfortunately, any of those headlines could end up being true. And, you know, we can see what we can do to get to the good outcomes. I think maybe let me start more at the micro of education as it exists today, and then we can zoom out a bit. In most ways, this is the most optimistic I've been about education in the 15 to 20 years I've been working in the field. The two things that I am really, really thrilled about are AI tutors and AI teacher support.

I have a four-year-old and a six-year-old. I experiment on them all the time, and I've just been wowed. My time in New Orleans was early days in the edtech space, and the products were pretty nascent. It wasn't a huge part of our strategy. But now I couldn't imagine running a school where that wasn't a pretty key part of what you were thinking about. And the AI tutors specifically teaching kids to read, programs like Amira and Ello, I think are very strong. For elementary math, I have my daughter in a program called Super Teacher, which I think is wonderful. And then as you go up, I think there's just more and more in high school and college; there's a group called StudyFetch that builds on top of us that we're thrilled by.

AI Tutors and Teacher Support

Neerav Kingsland: So it just feels like the AI tutors are going to happen. They’ll likely be very impactful and we’ll get fairly close to the dream of scaling a high quality, one on one instruction for at least an hour or two a day for every kid. The other thing I’m super excited about is AI teacher support, both in the efficiency sense of lesson planning, but more in classroom facilitation. So you guys might have seen Course Mojo, which Eric and Dacia founded, where you basically combine AI giving live feedback in a classroom, that information going back to the teacher, the teacher then being able to modify their instruction and how they’re facilitating the class. And that all just seems pretty magical to me. And so very excited about that as well. The things I’m worried about are: you can cheat with AI. Obviously we’ve seen that happen.

So it can make you dumber. You know, Anthropic intentionally doesn't serve kids directly; I think it's actually against our terms of service to be a child and use our product. And so we really want there to be an app layer on top of us that is shaping the experience for a kid, so we can push it in the right direction. And then zooming out, where is this all heading? You know, I think the greatest opportunity is that we have a chance to flourish. We can choose the jobs we want, the education paths we want, and you can imagine a much better world than the grind a lot of the world has to be in today. I do think there's a real threat. There are intellectual pieces coming out on gradual disempowerment, which I'd encourage your readers to get familiar with. It's basically the idea that the more you hand off to the AI, the more you might hand off your intellectual and emotional maturity, and humans could get disempowered.

And so I think staying on the good side of that is obviously very, very important. So all that’s to say, I agree with all the headlines and the future is, you know, up to us in some way.

Diane Tavenner: That was awesome, super helpful, and there are like 10 different directions we could go with that right now. I think one of the things that Michael and I have noticed is that it feels like, across the board, education gets used as sort of a use case and a case study for how AI will be applied far more than it normally does with new technology, and we're not used to this. We're not used to being at the center of the conversation and what's happening. And so that's been a really interesting idea for us to grapple with. One of the things you said there, Neerav, and I think this might be helpful to dig into for people, is that you don't expect young people, kind of under 18, K-12, to be engaging directly with Claude. You expect there to be sort of this app layer on top of it, and you named a variety of different programs.

And so I'd love to unpack that a little bit more, because I don't think most people think about that. I think they think AI is literally this dialogue box, and you just go back and forth. And we're really trying to uncover what it actually looks like when you put that app layer on top: how do people engage with it, and what does that do? Especially for young people who don't have skills yet and don't have experience and don't have knowledge. It's very different when the three of us are using a dialogue box than someone who hasn't really built their analytical skills, their argumentative skills, their expertise. And so let's dig in a little bit. Talk to us about who builds on top of you, and how does that happen and what does that look like?

Neerav Kingsland: Yeah, I mean, just to start from a values perspective. We need to be careful with kids. Yeah. As we’ve seen with social media, gaming, whatever, whenever there’s new technology, you don’t know how it’ll affect kids and this technology, particularly when they’re basically talking to a human-like figure that is increasingly more and more intelligent. Yeah. Our brains weren’t hardwired for that and kids need to be supported in how they use AI. So I think that’s just like our starting point. It’s early days.

Let's not make dumb mistakes that we'll look back on and regret. At the same time, let's figure out ways to give kids access to this technology so they can benefit from it. So, to start, maybe at the extreme example: if you talk to Claude or you talk to ChatGPT or Gemini and you're four years old, you're not going to learn how to read. Like, it's not gonna happen. But if you use Ello or Amira, it's different. With my daughter, when she was about 6, I started using Ello with her, and I was pretty convinced that if she just did that for 20 minutes a day, she would learn how to read. Which was just spectacular: I really didn't think she needed much human tutoring to learn how to read, given that app. And you can imagine how many kids across the world that would just be game-changing for. The only piece that was interesting, and I think it gets into the future of schooling, is that there was no way she was going to do 20 minutes of that app without me sitting beside her. So I think historically, when I think back on my time in New Orleans, very often tech was used as kind of a babysitter to allow the teacher to do small-group instruction. And so a big curiosity for me, and I know, Diane, you're a pioneer here, is how to get these tools into the school in a way where the teacher feels accountable for what's happening and the culture of the school is motivating the kid to get through it.

Transforming Education with Tech Innovation

Neerav Kingsland: And I know groups like Alpha School are thinking a lot about the cultural piece now. But yeah, just the idea that if school was set up to really maximize the interaction with the app layer, we could have amazing gains, I really do think. The short of it is, I don't think typing into a box is great pedagogy for kids under 18. There's so much more you can do, and we're thrilled to be doing it. So maybe one last thing on the app layer: when I took over this role in Mission Labs, I knew education was a place we could start. So I just did a sprint and, probably over two months, met with 40 or 50 ed tech companies, philanthropists and VCs to see what was out there. And then, kind of informally, we just started working with 10 or 15 of them, giving them the same technical support we'd give to the Fortune 500, but more out of a mission perspective. And so through that, we've gotten to start building with a lot of the app layer companies, which has just been wild.

Diane Tavenner: That’s pretty awesome. What does that look like when you build or work with them? I mean, I, again, I think people have no idea what this would even be.

Michael Horn: You know, well, and just to extend that, right, Diane? I think a lot of people say, well, why doesn't Anthropic just do it all? Why do we even need the apps that come from third-party companies, right?

Neerav Kingsland: Totally. You know, Michael, to your question: I think in most domains right now, to really understand the person on the other end, in this case children and their needs, you need domain expertise. You know, maybe one day Claude out of the box will know everything, but it doesn't right now; it doesn't know how to be a great teacher the way, you know, educators building apps would. And so we don't feel it's ready to do it all, particularly in education. And then what we do is kind of like forward-deployed engineering. So we take a technical person on our team who's an expert at building on top of Claude, and we'll do an intake meeting where we try to understand the overall mission of the org we're working with, and then their product roadmap, what they want AI to be able to do and where they're struggling. Then we just dig in with them very tactically. It might be a shared Slack channel, a weekly meeting, and we try to get whatever they're building out to launch. We'll stick with them until that happens.
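One way to picture the app layer Neerav describes: the wrapper, not the model, owns the pedagogy and the guardrails, and the model is called as a component. The sketch below is hypothetical; the guardrail text, blocked-topic list and function name are invented for illustration and aren't from any real product:

```python
# Hypothetical app-layer wrapper: the product, not the raw model, encodes pedagogy.
GUARDRAILS = (
    "You are a reading tutor for a 6-year-old. Never give the answer outright; "
    "help the child sound out words phoneme by phoneme. Keep replies under 40 words."
)

BLOCKED_TOPICS = ("violence", "medical advice")

def build_request(student_utterance, history=()):
    """Assemble a chat request: guardrails become the system prompt, history is preserved."""
    if any(topic in student_utterance.lower() for topic in BLOCKED_TOPICS):
        return None  # the app layer refuses before the model is ever called
    messages = list(history) + [{"role": "user", "content": student_utterance}]
    return {"system": GUARDRAILS, "messages": messages, "max_tokens": 150}

req = build_request("Can you help me read the word 'ship'?")
```

A real app layer would do far more (age-appropriate scaffolding, teacher dashboards, richer safety checks), but the shape is the point: the kid-facing rules live in the application, not in a raw chat box.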

Diane Tavenner: That’s awesome. Let’s shift to older young people if you will. I think, you know, I’m now really focused on the successful launch of young people post high school into whatever their post secondary pathway is and into their first foothold job and careers and life. And I think that your CEO has been one of the first and few people to be really honest about maybe the short sort of medium-ish term impacts potentially on careers, especially for young people. And I think we’re seeing some data and statistics that suggest that, you know, recent college graduates are struggling to find first jobs and AI might be an impact there. And clearly there’s complicating factors around the economy and whatnot. But I think if we look back in history it’s logical to assume with such a seismic transformation that we will see, you know, many jobs go away and new jobs will be created. But there might be some, you know, gaps and timeline where that’s, that’s going to be a little bit rough.

Like, how do you think about that? How do you think we should be thinking about that? How does that influence what you think we should be focused on in high school and postsecondary, for those of us, you know, directly serving kids?

Neerav Kingsland: Yeah, you know, I think at Anthropic we just try to be open and honest about what we’re seeing and where the tech’s going. And, you know, ultimately we’re not policymakers, and so we want to inform the people, both citizens and government, who are making these decisions. Maybe to, like, zoom out a little bit to your point: we’ve been through these transitions before. We have. And, you know, I think exactly to what you said, they can be painful while they’re happening, even if you end up in a better place on the other side. But the last big one we went through was farming to the Industrial Revolution. And then, you know, coinciding with that was basically the falling of the monarchies across Europe. And then we went on a 150-year exploration to kind of get to capitalist, welfare, democratic systems, at least in Europe and the U.S.

And, you know, a couple world wars in between. And so it was, you know, an extremely tumultuous time. And, you know, whatever happens in this transition, I hope it happens much more peacefully. And I think, you know, we have the lessons of history now in a way we maybe didn’t back then. And so all that’s to say, I think, just setting the stage, you’re absolutely right. And big changes are likely afoot. In terms of what that means right now, I find that to be a very confusing question that I personally don’t feel like I have good answers to. And, you know, I find that I live kind of in two worlds.

One, when I show up at Anthropic every day, and then I go home and, like, teach my kids to read or whatever. And I don’t quite know how to put those two worlds together sometimes. So I think the short answer is I really don’t know. Like, if I was a kid in college, you know, what would I do differently? It’s very hard to know. I’ll give, like, a take because I’m on a podcast, but this is low confidence. Things that I’ve been thinking about, you know, for my own kids on some level are experimentation and risk-taking. I think those are probably already undervalued in school relative to just, like, grinding and taking a test and so forth. And so I think they’ll be even more important during a time of transition, because the paths will be less structured and we’ll just know less. And so trying, failing.

You know, the more you can do that, the earlier in life, probably the better. Then another thing I’ve been curious about is the ability to manage AIs as basically small teams; that could be a very important thing. You know, managing teams is a very important skill, obviously, as we all grow. And, you know, when you look at business schools now, they’ve really restructured around doing work in teams. And so I have been curious about what it means to have a team of AIs working for you and how that should affect, like, high school, college, grad school and early employment.

Diane Tavenner: It was just so fascinating to me as someone who has been pretty fanatical about leadership development and management development and tried to move, when we’re thinking about humans in that regard, to a much more sort of collaborative approach to leadership and management. Now I think about AIs, I’m like, well, I think we might be going back the other direction. I’m not sure you take that collaborative human approach right?

I think you take a more sort of classic management approach. So maybe what’s old will be new again.

Neerav Kingsland: I always say please when I’m asking Claude for things. Err on the side of seeing the good side.

Navigating AI in Education

Michael Horn: But I appreciate your honesty, Neerav, and, like, sort of, there’s a lot we don’t know right now around this. I want to stay on the question of maybe the here and now with the older side of the young people, as Diane phrased it, just because you’re seeing a lot of professors, you mentioned cheating, for example. You’re seeing a lot of professors return to the blue book, oral exams, things of that nature. And I guess on the one hand I get it, and on the other hand it feels to me like maybe we’re not asking people to do the right things. Like, we need an update on the purpose of what they’re actually doing in the work, so we can see how they use AI with the knowledge and skills that they’re building to do something more than they could have before. And I’d just love you to sort of think through that puzzle out loud with us, about how you’re framing that dichotomy of approaches.

Neerav Kingsland: Yeah, I mean, I was talking to a couple education philanthropists, or it was over email, I think, and I said there’s never been a greater time and a more exciting time to be an educational entrepreneur and to go create a school, I think for these reasons. Whether that’s in higher ed, high school or whatever, like, what an amazing time to go build a school. And so for all your listeners, I hope there’s people out there who are doing some of the best work in the world that you can do. So generally, and this is kind of the ethos of New Orleans, a lot of trial and error and trying to figure out what works and what doesn’t just needs to happen broadly across the country right now. And so I think my short answer to that is I hope a lot of people try things and we learn. That being said, we obviously have existing institutions.

I think I’m pretty sympathetic to the bluebook thing. I think that’s probably what I would do for like a certain type of. I wouldn’t want that to be all my exams, but I do think having kids write in class in this transition is probably a pretty good thing to do. I also think it raises a bunch of questions about how well are we doing on education if all these kids are just cheating?

Michael Horn: Do we have the incentive structure toward encouraging risk taking?

Neerav Kingsland: Yeah.

Michael Horn: Yeah.

Neerav Kingsland: Or, like, I mean, you know, they’re kids, and so, you know, 18’s not totally kids. But it worries me that for whatever reason, not necessarily the kids’ fault, they don’t value the learning in and of itself. And that could be because they’re getting taught the wrong thing, or because it’s hard and we’re all lazy or whatever. But cheating is also a sign of people not valuing the work. And so that does raise larger questions.

Diane Tavenner: Yeah. Two things coming up for me in what you’re sharing, Neerav. The first is we’ve both spent a lot of our careers in the space of empowering families and parents to have choice and options and opportunities for their children. And you’re, you’re talking about teaching your own children to read and math. And so, I mean, it seems obvious that this is going to give more options, create more opportunity, more autonomy. But, you know, especially intersecting with a lot of the policy changes that are happening, how are you thinking about that? What do you think is possible? You just said there has never been a better time to create a school. But how do you think about it from the family perspective, the sort of consumer perspective, if you will? What’s possible?

Neerav Kingsland: Yeah. And the thing, one of the things that’s really exciting to me: a couple months ago I had the chance to go to Rwanda and visit schools. There’s a great organization called Rising Academies that we’ve been working with there. Yeah. Truly spectacular. And hopefully we’ll be doing more in Rwanda and other countries in Africa, and then also in India, over the coming years. But AI, relative to most historical education innovations, I think will decrease inequality, because it’s basically a cheap way to scale up great teaching, and, you know, if you have to rely on an individual human, there’s obviously limits to what you can scale, and there’s scarcity in that in a way there isn’t with AI. So I think, big picture, on the family consumer side, you know, people in under-resourced schools.

This should be a boon if we can get it right. That just all makes me pretty optimistic. Yeah. And I feel way more empowered as a parent to be able to have these tools to use with my kid if they were falling behind or anything. So I think, broadly, if you can again get it right, avoid the cheating, get the app layer right, and parents get involved, I think it should be amazing for families.

Diane Tavenner: Yeah. It makes me wonder whether people are going to start looking to schools for different things, and maybe they’re already looking to schools for different things, because they do tell us they care about the activities and the sports and the social interaction and the engagement. And, you know, if you’re learning to read at home and you have your personalized math tutor and whatnot, you know, it does sort of beg the question of what school looks like. And I think one other place we haven’t touched yet is, I think people’s minds go to, you know, the AI being really direct to students: How is it teaching them or tutoring them? But I think sometimes the unsexy stuff might be some of the most powerful stuff, like how is it actually helping us to transform the master schedule, literally, you know, or the bus schedule, which used to dictate schools. And so do you see anything in that space, sort of the structural aspects of running big schools and systems, and what might be possible there, and how might we see that, feel that, you know, in the field?

Balanced Learning with AI Tutors

Neerav Kingsland: I definitely remember the pain of bus routes for launching schools in New Orleans post Katrina. That was a gnarly bus route environment. You know, I’ll just riff a little bit, but again, I think great school entrepreneurs will build the future here. But so, you know, my daughter goes, she’s in first grade, she goes to the local public elementary school, which is wonderful, very happy with it. And I don’t begrudge them for not, you know, a year into the AI revolution or whatever, having restructured the school. But I think what I wish my daughter’s school looked like right now would be that she’d go to school. She’s six. And so maybe 60 to 90 minutes a day on screen is probably the max I’d want with AI tutors that were doing reading and math.

And like I said, with the teacher highly involved in her progression, and human tutoring augmenting it as the data’s coming out on where she’s struggling or not, and a culture that incentivizes completion, my guess is, like, she would be moving much faster in, obviously, a more individualized way, if that was structured. You could get a lot of the core content there. And then I think for the rest of the day it would be supplementing that. And then there’s some things you want a whole-group discussion around, a book or things like that. And so I think there’d still be a lot of room for teachers to guide learning in discussion-based formats, and then for the experimentation and risk-taking, which is the projects, whatever they might be, doing things with other kids. So some version of that, where core content’s delivered in an hour or two a day, then it’s supplemented with teacher instruction, and then you have more time for exploration.

Diane Tavenner: Yeah.

Michael Horn: Neerav, I’m struck by, like, you’ve said it several times now, the AI tutor, the power of that. Right. And the responsiveness to an individual, particularly if you build it in with the experience and insight, right, that good educators and learning science bring to the table to create a good scaffolded experience. I’d love to get your take on this, because I feel like the AI tutor seems to be one of the flashpoints where you get a lot of skeptics coming out who’ll say the results aren’t nearly as good as you think. You talked about engagement, and you can solve that with the teacher, but they sort of feel like it’s very procedural, I think, would be the word that they would use, and maybe not getting at the depth of the learning. And so I’d love your take on, like, what are they missing that you’re seeing about how these work fundamentally right now and where they can go?

Neerav Kingsland: Yeah. Well, to be clear, they might be right.

Michael Horn: I love your humility in all this. It’s so refreshing, by the way.

Neerav Kingsland: Yeah. Which is more just, any vision I’m putting out I think needs to be subject to reality-based experimentation. So a lot to figure out, though. I think we’ll head in this direction. Maybe another way to say it: while I think there’s never been a better time to be a school entrepreneur, plausibly, my hope would be there’s never been a better time to be a teacher over the coming years. So I don’t want these tools to be dehumanizing for schools, teaching, kids. You know, my wife was a high school math teacher in New Orleans, and so I’ve been, not as front and center as she was, but fairly close to front and center of how hard it is to be a teacher, and obviously worked with hundreds of teachers in New Orleans during my time there. And it’s an extremely demanding and grueling job.

And I think most teachers would tell you they’re not spending their time the way they want to be spending their time. And so, to Diane’s point, if we can get more efficiency in, and then if we can offload some of the more routinized parts of teaching to AI, I think the teacher’s job can be a lot more creative and wonderful as well. And maybe that’s where some of the depth that plausibly could be missing right now could come from. So we’ve got a lot of arrows in our quiver. AI is one of them. But the teacher is just going to be absolutely necessary. Obviously, I would not want to send my 6-year-old to a school where she’s on a screen for 10 hours a day. I’m excited to see the role of the teacher evolve as well.

And I imagine a lot of depth will come from that.

Future Potential of AI Models

Michael Horn: I am struck how you are in this very moderate position though. Right. Because we’re seeing tons of legislation right now starting to move toward getting rid of all digital screen time. And then there’s the flip side of not wanting it to be sort of the zombie apocalypse, if you will. So maybe as we wrap up, let me ask this sort of broader question. Zoom back out away from education and just the larger set of tools. Right. That you’re working on and applications.

You’re seeing all sorts of different things that Anthropic, Claude, not just you, all the other LLM foundational models. Right. Are starting to tackle and sort of, I’m curious, like what folks maybe like me and Diane, others in education are sort of discounting or don’t understand that these models are capable of doing today or is right around the corner that we may be discounting and not seeing?

Neerav Kingsland: Yeah, it is hard. Like, things are moving exponentially, and our brains don’t think exponentially. One thing to do is, like, go play with GPT-2. Like, I think that was four years ago now, three years ago now. And then, like, go talk to, you know, GPT-5 or Claude or whatever. I think visceral ways to feel how fast things are moving help you understand where we might be five years from now. Because imagine if we make the jump like we did then for another five years.

And so I think, again, Anthropic’s just trying to be vocal that we, as the people who are closest to the technology, do think things are happening very, very fast, and there’s opportunity there, but there’s also a bunch of risk. In terms of where the models are heading, one way to think about it: there’s an AI safety group called METR, and one of the charts they put out that I think is great is how long a model can do autonomous work, in this case in coding, at, like, 60% accuracy, I think, is their bar or something. And, you know, a couple years ago it was like 30 seconds or something, and I think the latest was like four to eight hours. Yeah. And so I think AI being able to do knowledge work in 24- to 48-hour to maybe week-long chunks over the coming years might be one way to wrap your head around it. Like, I think that’s coming, and that’ll be a pretty big deal in technology.

Reflecting Growth Over Time

Diane Tavenner: I love this suggestion of going to play with GPT2. I don’t know if you remember, but I had the good luck of, we were in a conversation right before the big models were announced and you showed me, I guess what the early version of.

Neerav Kingsland: I remember that. Yeah, Claude in Slack was, I mean.

Diane Tavenner: I must admit, like, I really didn’t get it. I was like, wait, is this like, am I just googling something? Like, I don’t really understand exactly what’s happening. You certainly saw much more than I did at that moment. It took me a little bit to wrap my head around it. But I think about that moment, which I remember so clearly having with you, and totally not getting it and, quite frankly, not being terribly impressed. And now, I mean, it’s just so dramatic, you know, my learning curve and my arc, and I’m a novice and a layperson. And so I love this idea of, can we sort of, you know, set markers for ourselves where we kind of document or record what we thought or believed in that moment, or how we experienced it, and then look back and reflect on those as things progress? Because it is.

I mean, I almost feel out of breath some days. Like it goes so fast.

Neerav Kingsland: Well, you shouldn’t feel too bad. As somebody who was a part of leading our Series C six months later, maybe dozens of investors also were not too impressed with Anthropic at the time, but here we are.

Diane Tavenner: Well, by then I was, so maybe.

Neerav Kingsland: There you go.

Diane Tavenner: This has been awesome. Thank you so much for joining us. Before we let you go, Michael and I have a tradition where we just like to share with each other something we’ve been reading, listening to, watching. We really try to keep it outside of our day jobs, but we fail at that quite often. And so we’d love to invite you to join in that tradition. Anything fun, intriguing or interesting to share that you’ve been consuming?

Neerav Kingsland: Yeah. Two things for you. One, maybe, to give a little window into our world over here: the podcast everyone listens to at all the AI labs is Dwarkesh Patel’s. And so if you want to go deep, I’d recommend listening to that. A lot of the CEOs have been on it, and a lot of the researchers, and I always learn a ton there. And then the book I’ve been reading lately is really a wild one. It’s called Blitzed, a history of drug use in the Third Reich, which might be the best title for a book ever.

And, you know, it’s kind of, it’s probably fairly obvious what the book is about, but, like, there was a lot of speed going on, particularly in the later years of the war. And not that that was monocausal of, like, the fall of the Third Reich, but it played a role. And so, you know, it’s just, like, an interesting aha of, why did historians miss that? And what might be going on in our own time that is non-obvious, that is pushing history in one direction or another, whether it be drugs or something else. But that’s a fun read. Yeah.

Diane Tavenner: Yeah, that one’s.

Michael Horn: I was gonna say it sounds like you knew that one, Diane.

Diane Tavenner: It’s on our shelf as well. The title and the cover are very fitting, for sure.

Michael Horn: Diane, what about you? What’s been on your playlist or bedside table recently?

Diane Tavenner: Well, I’ve gotten pretty obsessed with a lot of what Scott Galloway is talking about, and he is on a lot of podcasts, so he talks about it all over the place. I’ve really been listening to the Lost Boys podcast series, which is focused on sort of bringing light to what he would describe as a crisis among our young men in America. And there are a number of stats that suggest that these young folks are in crisis. And for me, I think I went down this path as a mom of two sort of young, young men. And what I find is when I talk about some of the challenges or worries I have, there are lots of moms who come to me sort of quietly, in sort of whispered tones, and they’re feeling the same thing, experiencing the same thing, worried about the same thing. And so I do.

I think that it’s interesting and important, and I don’t know exactly what to do about it yet, but I feel compelled. So that’s where I’m spending some time.

How about you?

Michael Horn: That’s good.

Yeah. We had Richard Reeves on our Future U podcast last year around this, which was a great conversation. And Jeff Selingo is obsessed with Scott Galloway. I think it’s okay that I say that here. So those both resonate as well. Mine: I finished a book by Scott Anthony, who was an early collaborator with Clay Christensen, called Epic Disruptions, which is about disruptive innovations throughout history, some of which I don’t know if I would qualify as disruptive innovations myself, but they were all moments that changed things in pretty significant ways, and sort of the establishment’s reaction, or struggle, if you will, to get their heads around what was coming and how that would change things. And so it’s some pretty interesting flashpoints told in entertaining ways. So that’s been on my list, but we’ll wrap it there. Neerav, just a huge thank you. This has been a great conversation and stretched, I think, both of our thinking. And so just thank you. And for all of you listening, please, please, please keep writing in with comments, questions, lines of inquiry you want us to follow.

It’s been a real inspiration to me and Diane and directing us as we thought about the season. And so we look forward to more and we’ll see you next time on Class Disrupted.

Disclosure: Neerav Kingsland serves on the board of The City Fund, which provides financial assistance to The 74.

Class Disrupted Returns With More Questions About Artificial Intelligence (Oct. 16, 2025)

Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this kickoff episode for season 7, hosts Diane Tavenner and Michael Horn reconnect to unpack how artificial intelligence is shaping the education landscape. They discuss lingering skepticism about AI’s current use in schools and share their evolving feelings about the technology. The season will begin with a broad look at AI’s development, both inside and beyond education, before focusing on entrepreneurs and real-world applications that could reinvent learning.



Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good to see you.

Diane Tavenner: It’s really good to see you. It’s been quite a summer; it’s funny to not spend as much time with you in the summer. So it’s fun to be back. And I guess that’s one of the bright spots of returning to the fall, in addition to fall itself, which is, as you know, my favorite.

Michael Horn: It is its own magic right here in New England. This is like our best season, Diane. This is when, you know, apples, peaches, like we’re, we’re actually enjoying, you know, the, the limited harvest season compared to you people in California.

Diane Tavenner: Yes, we have, I will say we do have quite an abundant market right now, and we’ve been spending Sundays there. And I made a very delicious fig jam yesterday with the sort of end of the season figs. So it’s been a quick, fun summer and now we’re back this fall with a new season. Michael I can’t believe it’s season seven.

Michael Horn: Yeah, that’s wild in itself. I guess we crammed a couple seasons into that first year and a half or something like that, because it sort of whipped by during the pandemic and no one was counting years. But we are back by popular demand to focus on AI, of all things, Diane. And, you know, same theme of this podcast, right? We’re not just interested in AI and education for its own sake, but really as a mechanism for rethinking what we have always viewed as a system built for its time, which did amazing things in its time, but was never built to optimize each student’s learning and chances of really owning their future, particularly in this era. There’s a lot of change right now in general in the external environment. So I think, you know, people have been saying we want more about this, follow your curiosity, and we have a lot of questions as well, right?

Diane Tavenner: We do. You know, we thought, I don’t know what we were thinking in that miniseries, that we would just sort of figure it out and move on last season, but we did not. I think we just opened more and more curiosity. And so here we are again. And, you know, I think that we have a pretty exciting lineup of guests. At least, I’m pretty excited to talk with them this year, and I think they’re going to shed a lot of light on a bunch of the questions we have. But we also just want to keep having input from our listeners. So what is on your mind? Who do you want to hear from? What do you want us to talk about?

Please send those ideas our way. I hope this season, many of you who wrote in last season are going to find that we took some of your questions, suggestions and ideas, and we’re going to try to bring those into the dialogue and conversation. And so we’re super open to that. And we’re thinking about this season a little bit differently. I guess we’re going to start more broadly, Michael. So open the aperture a little bit wider, if you will. We want to, you know, we always focus on K-12 education. I think a couple of things about that this year.

One, we’re going to be really specific when we’re talking about younger people, middle school, high school, because those things seem to be playing out very differently.

Michael Horn: Yeah. And I think some pretty profound differences. Last year we were pretty focused on that middle and high school segment. And yeah, I think that’s just it. Right. Like, the AI that’s in your adaptive module to build additional skills is very different from how you might use it in high school. Right. And so recognizing those differences, but also recognizing that higher ed and workforce have some pretty big implications on the system as well. And so we’re not going to leave those out.

Diane Tavenner: No, not at all. And from my perspective, it’s because I want to take more and more the position of what is the journey of the young person. And the young person’s journey doesn’t sort of have these stark lines between K-12 and higher ed.

Michael Horn: Those are adult systems, those are not kids systems.

Exploring AI’s Broader Landscape 

Diane Tavenner: Exactly. And so we’ll be bringing all of those in when they make sense as we think about the journey of young people. And then, so we’re going to start that, I think, with, I guess, a little bit of a “what’s the landscape out there of AI?” And, you know, I think last season we specifically sought out kind of cheerleaders and skeptics, and we wanted to hear from those various perspectives. I think this year we’re going to at least kick off the season looking at just the big picture in the landscape, like talking to some of the people who are working on the frontier models. We’re going to actually turn outside of education for an episode to look at healthcare, which is this very interesting sort of parallel universe to education, and see what we can learn from that lens and ask some big questions about, like, what is happening across the board. And we think that as a result, you know, a little bit of forewarning here, we might be talking to people who are a little bit more on the optimistic side. But we will, of course, keep, you know, a critical eye on the conversations that we’re having.

Michael Horn: 100%. And on the note of optimism, just so folks can start to envision the arc, we’re not going to tell you every guest right now up front because, you know, there could be some changes. But if we’re starting broad, we’re starting with people who actually work at some of the companies that do the large language models, healthcare, as Diane said, folks that have sort of this 20,000-foot view of where AI is going more broadly. And then we’re going to start to home in on the education use cases, and we’re going to go to entrepreneurs. So they’re going to certainly be optimistic as well. That’s in their nature. They see problems, they want to solve them by building something, which is great, and they will bring their lens. I will say, Diane, when I moved out to Silicon Valley in, what was that, 2008 or ’09 or something, it feels a little bit like it did then, when you were building the new Summit model in 2010, I guess it was.

And it feels a little bit like that. A lot of excitement, energy around edtech startups, potentially, I would say a little bit more skepticism or caution maybe is the right word from the investor class because they feel like they’ve been through this a little bit. But we’re curious to talk to a bunch of entrepreneurs and find out what are they doing with AI, what are they excited about? Is this tinkering toward utopia, as someone might have said in a book, or is this like really reinventing education in the ways that we’ve talked about? And so I think that’ll be pretty interesting as well. I’m excited to talk to all those entrepreneurs.

Setting the Season’s Baseline

Diane Tavenner: I am, too. And so what we wanted to do today, before we hop into those conversations in these next episodes, is just sort of lay down our baseline foundation of where we are right now. You and I are always on a learning journey, and so we always like to reflect back on, like, where did we start these conversations? Where did we end? What’s changed? What’s different? And so we sort of asked ourselves these questions leading into this episode: Based on where we left off three months ago, which is sort of a long time and sort of not a long time at all, what’s kind of stayed the same? What do we feel like, “Oh, we thought that three months ago and we kind of still think that” today? What’s changed in our thinking, if anything? And what’s blowing our minds? Hopefully, you know, maybe something, given what you just said about the moment in time we’re in. So we thought we’d just ask each other those questions, get a level-set baseline of where we’re starting the season, and then we’ll get into it.

Michael Horn: That sounds good. Let’s dive into that first category. I want to hear from you. We’re coming back three months later. We put a lot of our priors out there before. We also talked about our own evolution at the end of the last two episodes of last season, if people want to go back and see how we have remained true to our roots of trying to be malleable and keep learning. I’m curious what stayed the same in your mind, that has not changed from where we left off?

Diane Tavenner: Yeah, it’s funny because we decided on these questions, and then as I started thinking about them, I was like, oh, maybe I want to shift them to what is still 鈥

Michael Horn: All right, go where you want to go.

Diane Tavenner: Still disappointing me a little bit, but it’s stayed the same. So that’s where the disappointment comes from. I think that, I feel like I spend a lot of time really, like, digging and poking to understand what’s underneath all the, in some cases, hyperbole around AI and, like, what’s actually happening and what’s really going on in there. And so, despite all sort of the energy and the talk and the, you know, everything that’s happening, I find when I’m digging that mostly AI still in education, not still, it is currently being used kind of at the individual level, if you will, whether it be the individual teacher or maybe sort of a little bit of an interface with the students. And I would contrast that to it being used, you know, more broadly around the system or how we actually do schools. And I think a lot of that usage is still in the efficiency category. So how can we gain more efficiency in things we’re doing? And there still seems to be a very significant focus on chatbots.

And I have been a skeptic of the ultimate utility of chatbots since the very beginning. As you know, I don’t think that they’re the manifestation of AI that I’m excited about and hopeful for and whatnot. And so I guess that’s a little bit of my, that’s definitely stayed the same. I haven’t changed. My opinion hasn’t changed.

Michael Horn: And just to make sure people understand what you’re saying when you say at the level of the individual, you mean interactions within the existing models to make them a little bit more efficient, but not actually fundamentally change what those interactions or assumptions or baseline processes are in the system? Is that what you’re saying?

Diane Tavenner: Yeah, it’s kind of like we’re using AI to maybe do things we’ve always done, maybe just a little bit faster, a little bit easier, a little bit better, you know, with fewer humans potentially. And we’ll get in and we’ll talk with folks from these places, but I think about, you know, the big announcements over the summer from Gemini and OpenAI about study mode, and the variety of products from Google, DeepMind, Gemini, and they still feel to me like, yeah, that’s kind of the way we’ve always done it in school. It’s just like, are we making it a little bit fancier, personalized, easier, better?

EdTech’s Need for Reinvention

Michael Horn: Yeah, I think I’m in the same place, Diane. And just so the audience knows, we didn’t talk about our answers in advance because we were trying to surprise each other, but we may have failed on the first one anyway. So I wrote a piece over the summer saying, like, AI edtech is going to continue to disappoint as long as we’re layering it over existing models as opposed to reinventing the model itself. It’s frankly the central premise of Disrupting Class that I think, frankly, the majority of ed tech entrepreneurs got wrong in the 2010s. I think because I was in it a little bit, I was trying to learn and be curious at that time. I kind of feel like I’m just going to be a little bit more blunt this time and say if you think you’re serving the existing system and the existing classrooms in the existing schools, you’re not going to reinvent and you’re not going to get us to where we need to go. Full stop. Full stop. And so I think the new models, yeah, I know I’m a little bit more annoyed at this point.

Right. But like the new models, I think the entrepreneurial energy from outside of the system, truly, I think is even more important than it’s been. So I think I’ve stayed the same on that and maybe even gotten a little bit more passionate about it. But let’s shift to where our minds have changed next: what is different in your mind?

Diane Tavenner: Well, it’s interesting. I think I’m going to build on where you’re going because the thing that was coming to me was how I’m feeling is shifting. And, you know, I live in Silicon Valley, so I’m very much shaped by it. It is the dominant conversation here. It’s everywhere you go.

The AI Gold Rush

Diane Tavenner: It’s what everyone’s thinking about. It’s just in the culture and the water. And it feels like over the summer it’s shifting from sort of this amazement and awe and wonder and curiosity of like this new incredible thing to a little bit of like a gold rush. Like people have realized, not that they didn’t before, but really realized there’s so much money to be made in AI. If you’re tracking at all the valuations of companies and the funding, you know, the venture funding going to startups and things like that, not necessarily in education, but in other sectors, there’s so much money. And that just sort of adds an element that is far less about curiosity, wonder, awe, possibility and much more about, I just feel like the sharp elbows start to come out and there’s a level of aggressiveness.

There’s people who hop into it who I don’t think know or care very much about transforming systems for the better, but it’s really about who’s going to dominate, who’s going to be in power, who’s going to make money, who’s going to, you know, and it just feels a little. It’s inevitable and it just doesn’t feel as kind of, I don’t know,

Michael Horn: Noble. Yeah. Is that what it came up?

Diane Tavenner: No, I think that. That’s right. It’s fascinating.

Michael Horn: Well, maybe the caveat I could say is like, there’s nothing wrong with people making money off of it. We just, I think we both believe that the bigger opportunities for good are not where the money is at the moment, at least in education. Right. Like, the dominant spend is still in the existing system, which is why. I get it. It’s why people sell into the existing system. I just think we just really shortchange the longer transformation opportunities, as well.

Diane Tavenner: I think that’s exactly right. It feels like there’s two totally different dialogues going on and, you know, neither is aware of each other a little bit. And so, yeah, it’s just, it’s interesting. It’s an interesting cultural time. There’s a big, you know, sort of gathering next week of the, you know, AI enterprise stack gathering. And so we’re just going to start to see a whole bunch of, yeah, that kind of focus, I think.

Michael Horn: Yeah, that’s interesting. I will say the sums of money, and I’m not talking about education at the moment, I’m talking more generally, that these startups are raising, it feels very dot-com era. Right. It’s been staggering from my perspective to watch it, and I’m very removed from Silicon Valley, you know, I’m a decade or whatever out from having been in those waters. But it feels a lot like those times. It always reminds me, Clay always said, you know, like when you’re truly disrupting, you’re going after nonconsumption. And by.

By definition that means there’s no market at the beginning. So like it’s really hard to chase nothing and pitch that. So I think some of that is also going on. I’ll say for me, I don’t know if letdown is the right word, because I’ll contradict myself, like, I think in our next question, but like, I’ve become less impressed with the power of these models professionally. And I don’t know. And I think a large part of it is like they are prediction machines at the end of the day. Like they are not logical. Right.

Like at the end of the day, they’re not people. They are absorbing a fraction of the senses we use to think about and perceive the world. Right. Largely language and, you know, some image. Right. But it’s basically eyesight and prediction machines. Yes. Like, you know, some of the thinking that you can do by having it recurse on itself, it’s pretty cool.

And I just, it feels like when you’re an expert in the field, the things that it does for you just feel sort of generic to me, Diane, and maybe it’s how I’m using it, but because of that lack of logic, I’ve become a little bit more tempered, I think maybe that’s the right word, about some of these models. So it could be me.

Diane Tavenner: But no, let me double click on that because I feel like I might be having a similar experience. So tell me if this rings true for you. I think I’ve gotten a lot better at prompting, and I get what I want pretty darn quickly. And so I actually really use it as an assistant kind of all day, every day, you know, quite efficiently, much more efficiently than in the beginning, and I think in a much more kind of fluid way. And to your point, it feels very much like an assistant, you know, again, like not a lot of like this is kind of magical anymore. Is that what you’re talking about or.

Michael Horn: I think that is exactly what it is. Right. And sometimes I conclude, just like I would with a research assistant, oh man, the amount of times we’re gonna have to go back and forth on this, it’s not worth it. I’m just gonna do it myself. And so I find myself making that calculation much more.

Personal Learning and Inquiry

Michael Horn: Maybe this will blow your mind, which is, I will say, in my personal life, I actually find the utility to have gone up tremendously because I’m not an expert in a lot of those questions that I bring to bear. And it allows me to ask the naive question when I’m not always great at finding the person I should ask. And the chat mode with GPT-5, like, showing it a video and having conversations about stuff, I find it incredible for personal learning and just sort of general questions there. I find it incredibly valuable at bringing up like a couple hunches, or disproving things that I might be thinking, and so forth. And just like, okay, I’m coming in with a much higher baseline now than I was before. That I found really, really compelling on the personal side.

The professional side may be a little bit less so. What about you?

Diane Tavenner: That’s interesting. Well, that feels very, very true to me. I mean, I’m using it for everything from like, how to care for my plants to how to curate a playlist for our family dinner nights in the summer, and, you know, talk to my kids and they’ll tell you how much of a fail or success that was. But you know, it is like I don’t feel ashamed to ask it dumb questions. And every time we watch a movie, I feel like I’m having, you know, an analysis with it afterward. I chalk that all up to this.

It is significantly better than search, I think. Like it’s this big leap forward better than search. Let’s see. For me, I’m super interested in and curious about these, what feel like, I’m sure they’re not, but feel like sort of overnight upendings of practices in other sectors and fields that I’m aware of. And it does feel like there’s some structural change happening in other fields, which makes me a little bit envious; I’m wanting that in education.

And I’ll just give you a couple of examples. Like one just as simple as online retail. I mean, so many people I know in this space, you know, it is fundamentally changing the space because you don’t need to have, this sounds silly, but you don’t need to have models trying on clothes or modeling your, you know, your wares that you’re selling, because literally you can just do that using AI. So you take pictures, and consumers can do that now: literally, I can go try things on myself online, you know, and see what it will look like on me. Now again, that sounds trite, but it does feel like it’s going to revolutionize this kind of industry in many, many ways. And then kind of on the more serious end, and this is why I’m excited to have a healthcare conversation, there’s just such phenomenal opportunities that I see happening in healthcare that are really profound and I think are going to fundamentally change the system.

I’m, I feel like it’s, well, it’s.

Michael Horn: Going to be the same analogy, I think, though, in education, right? Because I see a ton of AI that is improving, sustaining the existing system and making it better, more efficacious, efficient. And then I see some AI outside the system, right? Like more direct to patient, very different value network. And that stuff, depending on, you know, if we let it,

Diane Tavenner: It’s really interesting, so I think those are the places where I feel like myself saying wow. Like wow. And a little bit of mind blowing.

Michael Horn: Well, I’m excited to learn a lot more in the season. As we said, it’s going to be a really interesting group of guests. And like you said at the beginning, you know, last time we purposefully had optimists and pessimists up there, so we could really put the different arguments against each other and think about this. We’ll take a very different line of inquiry, it’s safe to say, with each guest we have, based on how they’re coming in and what we’re hoping to learn from them. I cannot wait. But before we close out this welcome-back primer, if you will, of an episode, let’s go to our segment that some people keep track of, which is what are you reading, watching, listening, etc.

Outside of work stuff ideally, although sometimes we fail and slip into work. But what is yours?

Diane Tavenner: I might be sort of failing right out of the gate, but. So I read this book recently. I think it’s been around for a bit, so it might not be new for a bunch of people. It’s called How Big Things Get Done, and the tagline’s very long: The Surprising Factors That Determine the Fate of Every Project, from Home Renovations to Space Exploration and Everything in Between. That’s by Bent Flyvbjerg and Dan Gardner. And first of all, I will be honest, it stressed me out to read this book because it feels like every big project is fraught. There’s so much potential. The likelihood you’re going to be successful is really slim.

So it did give me a little bit of stress, especially as I thought about projects that I’m currently working on and was evaluating through their framework. But one of the things I loved about it was how data driven it is. And it really looks at these big data sets to take a critical lens at how we do projects and how we could do them better. And I think this might be a theme that comes up a lot this season is the power and importance of data. I think that that doesn’t get talked about nearly enough and really might be the most important thing that we’re grappling with here. So if you do any sort of projects.

Michael Horn: I should read this, you’re telling me. All right, all right, now you’re scaring me. But I remember when the book came out; I have not read it.

Diane Tavenner: Yeah, they’ll look back on your home renovation and be like.

Michael Horn: I was about to say, is this gonna make me, like, feel really dumb on a bunch of things. Okay, yes. The answer is yes. So, yeah, so. But I’ll learn. All right. Well, mine is gonna be work based as well, by the way. It’s interesting.

Like, we haven’t seen each other in three months, so we’ve watched and read a lot of things. So we’re sort of picking. Right. I had all these TV series that I actually watched. I was gonna be like, hey, look what I did, Diane. But I think I should give a shout-out to my other podcast co-host. Jeff Selingo’s new book, Dream School: Finding the College That’s Right for You, came out Sept. 9.

So the day before we’re recording this, but it’ll be a few weeks out by the time this comes out. And it’s a fun book that tries to get people away from thinking that you just have to go to the selectives and take a wider aperture and give you some criteria to do so as you go through that journey. So we’ll see. We’ll see. But I enjoyed reading it and I’ll put that on my list.

Diane Tavenner: Awesome. Well, I’m excited for that one. I think, you know, with my new project, Futre, we are attempting to do that as well. And when we.

Michael Horn: Yeah, I saw that in the feature set you have, that little part where, depending on your pathway, once you pick, if you go in the four-year college pathway, it starts to suggest some schools that might be better fits based on both outcomes, if I understood it, but also based on the things that you seem to be gravitating towards.

Diane Tavenner: Yeah. And not the usual suspect schools, but schools that, based on the data, are providing better access and better outcomes for young people. So I look forward to it.

Michael Horn: Check out the appendix. He built a cool little list; I think he put it in the appendix because he didn’t want another list out there. But there’s a list. What’s more interesting is the criteria that he chose to come up with schools that you might want to look at. So it’s interesting. That’s the plug. But we’re excited to dive into this season.

I think we’re going to learn a lot. Can’t wait to be on the journey with you. And as Diane said up front, tell us what you want to hear. Tell us what you want to ask people. We will try to start teasing some guests ahead of time perhaps so you can be ready to ask us or tell us what you want us to ask. And we can’t wait to get into it with you all on season seven of Class Disrupted. We’ll see you next time.

Podcast: Key Lessons from New Orleans’ Post-Katrina Education Experiment /article/podcast-20-years-after-katrina-closed-schools-assessing-the-victories-challenges-and-enduring-lessons-of-new-orleans-education-experiment/ Tue, 09 Sep 2025 18:30:00 +0000 /?post_type=article&p=1020496 The 74 is partnering with The Branch in promoting , a limited-run podcast series that revisits the sweeping changes to New Orleans’ public schools after Hurricane Katrina came ashore 20 years ago last month. Listen to the final episode below and .

Two decades after Hurricane Katrina, the legacy of New Orleans’ radical education experiment is still contested. Was it a success? The final episode of Where the Schools Went grapples with this question head-on.




Doug Harris, chair of Tulane University’s Department of Economics and founding director of the Education Research Alliance for New Orleans, has led the team studying the city’s schools for years. Their findings show both real progress and persistent gaps: higher graduation rates, more students going to college, stronger test scores, but uneven results and questions about whether the momentum can last.

We talk with Doug about how to make sense of this data and what lessons other cities might take from it.

But of course, data can only go so far. In the second half of this episode, we return to voices you’ve heard from throughout Where the Schools Went to test those findings.

Chris Stewart reflects on how New Orleans became the center of a national fight over education policy, with critics and champions battling on social media and in statehouses over whether the “system of schools” model would spread.

Former principal and school founder Alexina Medley, who led a school both before and after Katrina, describes her pride in how far the city has come, but also cautions that the impact of COVID means it now faces a new crossroads. 

Dana Peterson, CEO of New Schools for New Orleans, calls accountability the city’s greatest legacy while cautioning that progress should not be mistaken for success.

And John White, the former state superintendent, argues that the deepest lesson is about the importance of coherence and its ability to empower educators, hold them to clear standards, and resource schools fairly.

Finally, I share some of my own reflections. As a veteran of the education wars who left school leadership burned out, I found that reporting for this series helped me to reconnect with the purpose of schools and the people who run them. This story, and the city of New Orleans more broadly, offers a lesson not only in how to build better schools, but also in how to practice a better kind of politics.

Listen to the final episode above. 

Where the Schools Went is a five-part podcast series from The Branch, produced in partnership with 社区黑料 and MeidasTouch. Listen at or .

Artificial Intelligence in Education: Risks, Opportunities and What’s Next /article/artificial-intelligence-in-education-risks-opportunities-and-whats-next/ Wed, 11 Jun 2025 16:30:00 +0000 /?post_type=article&p=1016782 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In the last episode of the season, Michael Horn and Diane Tavenner come together, in person, to reflect on the arc of their artificial intelligence-focused series. They discuss key themes and takeaways, including the enduring importance of foundational knowledge, skepticism around the speed and impact of AI-driven change within traditional schools, and how transformative innovation is more likely to emerge from new educational models. Their conversation explores the challenges and opportunities AI brings 鈥 particularly in developing curiosity as a critical habit for learners 鈥 and revisits how their own perspectives shifted throughout the season. 

Listen to the episode below. A full transcript follows.

Michael Horn: Hey, Diane, it is good to be with you in person.

Diane Tavenner: It’s really good to be in person. It’s a little funny where we are in person, but it’s kind of the perfect setting to end our A.I. you know, miniseries season six. We are at the air show. I think that’s what it’s called, the AI show in San Diego.

Michael Horn: I’m gonna take a selfie, as we say.

Diane Tavenner: We’re gonna send you a picture of this. So we’re. We’re recording here from the floor that is filled with educators and edtech companies and AI. AI. AI!

AI’s Educational Impact Outside Schools

Michael Horn: Because AI is the thing, which is perfect because our season this year has almost exclusively focused on the question of what will the impact of AI be in education? How do we shape that? What do we want it to be? All these questions, frankly, in ways that neither of us had imagined fully. I think when we started this and we did a first sort of rapid reaction.

Diane Tavenner: We did. We were starting with our kind of baseline assessment of what we thought and our knowledge and what we were curious about.

Michael Horn: Yep. And we’ve gone through this journey, and now today, we sort of get to tidy it up with our very sharp, insightful takes. No pressure on us.

Diane Tavenner: No pressure for those key headlines. But, you know, along the way, we interviewed a bunch of really interesting people, some skeptics, some really positive folks. And we benefited a lot from it.

Michael Horn: I learned a ton. My understanding of the space, I don’t know if I conveyed it on our prior episode, but I think it’s a lot deeper than it was when we started.

Diane Tavenner: For me, too. I really appreciate them. And then, you know, in true fashion, we just publicly processed out loud last episode.

Michael Horn: We do.

Diane Tavenner: And now we’re going to try to actually pull it together with some key takeaways. So that’s how we’re going to wrap it today. And so we kind of outlined, you know, three big categories here. And the first one is, I want to ask you what belief was confirmed for you as we made our way through this season?

Michael Horn: Yeah. So people obviously heard where we started, but I will confess, I’ve been struggling. I knew you were going to ask this question, and for days I’ve been wondering, what did it confirm for me? I think I will say two things. If that. And maybe that’s cheating. But it’s our podcast. Right. So, number one, I think it confirmed for me that foundational knowledge will still be important.

Diane Tavenner: Yes.

Michael Horn: And I think developing it into skills will still be important, just as Google did not change that reality, despite what a lot of educators and maybe more schools of education sadly were telling their students that became teachers. I don’t think AI will change that either. We had a long conversation in the last episode around the nature of expertise and who AI is useful for. I think the second thing that maybe hit harder for me, but confirmed something that we talked about in the first episode, was I think the most transformational use cases of AI in education will be in areas outside of the traditional schools, with new models that leverage AI, that wrap around it to do things very differently from business as usual, frankly. Like why you started Summit Public Schools outside of the traditional. Right. I think the other piece of that is I’m somewhat skeptical that venture capital will be the thing that funds a lot of these new models that emerge.

Diane Tavenner: Say more about that. Why?

Michael Horn: Well, I could be very wrong on the latter. We’re at this conference, and I’m just coming from a place where a few people said, no, we are funding these things. So I could be completely wrong. I guess my thoughts are that the time frames for explosive growth for VC are short; five to seven years.

Diane Tavenner: Yeah.

Michael Horn: The micro schools, the new emerging schooling models. I don’t even know if micro schools will be the word we use in five years from now. I’m not convinced those are like zero to a hundred thousand student businesses.

Diane Tavenner: Yeah.

Michael Horn: And so I don’t know, can you make a venture style business out of them? Venture might be funding the AI software that sort of makes those things go round and certainly the infrastructure that we’ve talked about.

Diane Tavenner: Right, right.

Michael Horn: But I, but I guess I think that’s going to be the really interesting hotbed of activity to look at. And we had this dichotomy on the first show, teacher facing versus student facing. I think that’s less present in my mind at the moment. But the student facing stuff I think will be in these new models, not the traditional ones.

Diane Tavenner: Fascinating.

Michael Horn: What about you?

Diane Tavenner: Well, I think that, you know, when.

Michael Horn: Feel free to disagree with me also, I think.

Skeptical Optimism on Change

Diane Tavenner: Well, I think my confirmed belief is sort of a dimension of what you’re talking about, maybe the flip side of what you’re talking about or connected to it and I can’t decide if it’s in conflict with what you’re saying or not. So let me just put it out there and we’ll see. I will say that I think of myself almost as always an optimist, but I am a skeptic in one area and I believed coming into this that we weren’t going to hear that schools were being redesigned or that even had been. And so it sort of confirmed my belief that I don’t know what is going to bring about this kind of change. And so you are saying it’s going to happen outside of the. Yes, because that’s the only place that.

Michael Horn: It’s the only place for transformational use cases.

Diane Tavenner: And it may be yet.

Michael Horn: And it may be yet. And I think the confirmed belief for me at the moment, and it’s great when you’re wrong and you learn something new, I will say, but at the moment, it confirmed my sense that if you look at our field, they tend to be consumed with the hardest, most intractable problems at the center of the field. And this is gonna be at the periphery. It’s not gonna be the bulk of it. So there’s a little bit of a cognitive dissonance if you.

Diane Tavenner: I think you’re right. And it’s so interesting. The story in America is truancy and absenteeism. So the data tells a story along that. But if you’re processing that as the biggest problem, then you’re using AI to create a solution structure.

And what is happening in the school day is the problem. Families are voting with their feet.

Michael Horn: So it’s so interesting you say that. I’m rereading Bob Moesta’s book, Five Skills of Innovators. I almost mailed you a copy over the weekend. They’re solving a problem rather than asking, what is the system supposed to do and how do you tighten the variance around that? And as he says, you can solve the problem, but create five others. Or you say, what is the system supposed to do now? Yeah. And so that’s why I think we got to bust out. So let me ask you the next question. Where did it change your mind or beliefs? Anything that we learned?

Diane Tavenner: Well, I do. I do think it changed my mind. And I’ll point to our episode with John Bailey. That’s how we kicked off this series. And I think I’ve talked to so many people who love that episode, and they’re like, oh, my gosh, I had no idea all the different ways that I could use ChatGPT or Claude or whatever AI I’m using. And it’s true. I mean, John, you know, talked about how we now have an expert in our pocket on every possible topic. And so it really pushed me to think about how I was using it in my life, both in.

In my personal life, in my professional life, and in our product. Now there’s Some challenges with this expert idea that I think came up for both of us.

Michael Horn: Yeah. And maybe that’s where it changed my beliefs. I think I had a sense, and you can read my quotes in newspapers and stuff like that, Ed Week, stuff like that. I think this series really gave me a much deeper set of questions around what kinds of students will actually be able to take advantage of these types of tools. I won’t go into it again; we did it last episode around this novice-expert, unknowing-knowing sort of two-by-two.

Revising Views on AI Strategy

Michael Horn: And so I think that’s like something that I’m really wrestling and revising in my head coming out of this. I think along those lines, it gave me a much deeper concern over a lot of the things that could go wrong if we’re not super intentional and thoughtful about that game. But I think it’s like how we leaned into it. And I, I will say, I don’t know if this is a revision for me. You may tell me I’m leaving my principles behind, but I sort of scoffed a couple years ago when districts would say, we need an AI strategy. And I was like, no, that’s focusing on the inputs, not the outcomes you want. But I think I’ve revised my stance in that I do think that there needs to be more thoughtfulness around what are our beliefs and values and so forth in an era of AI, and what does that mean for what we think about teaching and learning? And maybe that’s your AI strategy.

Diane Tavenner: Well, and this harkens back to the episode with Rebecca Winthrop. Will AI provoke schools to go back and have the real conversations about what is the purpose of education? What are we trying to do? What matters now? How are we using this new, very powerful tool to further our purpose?

Michael Horn: Look, I would hope that they would, but, I mean, I think this is the answer, you know, see number one, where I think it’s more likely that these conversations happen in embryonic education communities than the traditional, despite how broken this could look in five years if we go down this road. But that’s, I left with a lot of concerns.

Diane Tavenner: Yeah. And I'm curious, in my own use of AI, if I'm missing out on or losing anything, because I'm not processing some of my thinking and work in the way that I used to. It's no doubt more efficient, but I used to do certain brain work during that process.

Michael Horn: So is it creating cognitive laziness?

Diane Tavenner: I have no evidence that that’s true. But I do wonder.

Michael Horn: And on my other podcast, Jeff Selingo talked about how one of his daughters asked what you did when you didn't have phones. And her visual image wasn't like, oh, you memorized stuff and had to learn a lot. Her visual image was literally, we have a phone in front of us navigating us, so we must have had a large fold-out map. She couldn't imagine that we would write down the directions and so forth, and occasionally pulled over and had to recalibrate. And so he was like, oh, so this is an example of cognitive laziness. And I was like, I actually think that's an example of freeing up the brain to do other things.

Curiosity’s Impact on Longevity

Diane Tavenner: Well, and in a whole other part of our lives, we both care a lot about longevity and the science and whatnot. And so there's certainly some evidence over there that we are not helping our brains when we're taking all those tasks out of our life. So I want to switch gears and name something else that changed for me, and that's curiosity. I think we both came to this. And for me, here was the big aha: I built the Summit model with the habits of success, and curiosity was one of the parts of that. But curiosity has always gotten sort of shortchanged, if you will, because everyone's like, well, that's great, but how do you teach it and how do you assess it? And it's sort of been sitting up there. And to me, curiosity now comes roaring back in.

It is having its shining moment.

Michael Horn: Like the habit.

Diane Tavenner: Yes.

Michael Horn: That you will need to be a thriving adult in this world. So you don’t take things on face value. So you are inquisitive, so you ask. So you’re always needing to use this, I think, to figure out what is truth, if you will. That’s perhaps a real skill that we will need to be better at developing.

Diane Tavenner: You know, I would probably call it more of a habit, but it is a skill. It's one of those weird ones, because I feel like we're born naturally curious; there's a lot of evidence of that. I sadly believe that our education system actually wrings that curiosity out of us.

Michael Horn: It doesn’t reward it. Right?

Diane Tavenner: It doesn't reward it. And you know what's interesting? In my current work, you ask employers, you know, who would you provide job shadow opportunities for, who would you have as an intern, those sorts of things. And when you talk to them, curiosity rises to the top. What do they want? A young person who comes in curious. It's a signal that you do have a growth mindset and you are interested in growing and you do want to learn. Yeah, it's just such an important quality, I think.

Michael Horn: Yeah, I think that's right. And it connects all these things. My own worry is that if people don't have enough foundational knowledge, they'll actually be far less creative in this world of AI, where they're just doing what is sort of told to them and unable to ask big questions, let alone learn how to ask really big questions that break out of status quo systems and things of that nature.

Diane Tavenner: Exactly to that point. I think the other thing that I’ve been thinking differently about is throughout this series, as you know, my biological son is a history guy.

Michael Horn: Someone after my heart, I know.

Diane Tavenner: And the other one is obsessed with AI, so it's an interesting combo.

Michael Horn: But yeah, the other one I have no chance of understanding.

Human Element in Innovation

Diane Tavenner: But yes. He said to me, you know, Mom, we're talking about the speed of the development of the innovation, but the human part is still really real. And so one of the things he said to me is, do you know how long it took for America to fully adopt electricity after it was invented?

Michael Horn: It was like the rebuilding of models around it, models that are native to it, with it at the center.

Diane Tavenner: Yes. And I just think it's so interesting. Like, I had a conversation with ChatGPT about why it took so long. And some of the things I learned, and my kiddo named too: there's infrastructure. In the case of electricity, there was a cost. I would argue there are hidden costs to it.

Michael Horn: I think there’s huge costs. This is not the zero marginal cost world anymore of Silicon Valley.

Diane Tavenner: Right, right.

Michael Horn: It’s different.

Diane Tavenner: Right. There was a lack of immediate need or use. Why are you getting on AI? Like, even the two of us saying, you know, we now almost never search Google anymore because we've transformed our behavior over. But it took a minute even for us to figure that out and change our behavior.

Michael Horn: Interesting. So this guy Horace Dediu (I was not going to go here until you just brought this up) runs the Asymco community and podcast and speaks a lot about Apple. He was with the Christensen Institute for a hot minute.

Diane Tavenner: OK.

Michael Horn: And he was doing his research around the adoption of refrigerators and dryers. Adoption of refrigerators was relatively fast, but adoption of dryers was really, really slow, because you had to change the component into which it fit in the house. Right.

Diane Tavenner: And so it requires a different plug.

Michael Horn: Infrastructure tells you how fast it will go.

Diane Tavenner: Yeah.

Michael Horn: And we don't ever have that conversation, right, around thinking about, you know, how much do you have to redesign huge parts of the system to make it really useful.

Diane Tavenner: And I would assume that was the case with dryers in households across the country. And I think that when people look back on this moment in history, they'll probably blur the time period it takes. But we're going to live through, I think, a much longer time period.

Michael Horn: It's interesting. A lot of my early funders at the Christensen Institute, people like Gisèle Huff, who I adore, would get annoyed with me when I said patience is going to be required because we have an installed base, we have a system.

Diane Tavenner: Right.

Michael Horn: I, on the last one, expressed my belief that some of these dynamics could change around disruptive innovation actually now being welcomed for the first time.

Diane Tavenner: So I’m laughing at us a little.

Michael Horn: Bit because of our naivete.

Diane Tavenner: Well, back in 2020 we started this, and then we thought we were going to do a little AI miniseries and then we'd figure it all out. But I think that as we wrap this season, season six, we actually have even more questions and curiosity ourselves.

Michael Horn: Well, and we’d love to hear from folks who are tuning in. This is a welcome invitation to just pester us less with your pitches and more with, like, what are you curious about?

Diane Tavenner: Yes.

Michael Horn: Who would you like to hear from? Not in your orbit, but, you know, people that would further both your understanding and ours.

Diane Tavenner: Yes. And what are you doing and what are you seeing and how can we sort of come along on this journey together?

Michael Horn: So let me end with this one question. Will AI have an impact on young people? If so, when and how?

Diane Tavenner: Yes.

Michael Horn: My answer to that question, despite what at least one of our guests said, is that I can't imagine it will not have a big impact on individuals. I think AI is going to be much more pervasive, in fact. And look, I'm not one of those people that says just because it's in the working world, they need to use it now because we're preparing them for that world.

Diane Tavenner: It's already impacting them. It is having an influence on the work that's available to them, the way employers think about work, the what and the where. It's going to have an impact on them.

High School: Experiential Learning Shift

Michael Horn: Particularly in high school, I think the old world of, here's the curriculum, go learn it, is massively thrown out the window. Right. Maybe K through 8 is a little bit more constant because it is foundational; I don't think it should change as much. But high school, I think, is different. It already should be much more experiential and exploratory in my view, but I think it should be extremely so now.

All right, let's wrap. What are you reading, watching, listening to that I should be clued into?

Diane Tavenner: Well, I'm still on all of the ancient Greek fun. But I have gotten a lot of very polarized reactions to this, so hear me out: Gavin Newsom has a new podcast.

Michael Horn: He does.

Diane Tavenner: I’ve been reading about it and lots of people have been reading about it. I live in California, as you know.

Michael Horn: So he’s your Governor.

Diane Tavenner: He is my governor. You have to listen to the first episode, where he interviewed Charlie Kirk. And for those who don't know, the premise is he's talking to people who he really disagrees with. Here's why I'm going to promote it: I love it. They're getting into the nuance of policy and how things work. And I am learning a lot, and I want to be able to make my own decisions.

Diane Tavenner: So I want to hear the full scope of things, and I feel like I don't. So this is the kind of conversation I want to exist out there.

Michael Horn: Well, so you're learning from that, and I'm learning from you. I'm not just reading nonfiction; I've also been embracing some fiction books. I'll name one, which is Paradise. And I'm gonna mess up the author's name.

Michael Horn: I'm gonna apologize, but it's Abdulrazak Gurnah. And I'm reading this book, Paradise, because I'm learning from you that it's nice to read fiction from the country where you're about to travel. And as you know, I'm headed to Tanzania with Imagine Worldwide; I'm on the board there.

Diane Tavenner: Are you enjoying it?

Michael Horn: I'm still trying to make sense of it.

Diane Tavenner: Yeah.

Michael Horn: It's less that. The fiction that I read around Sierra Leone in particular was very much of the civil war moment, and I could really figure out where that was. But in Paradise, there are a lot of currents going on in this book that I'm trying to sense-make. And it's really interesting.

Diane Tavenner: How beautiful.

Michael Horn: And thank you to all of our listeners once again. And thank you, of course, to the 74 for distributing this. And it’s how so many of our listeners connect with us. And so to all of you, we will see you next season on Class Disrupted.

Podcast: Processing AI in Education Out Loud /article/podcast-processing-ai-in-education-out-loud/ Fri, 16 May 2025 16:30:00 +0000

In this episode of their miniseries on artificial intelligence in education, Diane Tavenner and Michael Horn reflect on what they’ve learned. They discuss how AI offers unprecedented access to expertise, but also highlight concerns about its effectiveness for young learners. Throughout, Diane and Michael grapple with skepticism, optimism and the practical challenges of embedding AI in educational systems, while looking ahead to what meaningful, student-centered innovation could look like.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good to see you.

Diane Tavenner: It's good to see you, too. I'm excited to see you in person in the coming week. But maybe we'll just jump right in, because I think people know we've been doing this miniseries about AI, and I'm very excited for this conversation today. We have been talking to all these folks in this little miniseries, and you've been doing a better job than I have of just listening to them and letting them talk and not, you know, interjecting your opinions and feelings. That's a little bit hard for me. But today is our day where we get to do that. And so for our listeners, sometimes you write to us or you tell us that you like kind of overhearing us talk to each other. And so this is that episode.

We have not talked to each other about all that we learned and discovered from these conversations. And so we're calling this kind of our out-loud processing episode, where we're going to go through and just process. What did we hear? What did people say? What were we thinking? What are our takeaways? And then we'll come back one more time and organize that into a final episode for the season and the miniseries, where we'll pull out the big headlines and the takeaways. But today it's going to be a pretty raw conversation.

Michael Horn: Today is going to be raw, up close and personal, all of the different demons in our heads. And if we miss something, send us a line and tell us what we missed, because we have only looked back on some of these conversations. I suspect, Diane, it's going to be more like, wow, this one thing has been really burning with me and I had to address it, but I sort of forgot some of the other points. So don't be shy about pointing that out to us. We really have enjoyed, I think it's fair to say, these conversations, because they opened us up to a lot of different perspectives. And I think there have been elements of truth or insight in every single conversation, even where we disagreed on certain elements or whatever else. To me, it's just sort of like revealing the whole elephant, if you will.

Diane Tavenner: I couldn't agree more. That's probably a really important place to start: with some gratitude. Thank you so much to the people who came to talk with us and share with us. It's been incredibly valuable for us, and what we seem to be hearing is that it's valuable for other folks too. And so let's dive in. I'm going to start with something that I know for a fact has stuck with both of us.

It was from our first conversation with John Bailey, and for me it was such an eye-opening conversation. The idea that John is so clear that what AI provides is an expert on every subject in every person's pocket, and this idea that we now have this expertise literally at our fingertips. And not only did he say that at a high level, but I think what was amazing about what John did was he literally gave us concrete examples of how he's using AI as an expert in his life. And there were so many different examples. And then I of course went on to look at more of them and read more of them. And you know, he's probably the most creative person I've talked to about how he's really using AI as an expert. And so that one's just sitting with me.

And I've heard that from other people too. How did that one strike you, Michael?

Michael Horn: Yeah, so similar thing, Diane, which is, I think the takeaway is: Google gave us access to the world's knowledge; this gives us access to expertise. I think that's a really interesting distinction. It lands for me. Frankly, one of the things I took away from the episode we had with Siya from OpenAI was less the views on education and more how she actually uses the tool herself as this personal assistant to guide her learning agenda, to help her figure out what to learn, and on and on. It made me feel somewhat inadequate as a human in terms of all the things that I could be doing with this. I think I've forced myself to increase my usage in certain ways since then.

Diane Tavenner: Have you watched yourself changing at all because of these conversations?

Michael Horn: Yes, yes. So I will say, and I'm curious if you've had the same thing, but number one, I search on Google a lot less than I used to.

Diane Tavenner: Almost never for me,

Michael Horn: So the only reason I still do is because I have access to Gemini Advanced, I can't remember what they call it, but the advanced AI search feature, which has a lot of the same qualities as ChatGPT. I guess a lot of folks use Perplexity for search because of the way it answers your queries. But yeah, in general, I am not using semantic search really at all. I'm almost exclusively using AI to try to understand certain things. I will tell you, I was trying to get a much deeper understanding of the nursing and healthcare shortages across the country recently. I had Google Gemini, and I'm blanking on the exact product, but it's their research product.

Create, like, a five-to-seven-page briefing for me on it. Super interesting how it did it, and it resolved one of the challenges I have, which is when you just do a raw search, you get, oh, by 2036, this is the projection. And I'm like, no, I want to know now. I want to know by specialty and where. Right. And you can get that now. And it's really interesting. Diane, what about you?

Diane Tavenner: So similarly, the only time I find myself going to a traditional search is out of habit. And then I get there and I’m like, wait, why are you doing this? You’re going to get much better information. I’m using the paid version of chat right now.

Michael Horn: Yeah, that’s what I largely use. I should.

Exploring ChatGPT Usage Trends

Diane Tavenner: Yeah. Well, it's interesting. One tip I've gotten from, you know, a sort of insider is to actually cycle through them and use the different ones from time to time and see what you think. So I'm going to try to push myself to do that and not just fall into a single habit. Although we're not alone: in the last couple of days the numbers have come out about the number of people across the planet using ChatGPT, and it's extraordinary, like, unprecedented. We might get to that in a little bit. But yeah, I find myself pushed by John in a lot of ways to push my thinking on: wait, do you really need to be doing this? I keep asking myself that all day now. Wait, do you really need to do that? Can an expert do that for you? Or, in some cases, things that I thought I couldn't know,

I'm now saying, wait, I think that might be possible. Like, how could we get to that? And so I feel like these conversations have really pushed my behavior to change, and with positive results.

Michael Horn: I mean, yeah, it's super interesting. I'm curious, Diane, if you've had this question come up, which is: so we're learning how to use it, and I feel like I'm still very much learning how to use it in this way that increases productivity, efficiency, and the realm of what's possible for us, or me, to accomplish. You as well, it sounds like. I guess I'm curious, as you think about that in the educational context.

Diane Tavenner: Right.

Michael Horn: If you’ve had reflections about, OK, so what does this mean at different levels of education? Where have you gone with that?

Diane Tavenner: Well, I think that's where it starts to move out of my own personal excitement and curiosity, given where I am in my life and whatnot, into the reality check on K-12 education. Because very few young people who are in high school or middle school are experts at anything; you're just not an expert yet. And when I was listening to John talk and I was listening to Siya talk, I was like, but you guys are experts. So you have this set of skills and knowledge that enables you to use this tool as an expert for you. But novices oftentimes don't know how to take advantage of expertise, so it's not accessible to them or open to them. And what young people are doing in their lives and their learning is fundamentally different from what we are doing in our lives and our learning.

And so one of my big questions coming out of those conversations was: OK, great, but what happens to the folks that I'm really focused on, the young people in adolescence and early adulthood? Given what they're doing on a daily basis and thinking about and trying to do, how does this concept of an expert work for them? And I would argue it doesn't, in the sort of raw form that we're accessing it.

Michael Horn: Yeah. How do we always end up in the same place? OK, so there's this notion, right, in learning sciences of novice versus expert learner by domain. And then you can create the two-by-two, right? So you have novice and expert on one dimension, and on the other you have unknowing versus knowing. So, right, you're an unknowing novice: you literally do not know what you do not know, and you have very, very little expertise in this.

Diane Tavenner: Yeah.

Michael Horn: Then you become a knowing novice, meaning you actually start to understand the realm of things people in the field and domain do and all the things you don’t know.

Diane Tavenner: Right.

Michael Horn: And then you become a knowing expert, right? That's sort of the continuum. And you still have a sense of, I know how I learned to be an expert and I know the sets of things that I did, and you tend to be a really good teacher when you're a knowing expert. And then you become an unknowing expert: you start to automate 75%, I think Bror Saxberg tells us, of the things that you do on a daily basis, the underlying skills and things of that nature, and you just sort of forget about it. Right.

It just fades into the background. It's automated. Right, right. And so what's interesting, I think, as I reflected on this, is that from my perspective, and I like the way you just said it, the raw form of these tools is probably most useful in the knowing circle of that. So I'm a knowing novice, but I at least know the questions to ask. I have a certain set of foundational skills and knowledge in the domain that allows me to use the AI as that personal assistant to help guide my learning and, you know, be curious to interrogate an answer it gives, et cetera, et cetera. I think it's also true that probably when you're a knowing expert, it's really useful for boosting your job performance. And my hypothesis, Diane, is that this is why we've seen so many studies come out that suggest AI is most helpful to the lowest performers in the world of work and least helpful for the biggest experts.

Right. And those, I think are your unknowing experts, is sort of my guess. And again, and then on the unknowing novice side, I think it’s probably not super useful either, or frankly, sometimes maybe even misleading. Right. In certain cases. And so I think you need really highly curated learning tools. Right. If you’re going to be using it for individuals like that.

Now this gets. Maybe I should pause there for a moment because we could talk about equals. Yeah, yeah.

Diane Tavenner: But I love it. I love thinking about it using that framework. And, you know, I am very concerned about the unknowing novices, because by definition, that's who we're getting and serving. I mean, that's a natural state for them to be in, in their life and their developmental journey. And, you know, I am not a fan of chatbots. From the very beginning, when people were getting so excited, their first kind of conceptualization of how this could be used was: we'll just basically take the little box and we'll put that little box all over the place, everywhere. And then people will just come and they'll just ask the box questions, and then it's solved. Everyone's just going to learn. And in my mind, that's a chatbot, and that is not going to work for the unknowing novice. They don't even know what to say or what to ask. And this is proving to be true. I'm in a lot of conversations, looking at a lot of data where people have essentially chatbot data for young people, and you will not be surprised to learn that they write weird things.

Improving AI for Learning

Diane Tavenner: They write, you know, short, incomprehensible things. They're not asking questions. They get frustrated. They're yelling at it sometimes, because they feel like it's supposed to help them, but it's not helping them. And so what I have a little hope about is that we're a little bit further along now, and people, I think, are starting to be able to imagine beyond a chatbot. So how do we actually, and I think this is where you're going, how do we actually use AI in products for younger people, unknowing novices, and even the emerging folks, to help structure their learning and help to teach them, not just put this open box there that they have no access into? And so there's a little bit of promise on the horizon as we get a little bit further into it and people start to process and think about how it can be used. But to me, that is one of the big risks, and I think one of the reasons that you see the folks who are very skeptical about it. And we talked with a number of them,

Michael Horn: A lot of skeptics on our show. Yep.

Diane Tavenner: Yeah. And so. Yeah, yeah.

Michael Horn: No, I think that all lands for me, Diane, where you're going. And I guess from my perspective, it does point to something which I think was true in the era of Google as well, which is: it's not the case that we don't need to learn knowledge, or at least what I would call foundational knowledge. And I thought Rebecca Winthrop was really good on that concern.

She sort of said, I've been the skills person, and now I'm worried we're going to forget about the knowledge. And it goes to something we talk about all the time, which is that we have to get away from the tyranny of the Or in this education world. This has to be an And conversation. And I think foundational knowledge is really critical, right, to being able to use these tools. People are really interested in creativity right now; it turns out that to be creative, you actually need to know something and then be able to break the rules, right? And interdisciplinary work is really important then. But you do need to have some foundational knowledge.

Diane Tavenner: I wanted to go here next, the direction you're leading us, because I think both Rebecca and Jane surfaced a really important conversation about skills and knowledge that you're bringing up. And, you know, folks who've listened for a long time know that I've always organized around skills, knowledge, and then habits of success, and in the habits realm is curiosity. And so I'll talk a little bit more about that. But one of the things that I notice often happens in these conversations around learning is that skills and knowledge don't get distinguished from each other; they are put into the same category or bucket. I think it might be worth unpacking a little the difference between skills, knowledge, and habits for a conversation about education.

Because like you just said, Michael, knowledge is, let's say for the purposes of our dialogue, the stuff: the names and the definitions and the dates and the theories, those sorts of things. And then concepts are a little bit bigger ideas of knowledge. Skills are the things that you literally practice and can improve upon, and that are more universal and stretch across and use the knowledge, if you will. And just to be very concrete about that, a high-level skill is, for example, to effectively communicate or to analyze or to solve a problem. People's favorite skill to talk about is critical thinking. Critical thinking actually has a whole bunch of skills.

Michael Horn: Many, many skills. Yeah, yeah, right.

Skills, Knowledge, Habits: Learning Framework

Diane Tavenner: And many of those that I just named you, those are the big, high-level domains, and they have multiple dimensions. But think about things that you can actually practice and improve. And so if we call back to Jane's conversation in the writing center and her as a teacher of writing, I mean, skills, skills, skills. So much of what she was talking about was skill development, right? Knowledge, I mean, people have been worried about knowledge forever, because, you know, can you just look up a fact or a date or something like that? And then I think the third category that I like to distinguish, and I'm curious about your thoughts on this, is this idea of what we would call habits of success. And this is sort of a big catch-all for everything from, like, how do I emotionally regulate myself? I'm calling back to the good work of the building blocks framework, which identifies what I call habits that are related to school success and learning success. So everything from, can I emotionally regulate myself? Can I be in relationship with others? And then all the way at the top of those building blocks have always been civic identity and self-direction, which, you know, has been a huge center point of how I think we need to structure learning.

And curiosity. And curiosity has always been fascinating, because it's super hard to measure and no one really knows how to teach it or if you can teach it. But what I think is happening right now is illuminating the critical importance of curiosity, and how our system of learning and education has sort of wrung curiosity out of young people. And it might be the most valuable habit in this.

Michael Horn: You've anticipated me again. One of the students at Harvard asked me recently what I thought was the most enduring skill, though habit is how you and I have generally classified it, in a world of AI. And curiosity was the answer that I had, for a couple of reasons. One, I think when you are getting answers or interacting with whatever the form factor is, being able to interrogate it and knowing how to ask and not settling is going to just be, like, baseline importance. Right. And then two, I think in a world in which the rate of change is accelerating in terms of the world of work, this curiosity as a gateway into learning and upskilling, et cetera, becomes really, really important. So on multiple levels, I think curiosity is critical. The other habit I'll name, Diane, from the building blocks goes down to, I want to say it's not the bottom layer, but I think it's the second layer, you're going to correct me, which is self-awareness, I think, or self-advocacy or something like that that you all have, and you can redefine it for me if I mess it up.

But I think this is really knowing yourself and the strengths that you bring, frankly, not just your strengths, but also what you suck at, the things you don’t want to do. And I can’t be stronger about that: schools today do a very poor job of helping individuals learn around their self awareness. Like, what superpowers do I bring? What are my weaknesses? Where should I walk away from things? I get why that happens. We don’t want to give up on an individual too early from developing something that could be a strength.

Diane Tavenner: Yeah.

Michael Horn: And I think as you get out in the real world, you realize that life is lived with your competitive advantage and the things that make you unique and, and not trying to remediate your weaknesses constantly. And so I think in an era of AI where, look, AI is going to be the new expectation in the workforce. Right. Like you, you don’t use it. What? Is going to be sort of the question. But you can use it to really effectively craft your career in a way that you couldn’t before, because now you can let it do the stuff you don’t want to do. Lean into the place where you can add unique value. Well, that requires self awareness.

So those are the two habits, Diane, that I think are very, I mean, I think obviously all the habits have enduring value, but the curiosity and self awareness, I think are really important.

Diane Tavenner: I totally agree with you, Michael. And I think there’s a couple of other things to, like, illuminate here around why I think we don’t do a good job of sort of nurturing young people into being, you know, really aware of themselves. Well, I, I just don’t even think we try to do that.

Michael Horn: Yeah, I don’t think that’s been a goal. Right. Of the schools.

Diane Tavenner: Right.

Michael Horn: And just so people do not misunderstand us, like the report card where you got a C in social studies, that’s not what we’re talking about. Right.

Diane Tavenner: So, no. And there’s a couple of things going on there. One, for all the stuff we’ve talked about over the years on this podcast, we actually don’t give young people and their families very honest information. And by honest, I mean information that they can truly understand and interpret that tells them and gives them feedback about where their strengths are, where their weaknesses are. The grading system that we have is woefully inadequate in terms of giving actual feedback. And our testing system is, quite frankly, as well. You know, when I get a report of my child’s state testing and I have a hard time reading it and understanding what it says, you know that this is not working for families. We’re not telling them what their young people are good at or not good at. And to be fair, one of the challenges with that is there’s a base level of skill and knowledge that I think all young people need, where it doesn’t really matter if you’re good at reading or not.

We need to get you to be good at reading. Like, you need to be able to read. And so there’s not the kind of picking and choosing there as there will be later. But I want to jump in on this idea, because this is a lot of the work that we’re doing right now, about, like, knowing yourself. And I think the approach that we’re taking, just from working with David Yeager and other learning scientists, is about what we will be good at: our work, our career, our vocation. It’s pretty simple.

Passion Fuels Career Success

Diane Tavenner: If you like it, you’re probably going to be good at it. And the reason is because if you like it, you are more curious, you are more willing, you are more interested, you want to do it more, you practice it more, you get better at it. It’s a self-fulfilling prophecy. And so one of the activities that we ask young people to do is to really look at the things you will be doing in a job or a career every day. What are the top 10 tasks that you’re going to do day in and day out that are a big part of it? And then very honestly self-assess: do I like doing those things or not? And it’s really shocking how hard it is to figure out what people do in a job every single day. It doesn’t really come through in job descriptions or in most of the tools that young people are given to think about careers and jobs. And it’s actually a thing they really wonder about, which is why they want to talk to people who are in the job to ask them what it’s really like.

So they have an intuition around this. But that assessment, that realism about, do I like doing this? Because if I don’t, I’m not going to be very good at it. And so I should pursue the things I like doing. And I think that gets translated into people saying follow your passions, which is a wholly unuseful thing to say.

Michael Horn: Unuseful. Yeah, completely. Yeah.

Diane Tavenner: So, let’s make it more concrete for them. And so, I’m with you on that. I would say the skill that goes into building that is reflection. And we don’t, as a general rule, spend nearly enough time teaching young people how to truly reflect and then use that reflection to propel them forward.

Michael Horn: Yeah, it makes a lot of sense. I want to stay with this, just because the point that Jane made specifically right around this was that the process is what’s important in writing. It’s not the product. For those of us in New England who had Bill Belichick as coach here for however many years, it’s the process, not the result, right? That became a big mantra here. I think that’s probably true in the Bay Area with the Golden State Warriors, too. But, like, focusing on the process as the learning. I think this is interesting also because reflection is built into that.

And I want to try this out on you. I think part of Jane’s answer was, like, I still need you to do the writing. And some people I’ve heard say in class that AI shouldn’t be there, and they want to see the process. Others I’ve heard say, like, do the writing. I believe you’re going to be using AI to do it, but I want to see the questions and prompts and things you’re asking it, as a reflection on the process and, like, how it changed, you know, how it changed the final product, if you will.

I’m, like, curious as a, you know, someone who taught writing, like, what you think of that as a mechanism and does that make sense to you?

Diane Tavenner: Yeah, it does. And you’ve taken me down another path I want to ask you about, because there’s all these legislatures, state bodies now that are trying to pass AI legislation, and it’s this full range. So I’m curious to go there with this. Texas is top of mind for me right now. So as a former writing teacher, and, you know, I am who I am, so you’re not going to change that.

Like, I think it would be silly to try to banish AI from the writing process. And let’s be clear, that’s what some people want. So there’s a.

Michael Horn: Let’s be. Let’s be clear. I teach at Harvard, where there’s a policy that unless your instructor explicitly says you can use AI, the default is no. I think that’s insane because these people are going to go into the world with an expectation to do it, so we might as well make it intentional. So I’m on record there.

Diane Tavenner: That’s further education taking one more step away from the actual real world and the world of work and saying, you know what? We’re not going to prepare you for that. We’re not.

Michael Horn: Yep.

Diane Tavenner: And so. But we’re aligned there. So how would I, as a writing teacher, think about it? Well, I mean, in my experience right now, I’ve watched young people who are not skilled writers try to use AI to write something for them. And first of all, you can tell immediately, number one, that they didn’t write it. And two, it’s not good. It’s not very good. And so I think that’s where I would start: just being really open and honest, talking about feedback, and, like, let’s actually dissect what happens when you just try to put in a basic, simple prompt and get something out. Quite frankly, that’s also what school does: just put something on paper and turn it in versus actually building a skill.

AI Bans

Diane Tavenner: And so I think there’s a big opportunity now for great teachers, great instructors. And I actually think we heard Jane talk about some of her strategies here to help young people understand a tool that is now available to them and will be in our world, and how they can use it to not only build their skill, but improve their products and their outcomes. But that is going to require a whole new set of skills from them, and muscles that they are not using and flexing in school right now, because they’re incentivized not to do those things there. And so I think it’s very exciting and hopeful and optimistic, and that’s why I get very disturbed when, I mean, there’s literally a bill in Texas right now that could very well pass that is going to say something to the effect of, you know, teachers in the state are forbidden from using AI in any teaching and learning.

Michael Horn: Yeah. I mean, I’ll be. Yeah.

Diane Tavenner: What. What is that?

Michael Horn: Right, Right. Every product now has AI, so. Yeah.

Diane Tavenner: Yeah. Yeah. Well, and then, I forget who it was, but some university was saying, yeah, they’re literally going back to blue books.

Michael Horn: Yeah.

Diane Tavenner: And exams. I’m like, really?

Michael Horn: Yeah. I mean, I hear the same thing. I didn’t know about the Texas bill. I will be very consistent on this one, which is, I do not think making policy at the level of inputs ever makes sense. And I feel that way. We had a whole set of shows about the science of reading, and we were super clear about, you know, the importance of actually following the research on this. Right.

And so forth. And I don’t believe in policy at, you know, banning certain curricular materials because I think it stifles innovation when you see leaps forward. And look, if we want to pass, you know, measures to create professional development so that the people coming out of schools of education actually know how to use these tools, teach science, reading, use AI, whatever, I can have that conversation. And I just, I think it’s a blunt axe. Even when I’m in favor of the spirit behind it, shall we say here, I’m not in favor of the spirit behind it. I think it’s a blunt axe the wrong way. It’s the same reason I, you know, feel that way around mobile phones as well.

I want schools to have the ability to take them away and not have them when it does not suit them right on the ground. I don’t want a policy criminalizing the teacher that found a good use for it and one person in the school disagreed and then all of a sudden it’s, it’s a thing. I, I just think that’s misguided.

Diane Tavenner: Well, that’s another bill in Texas too, so we’ll see what happens. I want to stay a little bit on this thread, but I want to go to something that, you know, you and I are both, I mean, our work has been steeped in personalized learning. And so you know, Ben Daley or Ben Riley.

Michael Horn: Yep.

Diane Tavenner: Ben Riley joined us and another one.

Michael Horn: Another of our friends, Ben Daley, we’d say.

Diane Tavenner: But Ben Riley joined us and, you know, pushed pretty hard. He believes that the promise of personalized learning is sort of overdone, that it’s been adjudicated, it’s failed. And he believes that, you know, the hype about, well, I shouldn’t put words in his mouth. Everyone got to hear him. Let’s say he’s a skeptic. He’s a self-described skeptic. And he did bring up this idea of personalized learning. It also came up, I think, in our conversation with Julia.

Michael Horn: She was more optimistic about it, but yes. Yeah.

Diane Tavenner: So a number of folks talked about the idea of personalized learning, and it seemed to me that there were kind of these two different takes. Either, like, see, personalized learning, AI is just going to go the way of personalized learning. It doesn’t really work. It doesn’t really personalize. Or, oh, we’re still on the journey towards the vision of personalized learning, and AI actually helps accelerate us in that direction and improves the possibilities. I’m kind of staking out, you know.

Michael Horn: Sure. The extremes, yeah. But, well, no, I think there’s something to the way you did it, though. Right? Because what I hear a lot of advocates saying is, like, well, now we finally have the technology to do all the things we had imagined 10 years earlier. As though the technology is going to sort of automatically understand, you know, what you’ve mastered and your working memory capacity that day based on what you ate and so forth, and somehow deliver the perfect lesson at the perfect time. Which I think is essentially that sort of techno-driven vision.

Rethinking Personalized Learning Paths

Diane Tavenner: Of personalized learning, which was never sort of my vision and what we do, but there is that version of it. And so it just got me thinking, like, oh, OK, where are we with personalized learning? And what do I think about that? Are we on the same journey or pathway, or have we hit a sort of fork in the road? And does this change my perspective? I think maybe if you are one of those techno-vision people, it probably does. It feels like a huge accelerant. For me, I think it’s a powerful tool to continue down the path of realizing the vision that I have for young people. People have always confused it as being, like, an individual kid on a computer, and that’s never what it was. It’s much more, how do we use technology as a tool to prepare young people for the real world, for real life, for real skills? And it’s a very powerful tool, if used well, to do that. And then also, what I get excited about now is how it can actually structure our system of education and create efficiencies and opportunities that I think have never before been possible. So I’m very optimistic about what it can do, probably more on that latter part than on the first part.

Michael Horn: Say one more beat. Like when you say in terms of what it can do on the system part, what does that look like in your mind? Or you know, sort of simple sketch? What does that look like?

Diane Tavenner: Well, like I’ll give you an example, you know, that I’ve been pushing myself to try to. OK, if I could design a school from scratch right now, what would it do? And that’s because I’m a nerd. That’s fun for me. So that’s like a pastime. And one of the things I imagine, let’s just talk about how a family might engage with school. So, I’m going to give you this utopian vision. But like, what if, you know, periodically you sat down with, with Your family and your girls. And you were able to say, you know what, over the next couple of years.

And you did this, like, with technological prompts: over the next couple of years, what’s most important to my family about what my girls learn? And they’re different from each other. So I suspect you would have different things, where you and Tracy would be like, well, these are my top goals over here, and these are my top goals over here. And of course we would scaffold that for you, and we would give you a menu to choose from, or a list, or perhaps some, you know, but we would ask you as a family, like, what’s really important to you? OK, check all the ones that you care about. Check the ones you don’t care about. And then this is, like, my analogy of how school is like ordering a sandwich, you know? And then we would go through a series of prompts to be like, OK, well, let’s get into your family. What does your schedule look like? Do you want a day a week with your girls at home with you and your family, and they go to the building four days a week? Do you want to come at, like, 10 because of the way your family schedule is and go later? And I can imagine people starting to have a heart attack right now as I’m talking, like, oh my gosh. But I think if we really, truly went and could ask and understand the circumstances of every family, literally, AI can do what humans can’t do, which is it can go and crunch all of that, and you can ask it to help you design what would be possible within the parameters of what the school can actually offer.

And not every family has to be on. Everyone arrives at 8:30 in the morning and everyone leaves at 3:00 in the afternoon. And one day a week we leave at 1:00. So, you know, like, we don’t have to do that anymore. We have technological capabilities that could actually bring a whole community together and meet their needs in a personalized way.

Michael Horn: I think that’s really interesting. So many thoughts going through my head as you say this. One, I think the importance of context of the individual. Two, look, not everyone will get every, like, we might be out of romaine lettuce that day and there’s trade offs, right? But the point is, and this is what’s always driven me nuts about the world of personalized learning is the word personalized learning as a noun, and implying that like there’s one way to like, oh, I’m personalized and you’re not.

Diane Tavenner: Right.

Michael Horn: Whereas instead, seeing it as, like, a verb. Beth Rabbitt, I thought, did a good job in the chapter she wrote for us in this new edited volume, School Rethink 2.0: it’s a series of strategies you can do to better meet learners with what they need next in their learning journey. And at that level, you know, Ben Riley learned a lot from Dan Willingham, the great cognitive scientist. Willingham talks a lot about, right, like, if you put something in front of someone that is way outside of their, you know, zone of proximal development, if you want to go there, they will get frustrated; or they tune out if it’s too easy. And I see this, like, I see technology tools right now.

I will not name companies, but they’ve sort of bought into the, oh, it should be all whole class. And I see that, like, yes, they’re following the learning sciences, say, around reading and the importance of knowledge to build understanding, to do the skills, et cetera. But because every kid is, like, reading the exact same book from a teacher who’s following a script, right? Like my cousin, excuse me, my kid’s cousin, she’s like, I read this three years ago. This is the most boring thing. Like, I literally want to jump out of the window. And she disengages, right? And I suspect the truth is on the other side, too. That, to me, is insane. And so it’s less, like, magical, technocratic, personalized learning and more, hey, this is a strategy with a set of tools.

We have to come closer to meeting different family needs. When I hear the structural one you just laid out, my mind goes to, you know, the world of education choice, right, where we’re starting to see that with education savings accounts, where these are the experiences that families are constructing. I think what’s difficult right now is, like, we know how hard it is to arrange summer camp as a parent. We did a whole episode on that. We’re kind of asking parents to now do that the entire year. Yeah. So to your point, how does AI, maybe services, maybe different kinds of bundles come in, right? Like, you walk into Subway, and we’ll go with your analogy, right? And, like, they kind of tell you, hey, here are the 10 best combinations of the stuff. But, like, if you want to custom build it, you can.

Yes. I kind of think that’s, like, we have a rebundling along these different, like, the most common, if you will, sets of customizations or personalizations.

Diane Tavenner: I just want to pull a couple of those threads and be pretty explicit about them and why I think this is important and addresses some of the big challenges we’re seeing right now. So one, I think so many of the battles we see across the country right now, and I’m talking among parents, and, you know, we’ve talked about school boards and all those things, are about people who want a certain thing for their child. And because the school only does one thing for everyone, if the school’s doing the thing that they don’t want for their child, they then try to change it for all kids.

Michael Horn: Yeah.

Diane Tavenner: And this is causing massive, you know, fights and battles. It’s very cultural. I’m going to keep picking on Texas today because I’ve just spent a bunch of time digging in on them. I mean, they are taking back control over the curriculum so that at the state level they can really control, and this is very much cultural, like, what young people learn or don’t learn, in response, I think, to a lot of this. And so to the extent that we could personalize at least parts of education, I think it tones down some of this. What is true for my child doesn’t have to be true for your child.

And they can both get what they need without compromising the other child. So there’s a benefit there. And then I would say, I think you’re absolutely right. There’s a ton of people who are really worried about and against ESAs and vouchers and things like that, because they feel like it’s the unraveling of our civic society and we won’t have people together, you know, building society together; we’ll be further in our camps or our bubbles and whatnot. And I think that, you know, in the vision I just painted for you of how folks might get into school, I agree with you, there would be trade-offs. Just because you marked it on your sandwich sheet, you know, that day we happen to be out of pickles.

Like, it’s just not going to work. There’s no pickles. You don’t get those. Sorry. You know, but I think people could handle that and accept that more for the good of the community and the group if they felt like they had some control. And I think the problem with our choice system right now in America is it’s so blunt. It’s like, you can pick a school. That’s it.

And that’s so massive. We need a scalpel, not a big blunt instrument, you know?

Michael Horn: Yeah, no, I agree. So I think models like this are emerging, right? Like Alpha School. It’s a private school that originated in Texas, so this is a bright spot. And they have the two hour learning model, which is essentially, as I see it, Diane, like what homeschoolers have done for years, which is, like, we learn the academic, you know, basically content and some of the skills, right, in two hours. And then we get to go out in the world and do real world immersive experiences. They just are using the AI in a very, I think, developed way.

Diane Tavenner: Right.

Engaging AI-Powered Learning Tools

Michael Horn: To offer that two hour learning sequence. And then, frankly, this is the other piece of it that I think is going to be important: we’re going to need to think about motivation a lot more. So if we build these curated AI tools that can work with the unknowing novices, we’re going to have to connect it in ways that get them engaged into actually wanting to learn this foundational knowledge and these skills, which we should be doing anyway. Right? And we’re not; that’s the evidence of the chronic absenteeism, disengagement, et cetera, et cetera. But I guess I think we really need to think about how to create meaningful engagement. And I think this notion of, hey, you can learn sort of your nuts and bolts, your foundational stuff that’s critical, much more efficiently, and then get to do much more engaging work, because there’s a connection between them.

Diane Tavenner: Yeah, yeah.

Michael Horn: Should be, should be part of that answer.

Diane Tavenner: 100%, and I think purpose. And so that’s why you go back to personalizing people’s purpose. Like, why are you here? I mean, it’s to your work, Michael. Like, what are you hiring school to do for your family? Right, yeah.

Michael Horn: By the way, that is the best question. Yeah, sorry. When people ask me what they should do about their kid’s school, Tracy tends to jump into the conversation because she says he’s going to get too deep in the weeds. Let me just tell you: what is the thing that your family can’t or isn’t able to do that school can do for you? Right.

Diane Tavenner: Like what are you, what job are you hiring it to do for you? And, and it will be a different answer for different families. So I want to keep us going.

Michael Horn: Sorry, we’ve deviated perhaps.

Diane Tavenner: I do want to acknowledge that I’m thinking about this infrastructure benefit, and this is what Julia was trying to get to, I think, in her points. And this is a vision that she sees. And so it’s interesting to go back and think about some of the comments that she made about it. Michael, one of the things that surprised me, honestly, was that basically everyone we talked to was like, this AI isn’t for kids who are under 18 right now.

Michael Horn: Oh yeah, that was fascinating. Were you surprised? Were you surprised by that?

Diane Tavenner: I was surprised by it. And so, you know, at least now the adults who are thinking about this, working on this, were very much focused on the adults that are teaching or doing things for young people, but not kind of a direct use for young people. When we pushed them, they did talk about, you know, how it could be embedded in products, or maybe, maybe not.

Michael Horn: Well, I mean, it is, let’s be honest, right, everyone.

Diane Tavenner: Yeah, but that was shocking to me and I don’t know why, why did that shock me?

Michael Horn: I was super shocked as well. I mean, I think obviously, right, privacy and some of the really detrimental impacts of social media and these consumer companies are clearly part of what’s going on here. I think that caution is good. Despite what I just said about not believing in policy-level bans on mobile phones in school, I do believe a lot of the Jonathan Haidt research. I find it compelling that social media, specifically on smartphones, has led to a bunch of antisocial and problematic mental health outcomes and disengagement. So I think that’s a lot of what’s going on here, Diane, is sort of my guess. And I think we also need to be honest that kids are using these tools.

Like, we are not a huge screen time household, as you know. And my kids have certainly had experience with ChatGPT. They have certainly used it for many things. That is certainly how they search at this point when they want to prove a point to me about something.

Diane Tavenner: Well, and you know, my, my kiddos are, you know, a decade older than yours and they’re early in their career and it’s, it’s. Well, one of them, it’s what he does all day, every day for his career. But the other one is literally working around the clock to make sure that he is becoming expert at using it as an early career professional because he feels like if he doesn’t, he’s going to be, you know, pushed out of the job.

Michael Horn: It echoes, you know, what Matt Sigelman from the Burning Glass Institute has found, which is that AI is actually used more in sort of marketing, communications, professions like that, than in even the sort of coding-heavy parts of the workplace. Which is interesting. It’s not what I would have expected.

Diane Tavenner: Yeah. Yeah. That is fascinating. You brought up two things that I’d love to touch on. So. And we can decide where to go. First one is this idea of like AI being embedded in products. And I actually think it’s worth us sort of surfacing.

What does that even mean? And what does that look like beyond a chat bot, if you will? What are we seeing? You know, it still feels like it’s early, but things are moving so fast that it’s not early. So anyway, that one. And then the second one is this idea of, you know, Julia brought a very real fear about the potential loss of social connection. And so I want to come back to both of those. Where do you want to go first?

Michael Horn: Oh, we can do, we can do embedded products first. Embedded product for 200, Diane. So, yeah, what do you, what do you, what are you seeing?

Diane Tavenner: What are you embedded?

Michael Horn: Yes. We have not yet been replaced by AI doing our voices. But what are you seeing out there in the market? As, frankly, someone who’s building, and I think using AI yourself in the product, but not leading with that.

Diane Tavenner: No. And so maybe that’s the good place to start. I see a couple of different categories. So one is there’s folks who literally jumped out of the gate immediately and labeled their company, you know, AI. AI is in the name of the company somewhere. They are AI forward, they are AI first. They are like and what I find with those is many of them weren’t even sure what product they were building, but they knew they wanted to build an AI product.

So it’s sort of like an AI-in-search-of-a-product kind of origin. And yeah, I think what I see over there is, like, people who kind of started with a chat bot in some sort of realm, and then they’re maybe evolving it over time, because I think they’re probably getting feedback that, great, a chat bot in a specific area is not that super helpful. But let’s name some things like that. There’s, like, companies that are, like, we’re going to provide, you know, AI-driven mental health supports. So we’re going to train a model to essentially be a counselor, if you will, that, you know, can engage with and interface with young people. There’s AI tutors, obviously, in reading and math, you know, all across the board.

So, I see that as one category. I think the second category is, I hope it’s a category, I think it’s where I sit, which is having a very clear vision of what we want to do and why we want to do it with our product. And then we, on a sort of case-by-case decision grid, decide if AI can be useful or helpful for this particular part of that, and if so, how, and whether the trade-offs are worth it. And then decide where we’re going to strategically use it in the product itself and then also in our work. And I would say that the in-our-work part is much easier and kind of a no-brainer, because there it’s an efficiency tool and things like that. So that’s, I do think there’s a category of that. And then I think there’s a lot of people who are existing products and existing companies, you know, this is the majority, they’re not startups, and they’re having to figure out how they get an AI strategy with the products that they have built that didn’t necessarily have any element of that.

So I don’t know, do those.

Michael Horn: That feels like a pretty good way to categorize the market to me as well. It’s interesting; in our opening episode we had this dichotomy of student facing versus teacher facing. And as I hear your reflections on that, like, that sort of cuts across those categories in interesting ways. I think both are interesting ways to view the market at the moment for different reasons. And the way you just categorized it, I think, is largely what I’m seeing. I would say the market in terms of funding startups is moving away from the first category being the thing. You know, there are a couple of home runs in that space, right? MagicSchool, that is used by millions and millions of teachers, right, to lesson plan and dramatically make their lives more efficient, and, by the way, for them to personalize for kids that maybe they were struggling to reach. So, you know, really cool, boomed out of the box.

I think you’re right. The majority, I think, are now increasingly sitting where you are, which is: how is AI an enabler of something that we’re trying to effectuate here, right? And then I think what you see is that, yeah, the large incumbents, if you will, are using AI in different parts of the product stack to enable different things in different ways, in line with the way that they currently come to market or operate. I don’t think that they’ve used it to overthrow what they’ve done. It’s more an amplifier of what they’re doing.

Diane Tavenner: Yeah, so I lied. Let’s not go to the social connection yet. Let’s stick with that right there for a moment, because one of the big things I keep wanting to ask you about as we’re having these conversations is, OK, step back to your work around disruptive innovation. We’ve had these conversations before about where an innovation sits. Like, walk me through where you place AI.

Michael Horn: Yeah, that’s great. OK, so I think I’ve said this before on the podcast, but fundamentally, AI is a technology enabler that can be used to sustain, which is what we just outlined the existing companies have been doing, or to disrupt, by fundamentally creating something that is dramatically lower cost, more accessible, and serves people who don’t have access, which is what you’re trying to build, right, in terms of this guidance and understanding-who-you-are-and-charting-your-future system or tool.

And so that. So again, it’s sort of. Yes.

Diane Tavenner: So AI, as the big category, can be either, right?

Disruptive Educational Innovation Emerging

Michael Horn: Can be both, right. And so, but here’s an interesting thing in that, which is back to the conversation we had earlier about the education savings accounts world, and not just school choice but education choice, in, you know, 63 different flavors of ice cream or whatever it is. If that is growing share (I don’t know how big it is), that’s going to be a very different distribution channel into the market, with the AI potentially helping you, right, figure out, like, customize for you. And the existing companies: those aren’t their customers today. I guess, Diane, where I’m starting to think is, if we truly move into that world, I as a family can stay in the district school, but I might then be losing out on anywhere from $7,000 to $16,000 in an education savings account. And now all of a sudden it has a cost to me to maybe take this. And so now we can actually move into a world where there’s actual disruptive innovation of schooling, not just disrupting class.

Michael Horn: Right. For the first time in our country’s history, since 1930 or ’40 or something like that. And then that opens up all sorts of disruption opportunities into the market more broadly. Right.

Diane Tavenner: What I hadn’t thought about is this idea: you think families don’t put a price tag on, like, a public education? They do think about it. And so now they’re staring at, well, I get nothing over here, if you will, because it’s not quantified in a dollar figure, but over here I get to spend some amount of money. That I had not thought about.

Michael Horn: I don’t know; I’m super curious is what I will say, Diane. But if you stop holding public schools harmless, as most of the ESAs, maybe all the ESAs, still do, at some point that’s not going to continue, right? At some point you’re going to have to do what they did in charters and take money. At that point, families are going to have real trade-offs that they’re wrestling with, I think, in making choices for their kids. And if there’s a series of services or products or things like that.

Right. That, like, dramatically help you get what you need for your kiddo in the context of your family environment, that opens up a mind-boggling number of possible disruptions in the market, I guess, is sort of the bigger point. And AI, look, it is not zero marginal cost, like sort of how we thought of the Internet before (which itself wasn’t, because of distribution). But you are able to build stuff with dramatically fewer resources than you were. And so if you’re starting from that point and you’re not contending with an incumbent that has a huge advantage in terms of distribution in this world, what does that open up? I think it could open up a lot of things, for incumbents too, both districts and, like, the large curriculum players. Right. So, yeah.

Diane Tavenner: Right. What’s coming to my mind right now is how we started this podcast, as people have heard us say a thousand times, at the beginning of the pandemic, because you and I thought that it.

Michael Horn: Could be this opening yeah.

Diane Tavenner: Could be finally the thing that really broke it open and disrupted education as we know it. We both admit we were wrong about that. So here we are, season six, still hoping, but now talking to you about this and this is why I wanted to ask you that question is AI, I mean you seem to be making a case that it could.

Michael Horn: Well, I think it’s part of the narrative, right. And it’s like, I actually think, in an interesting way, the pandemic will be part of the narrative too, because it dramatically increased the number of families considering these options. And I think it led to, yes, ESAs, etc. were bubbling, but it dramatically increased the openness, right, or the desire of families for that adoption. And so I think all these things come together, and I’m not ready to make a prediction, but I think it opens us up to something that could be very different.

Um, yeah, like a very different moment. Put it, put it that way.

Diane Tavenner: I think what’s interesting about that, when I think about the scope of history: you know, my kid is a big history buff, and he always says that what gets lost when people look back at history is that they think something happened really fast. But if you really look at the history, it happened over 60, 70 years. And those were kind of painful years for the people who were living through them. Right. There’s a lot of, like, churning and disruption and whatnot. But then we look back and we’re like, oh, that happened in, like, a minute. You know, and so I feel like living through, you know.

Social Connectivity and Dislocation

Michael Horn: So the dislocation is part of it. It’s uncomfortable. Maybe that’s the gateway into the Julia question of, like, how will it impact social connectivity? I’ll just jump in with my thoughts on that, for what it’s worth, Diane, which is: I believe her fear is real. I’ve seen some people say, like, “really?” in response to the episode. I’m actually not concerned about it emerging, though, in an education use case. As in, I believe the reason the individualized, personalized learning version of the world didn’t come to pass, and would never come to pass, is that people like being with other people, and sort of that experience is really important. And a tool, for example, that is giving you career guidance to stay in your lane is going to be really useless if it doesn’t connect you to real individuals at some point in the journey. And the reason for that is the way we get jobs is through our network.

Diane Tavenner: Right.

Michael Horn: By conservative estimates, over 50% of jobs are through your network. As high as 85%. Right. No one really knows, but it’s somewhere in that range. So a tool that does not at some point push you out into the real world and connect you to real people in my mind, is not going to work. And so I, I hear Julia’s fear of, like, well, we may have the wrong metrics and policy around these things. Yeah. But at some point like people are going to be like, this thing is useless, it is not connecting me to real people.

And so I’m less worried in the education use case. But I think she’s right: in the commercial use case, these companion bots, in effect (anthropomorphic identities of AI, as she says), are a real concern. And so I think she’s right to worry about it. It’s the part of the social media narrative carrying into this one that I think we should be worried about. I don’t know where it goes. I will say I’m not against those, you know, real-world simulations and things of that nature as part of the learning ecosystem.

I do think it does ultimately need to connect into the real world of real people as part of that continuum. Right. And so AI, I think, can be a really useful tool for creating the individual simulation where you learn to work something out in the privacy of your own home. And yes, you are less afraid to ask a question because of the social stakes (in my case, like, what does an I-banker do, when I was a junior in college, right? I would have used that). And at some point then it has to connect you into the real world in a real-world experience. So I’m less worried about her concern in the educational context, but in the world of loneliness and social media and AI filling that void, I think that is a very serious concern, and it will ripple into our world of education and impact our schools.

Diane Tavenner: Yeah, that all resonates with me, and where I go with it (because, you know, I can’t help it as the practitioner) is: well, what does that mean for our work? For me it reinforces what I think the promise of personalized learning is, which is that we actually give more time. In a well-designed, elegant personalized learning experience, there is more quality time for people, adults and young people, young people and young people, to be engaged in meaningful, authentic work. You know, what I’m going to call know-myself work. Like, there’s nothing more important than knowing who you are, building a healthy identity, developing a healthy self. And this is what we could be doing in education. Go back to what David Yeager talks about: what do young people care about? They care about status and respect, and there are very precise definitions around that, in their community and in their peer group, and it comes through earned respect. Like, I do something that contributes to this group, I make a meaningful, you know, contribution that’s respected by others, and therefore I’m given sort of status in the group.

And that all happens when you’re doing project-based learning, real-world learning, you know, coaching, reflection, self-development. That’s the stuff we should be doing together in person, and then personalizing the knowledge acquisition and some of the skill development so that I can come and access that and be a part of that group, I think, in an elegant personalized learning model. And to me that is prophylactic against the fears of what would happen in the commercial world. And quite frankly, the fears that exist right now around social media and the damage it’s done: if young people were building healthier identities outside of that world, that’s how they can resist, you know, the perils of social media.

Michael Horn: It’s well said. I think nothing is inevitable in this part of the landscape. And this is why I think it is so important that the educators and education entrepreneurs, in the world of ESAs that I just sketched out, are super intentional about creating those opportunities. Those opportunities could be in the school communities where kids are coming together. It could be in connection with the community organizations around you. And, you know, there’s this big debate going on of, like, hey, we need more career and technical education schools. They’re really expensive to build. And then someone says, oh, but they’re cheaper than sending someone to college that’s a misfit for them.

And you’re like, actually, there’s a kind of interesting middle ground of leveraging all the infrastructure around us, of employers and companies and community organizations, et cetera, where people can actually plug in. And you’re right, that foundational work maybe will be a little more solitary, around foundational knowledge and skills, so that you can actually come in there, you know, being able to contribute in some way. But those are all connected, and I think we have to be super, super intentional about it to ward off the dark side of that story.

Diane Tavenner: Yeah, we scheduled a long time for this because we knew we were gonna go long.

Michael Horn: Can I make one more point, one quick other point on this? Yeah. It’s one of the things that Ben Riley hit over and over again: that AI does not in fact think like humans and therefore will be less useful than we think it is, because it does not think like us. To me, that’s a bit of, there’s a word for it that’s not coming to me; truism is not the right word. But it’s sort of like, yes, it does not think like us.

That doesn’t mean it cannot be useful to us. Right. And so that’s the parsing I would love to pull apart, which is: I think it actually can be very useful as long as we understand the intentionality behind it and we’re clear around that. Not in a pie-in-the-sky way, and not in sort of a technocratic, oh, we just mix in technology with existing systems and models and poof, it magically works. I don’t think that will happen. Right. I do think we have to have intentionality with what we’re doing, with the outcome we want from it.

Does it map onto learning sciences? Does it map onto how we build creativity and curiosity, or at least not stamp curiosity out, in sort of the schooling forms, if you will, that exist in the future? So that’s just one other thing that I thought was worth reflecting on.

Diane Tavenner: It is worth it. And I might just say, and hold me to it, this will be the last thing I say: you reminded me that one of the things that struck me from these conversations, and I think it’s because we’re still really early, is that everyone is looking at AI through their particular expert lens, and we didn’t get a lot of broad conversation outside of people’s expert lenses. My hypothesis is that it’s because it’s still really early and people are just trying to make sense of it. And of course, you first make sense of it through how you see the world and what your work is. Certainly that’s what we saw with Ben, you know, and his kind of views, which felt pretty narrow, actually. But then through all of our guests, I think we just saw how it is relevant specific to them. It’s made me try to push myself and think, oh, am I being really narrow? How can I think more broadly, and be on the lookout for people who are thinking about it outside of their own specific domain?

But maybe this is where we need to sit for a while.

Michael Horn: I think, to your point, there’s so much moving every single day. You know, there was that study out of Harvard on the physics class, right? They had done the flipped physics class however many years ago; it produced better learning and continued to do so. As I understand it, they used a tutor for active learning, and it produced better results. Then people said, well, it could be the Hawthorne effect, right? It could be. It’s narrow foundational knowledge; does it really do this? I don’t know. It’s promising, and we have a data point on it, and it was a real RCT.

Let’s watch. Right. Does it solve engagement? No. Does it solve all these other questions?

Diane Tavenner: No.

Michael Horn: OK, so let’s just say what it does, and let’s keep thinking about it. No silver bullets. And it made me so appreciative of the series we’ve done here because I didn’t know what we would learn from our guests. I feel like I took something away from every single one of them that altered how I think about the landscape here in meaningful ways.

Diane Tavenner: I completely agree. And as we bring our processing session to a close, I will say I’m very grateful; it stoked my curiosity. And, you know, curiosity had been sort of sitting there at the top of the building blocks, and I’m like, curiosity is back, and this is exciting. And so who knows where we’re gonna go with this? The only thing we know is we’re gonna go for one more show. It’ll be our season closer this year, where we’re gonna take all the stuff we’ve just processed and see if we can distill it into some, you know, big headlines, big takeaways, you know, and.

Michael Horn: Wish us luck.

Diane Tavenner: Yeah, exactly. Exactly. Before we wrap, what have you been reading? Listening to, watching.

AI Amplifying Essential Skills

Michael Horn: Oh, can I do reading? I polished off Stephen Kosslyn’s “Learning to Flourish in the Age of AI.” So it’s relevant. It talks about how AI can, in effect, be a cognitive amplifier loop, as he calls it, for the skills that are still important, at a headline level: critical thinking, communication, emotional intelligence, he puts in there. And then Angela Jackson’s “The Win-Win Workplace.” So those are my two that I have finished.

Diane Tavenner: So we’re sort of falling into our oldest patterns, where you’re reading really smart and intelligent books and I’m blowing through Madeline Miller: I read “Circe,” and now I’m doing “The Song of Achilles” in our run-up to Greece. And here’s what I will say. Here’s the connection of “Circe.”

I mean, I just thought it was such an interesting, beautiful book about a female coming into herself and her identity, and identity development as a young woman and then a mother. And it’s just fun and fast, and I enjoyed it.

Michael Horn: That is awesome. Love ending it there. And, you know, look, if AI is really efficient, we’ll have more time to do the reading around humanity that we should be doing all along. So let’s leave it there. Can’t wait to be in person with you for our final episode of the season. And we missed a bunch. We know it. Send us all your hate mail so we can get smarter.

We appreciate you all, and we’ll see you next time on Class Disrupted.

Podcast: The Premortem on AI in Education /article/podcast-the-premortem-on-ai-in-education/ Wed, 30 Apr 2025 16:30:00 +0000

In this episode of Class Disrupted, hosts Michael Horn and Diane Tavenner chat with Rebecca Winthrop, a senior fellow and director at the Brookings Institution, about the impact of AI on education. The conversation kicks off by highlighting Rebecca’s idea of a premortem approach, which involves anticipating the negative impacts of AI before they occur and strategizing how to mitigate these risks. They identify key concerns such as offloading critical thinking, manipulation, and the effects on socialization, and consider how this technology might catalyze a rethinking of the purpose of education.

Listen to the episode below. A full transcript follows.

Michael Horn: Hi everyone, this is Michael Horn. And what you’re about to listen to on Class Disrupted is the conversation Diane and I had with Rebecca Winthrop. Rebecca is the coauthor of a terrific new book, The Disengaged Teen. She is the head of the Center for Universal Education at the Brookings Institution, and she has helped stand up a global task force there on AI and education, which forms the basis for our conversation today. Rebecca brings forward a couple of interesting perspectives that I want to highlight here. Number one, the importance of doing a premortem on the impact of AI in education. As she said, a premortem doesn’t focus on the optimistic case for AI; it fast-forwards the story to say: knowing what we know now, let’s get ahead of this, imagine the negative impacts from AI, and then guard against them.

Second, in her mind, the big premortem risks to worry about are three things. Number one, we can offload cognitive tasks to AI, but as she said, the child development people don’t know what kids have to do on their own and what actually can be offloaded to AI without harmful consequences. Second, she worries about manipulation. And third, she worries about the impact of AI on socialization. One thing I’m leaving this conversation with is Rebecca’s hope, I guess I would say, that AI can be this thing that spurs us to have a national dialogue around the purpose of education so that we can really rethink what schooling looks like. Is that the way that this happens? Is it such a big shock that we’ll all come together and have these conversations? Or is it more likely that the real action around system reinvention or system transformation will occur from the grassroots? That is, in individual communities, education entrepreneurs create new forms or systems of schooling that gain traction over time as more and more people migrate to them, and we are left with a series of different systems that have a series of different purposes. That’s the question that I’ll leave thinking more about from this episode that you’re about to hear. I hope you enjoy.

Michael Horn: Hey Diane, it is good to see you, in a school as well; that is probably pretty energizing. And I will say, on this show, the hits keep on rolling. I’m loving all that our guests, who have such different vantage points on the questions around AI and education, are bringing, and I am very certain today will be no different.

Diane Tavenner: I couldn’t agree more, Michael. And as those interviews start to become public, we are now hearing from our listeners, which we love. Honestly, it’s one of the best parts of doing this podcast, besides getting to have really fun conversations with you and geeking out.

Michael Horn: I’m okay taking a backseat to the listeners.

Diane Tavenner: But I hope we keep hearing more questions and suggestions, especially at this time in the season when we start to think about what’s next. But before I get too far ahead of myself, we have a real treat here today. I think we do.

Michael Horn: Indeed. We have my friend Rebecca Winthrop on the show, and Rebecca is a senior fellow and director of the center for Universal Education at the Brookings Institution. Her research focuses on education globally. That’s how I got to know her most deeply. She pays a lot of attention to the skills that young people need to thrive in work, life and as constructive citizens. So really big, weighty questions. She’s also the co-author with Jenny Anderson, of a very highly acclaimed new book, the Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better. Definitely check it out.

AI鈥檚 Impact on Education

Michael Horn: It’s obviously sort of the zeitgeist at this moment, sadly. And the book does a great job, I think, tackling it, helping people put it in perspective and think about: where do I want my kid on these different journeys as they’re learning? And it’s not necessarily what you think the answer might be, for those listening. So definitely check it out. For our purposes in this conversation, I will say not only does the book talk a lot about the themes that we talk a lot about on this podcast, but Rebecca is also spearheading the Brookings Global Task Force on AI and Education, and we will link to that and the book in the show notes. Suffice to say, she’s been thinking a lot about the questions we’re most interested in, Diane. And I feel lucky we get to record with her, because Rebecca has been getting to hang out with people like Drew Barrymore. And I think Hoda was at one of your book events, Rebecca, so you are rolling. The book has definitely hit a nerve.

Thank you so much for joining us. It’s great to see you.

Rebecca Winthrop: Oh, it’s a total pleasure to be here. It’s a treat for me, too.

Michael Horn: You can lie if you say that, given all the folks you’re getting to hang out with. But before we get into the approach of your thinking around AI and education and some of the questions that you’re asking, I would love to hear how and why you got interested in this topic in the first place and how you’ve gone about learning about, you know, AI in general and AI in education specifically.

Rebecca Winthrop: Maybe in reverse order: how I’ve gone about learning about it. I mean, I think all of us (I would assume all of us, though maybe I shouldn’t make this assumption) are out trying stuff in our own lives. So that’s how I’ve gone about it. You know, when something new hits, I just want to check it out. So I’m now a steady user of GPT-4, paying my little, you know, subscription. And it is so much better.

And I’ve tried, you know, the DALL-Es and the this and the that. Like, PowerPoints: make an illustration, do this. What can it do? Just because it’s experiential learning, right? You get a little bit more of a sense of its power and its limitations. Maybe that’s just how I learn, rather than just reading the text. So in terms of going about learning about it, the first thing I’ve done is just been playing around with it. And I’m no expert by any means, but it certainly has helped me wrap my head around the massive seismic shift that generative AI is. I think that’s the thing that most struck me.

And this gets to the first part of your question. What I was most, you know, almost emotionally struck by was how crazy it is to be able to interact with a machine in my own words. Before, we had to learn a different language; we had to learn code to interact with machines and make them do things. And now it’s in our own language. And that right there, to me, is a huge fundamental shift that we need to take incredibly seriously. And so then from there I started getting really interested in it, because who cannot be interested, if you’re in education and everyone’s talking about it? But also I started being really worried.

I was initially very worried about it because I had just come out of all this book research Jenny and I had been doing for The Disengaged Teen. And the big highlight message there is that kids are so deeply disengaged in school. And Diane, this has been your life’s work, to find a new way of doing school where they’re not disengaged, so this is not new. And Michael, you have been on the forefront of how to use tech well for a long, long time; I’ve been learning from you for years. So it’s not news to either of you. But this book is a sort of broad-audience book.

And we found there are four modes of engagement that kids show up in. They show up in passenger mode: most kids. We partnered with Transcend, and for 50% of kids, that’s kind of their experience in middle school and high school. Achiever mode: they’re, like, trying to be perfect at everything that’s put in front of them and end up actually being very fragile learners. Resistor mode: these are the quote-unquote, you know, problem kids. That’s who we think is disengaged.

We, broadly, as a society. And they’re avoiding and disrupting, but they have a lot of agency, a lot of gumption. And if you can switch their context, they can get into explorer mode. And the thing that I thought about: GPT-3 launched sort of right as we were towards the end of writing the book, and I was so worried that it would massively scale how many kids were in passenger mode if we didn’t do it right, if we didn’t figure it out. And so, you know, lots and lots of people are doing incredibly good work in different pockets around the globe, and that’s why we launched our Brookings Global Task Force on AI, to try to bring those questions together and bring a slightly different methodology.

The Premortem Approach

Diane Tavenner: Rebecca, that sort of leads into the first place I’d love for us to go, which is, you know, one of the ways that you approach this work is through premortems. And for people who don’t know what a premortem is: oftentimes we do postmortems after something, to dissect what went wrong and what went right and whatnot. The premortem is when you try to think about that before you’re even in it, to really visualize and imagine the potential negative impacts that could materialize, so we can do something about it before we get there. It’s conceptually a more empowering way of thinking about things. And so I’d love to unpack your premortem thinking about this. And we’re going to start with the positive. So talk us through, if you will, the positive case for AI in education, as you’ve done this sort of premortem forward thinking.

What are you excited about? What’s the possibility? Right.

Rebecca Winthrop: Yeah, well, Diane, I’ll get there on the positives, but I want to talk a little bit about the premortem piece, because what you just did is exactly what everyone in education has done when we started this premortem exercise. In a premortem, you do not start with the positive, which actually has been a problem. The people in education, our people, all of us in our community, are sunny optimists. We believe in the potential of human development. And every time we did the proper premortem, we had to switch it up. There’s a whole science behind premortem thinking and starting with the risks. And people, like, rebelled.

They didn’t like it; they felt uncomfortable. So anyway, that’s an interesting observation. But the idea of the premortem came out of discussions we’d been having internally, going back almost a year, to last February. We had a great meeting with our leadership council (we have a leadership council at our center), and HP hosted us; we were in the Hewlett garage, and it was amazing. And then we did a broader conference, and we were just around the table trying to figure out how to wrap our hands around how different gen AI is and what it means for education, knowing that there are incredible conversations happening in a range of other pockets. And one of the things that I believe strongly in is that we should always look broadly across sectors; a solution set can come from anywhere.

And so we looked even outside of our sector: to the health sector, and in this case to cybersecurity. This is a typical thing done in other sectors, cybersecurity being one. And we can’t (your listeners might know of one, but we can’t) find a single instance where it’s been done in education. And I actually think we should do it for every tech product before we roll it out. It basically is: let’s figure out how it could all go wrong.

And then put that all on paper, and then figure out how to mitigate those risks so it doesn’t all go wrong. We should have done this with social media 10 years ago. If we’d had child development folks, educators, teachers, therapists, and counselors sitting around the table designing social media with developers, I am sure we could have avoided at least 70% of the harms. Now, would companies have gone along with it? That’s a different question; let’s parenthesize that. These are things that you can do if you go through a very systematic thought process, and we have an incredible colleague, Mary Burns, working with us and leading this, where you literally, you know, follow a very systematic process to think about the risks. Yeah, you want to speed up and go straight to the benefits.

Diane Tavenner: Flip it. We don’t have to follow that. Like, let’s flip it. And so let’s start with that. Like, I mean the worst case scenario of a premortem is the patient dies.

Rebecca Winthrop: Right.

Diane Tavenner: And so what’s the kind of patient-dying scenario of AI in education? Make that case for us, and yeah, let’s do it in that order.

Rebecca Winthrop: Yeah, the premortem is like moving the autopsy forward, right? How could they die? So I want to caveat this, and you guys have thought about this deeply, so please chime in with your own versions: we are in the midst of the premortem research on the risks side, which includes lots of focus groups with educators, you know, with kids, with ed leaders, our steering group members, etc. So this is going to be the Rebecca version; this is not the entire task force. A few of the things on the risks that give me pause come from talking to, and we have a number of colleagues on our team who are learning scientists, neuroscientists, and then talking to other colleagues outside of Brookings who know child development, know brain science, know brain development.

And as far as I can tell, we do not know. We, the royal we, the people in child development, do not know: what are the things that kids have to do on their own to develop critical thinking, you know, agency, key skills, and what could you offload to AI? And to me, I actually am quite nervous. Just saying that, I’m like, oh my God, I’m so nervous. I’m nervous for my kids, I’m nervous for the students of the world, because, you know, obviously Gen AI can do so much for us. So pretend that one of the main ways kids develop critical thinking through education at the moment is learning to write an essay with a thesis statement, picking evidence that supports their argument, putting it in logical order. And let’s be honest, what seventh graders produce as essays is not a great contribution to humanity. It’s not about the product of the essay.

Critical Thinking in the Age of AI 

Rebecca Winthrop: It’s the process that they have to go through, that logical thinking process, understanding how you parse truth from fiction. It’s as basic as that. Like, where is data? What is evidence? How do you analyze it for arguments? So there may be another way to develop that critical thinking skill, but at the moment that’s sort of one of the main ways, and until we come up with another way that all kids can do it, it makes me very nervous that Gen AI will sort of, kind of basically offload critical thinking development for our kids. That’s the thing I’m most worried about. And the second thing I’m most worried about is just, I mean, we are at the tip of the iceberg with what this technology can do. And I am sure we’re going to have all sorts of incredible things in the next seven years that we couldn’t even imagine, that are like straight-up Star Trek.

Right. With neural links, you know, being able to talk to technology. We can already do that. And, you know, robotic, R2D2-type scenarios. And so I do worry about manipulation, and I do worry about socialization, interpersonal socialization, because we see what just a phone, a flat screen, text-message interaction does to kids’ ability to interact face to face. So those to me are the three things that I’m most worried about. But the first one is what makes me really worried.

Are you guys worried about that? Like how do you, how are you thinking about this?

Michael Horn: Oh, I love when you turn it back on us. We’re asking all you folks so we could develop a point of view on this. The quick answer for me is yes, I am nervous about it, given that with the current way schooling is designed, we have not thought about how to mitigate it. Which maybe is my chance to turn it back to a question for you, which is: part of the premortem is identifying. And so all three of these risks I think are big. Manipulation is big. Socialization, we had an entire episode on that question, and what do relationships look like in the future? Forget about schooling for a moment. Right. With AI bots.

Yeah. Right. And so I guess having identified those as three big ones.

What should we do? You know, like, you’re starting to think about the, yeah, what’s the mitigation piece? Right. Structurally, assignment-wise. How do we think about this so that we don’t, you know, live right into those?

Rebecca Winthrop: Yeah, we haven’t gotten there yet in the task force. So this again.

Michael Horn: Yeah, just speculation.

Yeah, well, but let me sharpen the question actually, Rebecca, because you just wrote this big book, right, or I should say important book, The Disengaged Teen, where you thought a lot about the negative implications, right, of being in passenger mode and sort of the listlessness, which I think could be a byproduct of maybe all three of these, certainly two of the three. And so how have you thought about that?

Rebecca Winthrop: Yeah, well, I think for me, the mitigation piece, I’m going to take your question broadly, Michael. For me, I have a sort of sequence of levels of things we have to think about. So, like, for me, the biggest thing, and you guys have talked about this on your podcast, is really thinking through and being very clear when we’re talking about adult-mediated use of, particularly, Gen AI, less so predictive AI, versus student-mediated or child-mediated use. And I mean that for right now. Like, we’re in a massive point of transition. We will eventually come to some new normal. But in our current transition, the discourse around AI and education is so fuzzy and flimsy and unrigorous. You guys are great because you’re surfacing that.

And so often we hear, you know, AI can transform education, it’ll be great. And I think, you know, it depends. And certainly in technologists’ discourse, you know, it’s true that AI can transform many, many things. It’s unbelievable. Like protein folding, incredible. Spotting viruses in wastewater, amazing.

Like just rapid breakthroughs that are incredible. And all of those are run by adults who have deep critical thinking and subject matter knowledge and are using the AI as a tool. And that’s very different. And then the discourse goes, and then we’ll just give it to schools and it’ll be great and kids can blah, blah, blah. And it’s like, no, well, give it to schools to do what? So, like, let’s be very clear. Is it helping teachers massively teach better, or is it helping them do the same more efficiently? Diane, you’ve made this point: those are two different things.

And it’s very different from just sort of blanketing Gen AI in pedagogy for students to use. You know, given the example of the essay, right, it might actually, well, first of all, kids don’t have the content knowledge to understand it. So I’ve spent my whole, you know, 20 years talking about the sort of academic skills plus. And now I’m like, oh, my God, let’s not forget about the content knowledge. Like, how will kids know how to assess it, the sniff test, does this seem right?

Michael Horn: Actually, can we put a pin on that just for one sec? Because that’s interesting. Like, you’ve been pushing us to be like, okay, not knowledge just for its own sake, but to do these skills, and now you’re worried we might all sort of blow past it and forget that the knowledge actually is an important base. Am I hearing you right?

Rebecca Winthrop: 100%. Like I’ve been absolutely pushing, which, you know, you both have too, the bringing together of knowledge acquisition with knowledge application. And I do think if we do it right, that’s to me the sunny possibility with Gen AI: maybe it could bring those two things closer together in a more scalable, systematic, education-system-wide effort. But I am very worried that people will forget about the knowledge acquisition piece, and that is very scary.

Learning Systems

Diane Tavenner: Can we stay here for a minute? Because I keep asking people to think about the system, and no one seems to want to go there with me. You’re the first person. So sorry, I can’t help myself. I’m so excited that someone wants to actually talk about a system, and especially in this space, because, you know, I love this space. So you’re thinking that there’s this process of acquiring knowledge, and I think we’re aligned on this: knowledge for knowledge’s sake is not super useful if you don’t have skills. What are you doing with that knowledge? Are you analyzing? Are you, you know, making an argument? What are the skills you need to bring? So paint me a picture of how AI might help bring those closer together in a learning system, if you will. Can you imagine that?

Rebecca Winthrop: I’m not sure I have a clear vision at a classroom level, but I have a clearer vision at a system transformation level.

Diane Tavenner: Okay, okay, that’s great.

Michael Horn: Let’s go there.

Rebecca Winthrop: So one of the things, you know, in system transformation theory there’s the real shifting of the purpose of a system, which is the hardest. This is straight-up Donella Meadows systems transformation theory, which argues, just maybe some of the listeners aren’t familiar, that there are different levers to shift systems sustainably. Some of them are shifting how we measure things, shifting how we allocate resources, and those are all important and good, but we tend, broadly people who shift systems, but certainly in education, to get stuck there. Which means let’s shift our assessment, which is important, we need to do it; you know, let’s shift how we put money in. It’s much harder to really shift a system that way than if you shift the shared vision and purpose of what an education is for. And so that’s a cultural shift. It’s a mindset shift. And underneath that, it includes shifts in power dynamics. So to me, Gen AI provides an opportunity to be a lever to shift sort of the purpose of ed. Because if ChatGPT and any other Gen AI tool can pass all the exams that we’re gatekeeping our systems with, can do most of the assignments, and if it can’t do it now, it will, you know what I mean? It’s going so fast. Exactly. So then we have to, it will force us.

It is forcing us, which is part of the big discussion and why we did this Brookings task force, to think deeply about what is the purpose of education. I mean, it’s a massive freaking logistical enterprise getting all kids in a jurisdiction to a place at the same time of day. It’s incredible what schools do logistically. So what are we doing? It might be hard to break that up until we have a different world of work, because, you know, mainly schools are also doubling as childcare in every single country in the world. It’s the largest nationalized, you know, government-supported child care system. So I’m not sure we’re going to just have kids roving around the world.

Reevaluating Education’s Purpose

Rebecca Winthrop: But if we have something we’re doing with kids at certain hours a day, what is the purpose of it? Is it to identify a problem in their community and then start working backwards on what needs to be fixed, how they need to fix it, and try to learn the stuff, here’s the content knowledge they may need that would inform them on how to fix it? And teachers are scaffolding and, you know, curating problem-solving expeditions, and that’s the core thing of what we do. And you sort of learn knowledge and you’re using Gen AI as a dialogue agent. I mean, I think Khanmigo is really interesting, and I think it’s a useful use case of how student interfacing could be helpful, but more, does it free up teachers’ ability to teach differently? Because I don’t think we will get away from teachers, nor do I think we should get away from teachers, because the human connection piece is so crucial. So to me it’s really the deep thought about what’s the purpose of education now.

Like, we can’t just keep going along, assigning the same tests and trying to ban cheating, you know, which is a short-term, totally understandable emergency response, because we don’t know what we’re doing and we haven’t got our hands around this. And boy, I wish, you know, tech companies would have given school districts a heads up.

Diane Tavenner: Yeah, maybe. I’m not sure that would have mattered. I must say, I do love what you’re saying. You know, years ago we created this whole experience for educators to go through: how do you create an aligned school model, sort of an elegant model? And literally, step one is to determine the purpose of education. So you’re speaking my language here. And it’s an interesting thought that this could be the lever that forces us to rethink, because the purposes it’s serving right now are so obviously met in some other way that we don’t have a choice. We have to revisit that. It’s a fascinating way to think about how it could drive system change.

Rebecca Winthrop: Just on that, Diane, Jenny and I, in our book, The Disengaged Teen, make our meta argument around why engagement matters. And really we’re focused on, you know, explorer mode. We all need more time in explorer mode, which is agentic engagement, the marriage of agency and engagement. And our big argument is it’s really time to move from an age of achievement to an age of agency in education. And we are seeing the age of achievement fraying. We’re seeing it in mastery and competency-based learning, you know, the College Board shifting up its ways of assessing, new AP test versions. We’re seeing it fraying, and Gen AI, I think, just accelerates the fraying of the age of achievement, which is all about sort of, you know, content acquisition and synthesis and skills within that, and sort of repetition back out, but really following instructions.

Diane Tavenner: Yeah. Talk for a moment about the benefit of an age of agency. What does that look like? Why is that a direction we would want to go? And how does maybe AI support it?

Rebecca Winthrop: Right. I think AI could. I’m not sure. I think it could go either way at the moment. I think it really depends on how we use it. But when we talk about an age of agency, the piece that we are really leaning into is all the evidence around the marriage of, basically, agentic engagement, which, you know, Diane, Summit, you designed for agentic engagement. So this idea that kids have agency over their learning and they have an opportunity to influence the flow of instruction in little or big ways. Summit is on the extreme. That’s a total redesign.

But you can do it in schools. Educators can do it in their classrooms by giving choice, by asking for feedback, by before starting a lecture, asking kids, where do you want to start? Do you have any questions about this topic? Like we’re doing the solar system, where do you want to start? You know, just that shifts the entire mindset of a learner. Right. Much more engaged. So A, they’re more engaged, B, they’re developing skills to really be able to independently chart their learning journey, which is what they’re absolutely going to need when they leave school. No one will be, you know, spoon feeding them. And we see that in the kids who knock it out of the park in the age of achievement. We found so many kids in our research who were excellent achievers in school and fell apart in college because no one is there, you know, spoon feeding them.

And so for us, the other piece is they’re more engaged, they’re getting sort of agency over their learning, they’re learning much better skills, and they’re much happier. It’s so much more fun to have some autonomy and ownership over your life and to try to be the author of your own life. And those are all the reasons why we think it is really imperative, and Gen AI has accelerated this need, because, you know, more than ever now, kids are going to have to navigate this world where you’ve got Gen AI, you’re going to have advanced robotics, you’re going to have neural links, and sooner than we think we’re going to be, I’m sure, interacting with, you know, new robotic people. It’s a wild world that’s coming down the pike, and our kids need to lead it rather than be led by it.

Diane Tavenner: That’s. Yeah, Michael, I feel like I’m hogging all the time. Do you have a question?

Michael Horn: Well, maybe last question before we wrap up, Rebecca, which is, let’s say we have the purpose conversation. If not nationally, at least in strong pockets of communities, we commit to an age of agency and we start to think about what that is. Where does AI fit? You know, you’ve been impressed by it in certain cases. So where do you see it? What’s the positive case to be made for it in this rethought purpose of schooling with a coherent design?

Rebecca Winthrop: I mean, I think the thing that I am most potentially optimistic about, and I know, Diane, I think you disagree with me, but in the age of agency, I think if we’re rethinking the purpose, a huge barrier to that is teacher expertise, practice, prep. And we’ve got a ton of teachers who’ve been trained in the age of achievement, and it is not their fault. They’re teaching their heart out and they’re doing their job. And, you know, we’re very clear in the book that this is not a problem with teachers. They’re squished from above with the system and squished from below, frankly, with parents sort of pressuring them. And so could Gen AI really unlock teachers’ ability to be experts in a new, let’s pretend, school that is organized around solving problems? I think we need a huge piece of that solving problems to be around citizenship and civics, in sort of personal, collective, and community-wide problems.

But I feel like, if done well, it could really be a massive boost for educators, so it isn’t so scary, so they’re not thrown into a whole new purpose of ed, an entirely new system with different, you know, ways of succeeding, without some serious support.

Michael Horn: No, that’s super helpful. I like the vision in general. I’m taking from this conversation that whereas it’s kind of hard to have these national dialogues, or dialogues even in communities, around purposes, maybe AI is such an abrupt, big shift that it actually brings us to the table to say, what the heck are we doing here? Because every single one of the stakeholders is like, this ain’t working. And so let’s talk about what we are actually trying to accomplish here. So maybe we’ll leave it there, Diane, and shift to the last part. Rebecca, we have this tradition that our listeners enjoy. Yep. For better or worse.

They keep lists, apparently, of what Diane and I have read or watched. But we want to hear yours. What are you reading, watching, or listening to, often outside your day job, but it’s okay if it intersects with it?

Rebecca Winthrop: Well, I don’t watch much, I must say, except for Shrinking, which I rushed through. Loved it, loved it, loved it. That was the best.

Michael Horn: Incredible.

Rebecca Winthrop: I can’t wait for the next season. But I actually don’t watch a lot of stuff. But I do love to read. So I have two things here. One is Unwired: Gaining Control over Addictive Technologies by Gaia Bernstein. It’s awesome.

She’s a lawyer at Seton Hall, and it’s a really good book, and I’m not all the way done. And then the other one is a novel called Dust by Josh Classy that just came out. It’s a sci-fi, like a new Lord of the Rings.

Michael Horn: Oh, cool.

Rebecca Winthrop: Wow. Wow.

Michael Horn: All right. I like that.

Diane Tavenner: Yeah, I like that too. That’s fun. Well, I have one this week. I was telling Michael, you know, he’s not the only sort of author fanboy; I’m a fangirl too. This week I met a woman named Samara Bay, and she has authored a book called Permission to Speak: How to Change What Power Sounds Like, Starting with You. She’s fascinating. And I got to have coffee with her last week, and we did like a joint book club. We switched books and then got to sit down and talk about them.

I know, super, super fun. She’s got this incredible journey. She wanted to be an actor. She became a dialect coach. She worked with tons of famous people like Gal Gadot, et cetera, and now has turned her passion for helping people toward those who are really trying to drive impact in the world, helping them find their voices in public speaking. Which, you know, here’s the inside secret: figuring out how to get out of your own way is really the secret to it. And so it’s a beautifully written book.

It’s also a super practical guide in many ways and so highly recommend it. Really enjoying it.

Michael Horn: Awesome. Awesome. Diane, I realized the podcast recordings are outpacing my ability to keep up with the reading and so forth. And like Rebecca, I’m not a huge TV person outside of sports and Shrinking. So yes, there we go.

Yeah, but I’m almost done with a book, Task versus Skills: Squaring the Circle of Work with Artificial Intelligence by Mark Stephen Ramos. He was the Chief Learning Officer at Cornerstone, is no longer there, but has been starting to do some writing and thinking about how AI changes our learning organizations, or organizations where people need to be upskilling and reskilling. So far it has been interesting, deeply technical, and I’ve kind of enjoyed it. And I’m not at all getting outside of work, so apologies on that, but no apologies for having Rebecca here. This has been fantastic.

Diane Tavenner: Thank you.

Michael Horn: Yeah, thank you so much for joining us. And thank you again to all of you, our listeners. A reminder to check out Rebecca’s book with Jenny Anderson, The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better. Check it out, read it, digest it. We’ll have more conversations about it, I suspect. And let’s all stay curious together. We’ll see you next time on Class Disrupted.

Podcast: The Challenges AI Poses for Learning How to Write /article/podcast-the-challenges-ai-poses-for-learning-how-to-write/ Wed, 16 Apr 2025 16:30:00 +0000 /?post_type=article&p=1013741

Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing.

In this episode, Diane Tavenner and Michael Horn delve into the role of AI in writing education with Jane Rosenzweig, director of the Harvard College Writing Center. Jane underscores the importance of writing as a process of thinking and warns against the “deskilling” of students through overreliance on AI. The conversation explores how AI may help address resource shortages in education, while also pondering whether AI’s efficiency overshadows the importance of deep learning and authentic writing skills.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hi there. I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest Jane Rosenzweig as part of our series exploring the potential impact of AI in education. This is where we’re interviewing optimists and skeptics, and I loved spending time with Jane, who’s a true expert in teaching writing. I keep thinking about a few of the key ideas from our conversation. One of them is, why do students even need to write anymore? Arguably, ChatGPT, Gemini, Claude, and the others are literally designed to write, and likely a lot better than most people. So what’s the purpose of writing, and specifically of teaching students to write? No wonder I’m still thinking about it, because as a former English teacher, this feels like a big existential question. The other one that’s sticking with me is this idea of AI optimism, arguing that, you know, this is a chicken in every pot. And honestly, who knew we’d be calling back to Herbert Hoover’s campaign slogans? But here we are at this very strange and interesting moment in time. And honestly, I can’t wait to unpack all this that we’re learning and talking about with Michael. But until then, I think you’ll really enjoy this conversation that we had with Jane.

Hey, Michael.

Michael Horn: Hey, Diane. It is good to see you. And I’ve been reflecting between the conversations on how each episode that we’ve had in this series on AI has been so different. I’m just marveling, frankly, at all the different perspectives and viewpoints and levels at which people tackle our questions around AI and education in ways that I had not anticipated at all. And I’m also pretty certain today will be no different, which excites me.

Diane Tavenner: Michael, I totally agree. And, you know, when we first conceived of the series, we called it a miniseries, but as we’ve gotten into it, we keep thinking of more people who we want to interview, because we are hearing such a fantastic range of perspectives. And I will admit, when you proposed our guest for today, I got super excited, because my favorite thing to teach when I was teaching was writing. And it was my favorite because students made such meaningful and tangible progress. It was super rewarding as a teacher to be able to give feedback and, like, literally watch them grow, you know, in a matter of days. And that’s why I’m so excited to welcome Jane Rosenzweig to the show. Jane is the director of the Harvard College Writing Center and a longtime expository writing instructor, and she’s been writing a lot about the impact of AI on writing in particular and what we may lose in this age of AI. And since 2023, she’s also taught a course called To What Problem Is ChatGPT the Solution? Super excited to dive into this conversation. Welcome, Jane.

Jane Rosenzweig: Thank you. It’s great to be here.

Michael Horn: Yeah. Well, we really appreciate you saying yes to the invite. And before we get into a series of questions around AI and education specifically, we’d love you to actually just start off sharing with the audience and with us, how did you decide or get pulled into this topic of AI so deeply itself? Like, what sparked you down this journey where you’re now contributing to the Boston Globe it seems constantly with, like, really interesting perspectives about how to think about these questions.

Jane Rosenzweig: Yeah. So like many good things that happen to all of us who teach, this began for me with a student. It was actually one of the writing center tutors. So about, I would say about a year and a half before ChatGPT was released, I had known that there were sort of ways, things in progress that were going to try to automate writing. And every now and then someone would call me and say, you know, do you want to work for this company that’s going to automate writing? And I would say, probably not. But I hadn’t really been diving into this. And one day I was in my office and I was looking at something called Jasper AI. I believe it was just one of the earlier AIs.

And one of the writing tutors was standing in my doorway chatting. And I said, hey, do you know anything about this? He’s a computer scientist; he was studying computer science. And he said, oh, not that one, but here’s what you need to know about: the GPT Playground. So right then and there, he came into my office and he showed me.

So the precursor to ChatGPT was the GPT Playground. And it was a little bit different. It wasn’t a chat interface in the same way. You had to kind of figure out how to prompt it. And so I started playing around with that.

I started giving it my assignments, my writing assignments, just to see what it would do. I was trying to generate a paper about an article by Michael Sandel that my students were reading. And I just started to see, oh, yeah, this is something, right? So I started thinking about it, and this went on for a while. I was just kind of experimenting. I didn’t know that ChatGPT was on its way. About a week before ChatGPT was released, I published the first of my Boston Globe pieces.

Impact of AI on Writing

Jane Rosenzweig: It was called What We Lose When Machines Do the Writing. And it was all my musings on how I’d been trying to get the GPT Playground to write this Michael Sandel paper, among other things. And then a week later, when ChatGPT came out, I was suddenly the person who knew more about this than a lot of my colleagues, right, because I’d spent all this time with it. And so suddenly I was, you know, an authority, in a very small way, because I knew what it could do in terms of writing. And then when I published that piece, I sent it to a friend at the Berkman Klein Center, and he invited me to come over to a conference they were having. And I just started becoming part of the conversation very quickly.

Yeah. And then it went from there. The rest is history, as they say.

Michael Horn: Wow. Wow. Well, let’s zoom out then for a moment, before we get into the topic of these op-eds that you wrote for the Boston Globe, specifically What We Lose When Machines Do the Writing, as you just referenced. And I didn’t realize, I guess, mentally, that it appeared literally the week before ChatGPT came out. That’s unbelievable. But I would love you to make the positive argument for AI in education, even if it’s not your personal point of view. Sort of, what’s the best case that you’ve heard around where AI can enable us to do things for students that maybe we wouldn’t otherwise be able to do? Or what can it positively impact, even if you don’t necessarily buy into that viewpoint?

Jane Rosenzweig: Sure. So I should say my expertise is teaching writing, and these arguments about what AI can do in education certainly go way beyond what I’ve spent my career focusing on. So I think that’s important to note. I’ve heard a number of these arguments, and they seem to be changing depending on what the market seems to be interested in, to a certain extent. So I’ll just talk about what I talked about with my students in class today.

Today they had watched Sal Khan’s TED talk about how AI might save, and not destroy, education. And so we had a really interesting conversation in class. But I think of this as kind of the chicken in every pot argument. Right. So the positive view of AI in education goes something like: everything that every individual student needs can now be delivered by some kind of AI chatbot. So he talks about how there’s a shortage of guidance counselors. AI can be your guidance counselor. You need extra help in math.

AI can be extra help in math. You need a teacher. AI can be a teacher. Oh, wait, you only need help, you know, generating some brainstorming. AI can be a brainstorming partner. So the kind of positive case, it sounds like the people who are making that argument are saying, you know, the dream is it’s whatever we need it to be, in whatever moment we need it to appear.

Michael Horn: A superhero, as I hear you saying that. And so that makes me curious. If that’s sort of the chicken-in-every-pot argument, what parts of that do you in fact believe? Or maybe the better question is, are there parts of it where you’re like, yeah, there are facsimiles of that I think are right, but I would modify it in this way to make it, you know, yes, that could be a positive?

Jane Rosenzweig: So I’m skeptical of any argument. I mean, I teach academic writing. I teach academic argument. What I’m asking of my students all the time is for evidence to support their claims. I’m skeptical of any argument that goes so big without any accompanying evidence. Now, there is certainly some evidence. There are some really interesting pilots going on at Harvard, one in the Physics Department.

We’ve absolutely seen evidence that people think AI can be useful in small ways. But this chicken-in-every-pot argument? There is no evidence for this, as far as I can tell, that it’s really going to solve every single problem that everybody has. And yet that’s often the way this is being presented. Not just that this is a tool that might be able to help people learn a difficult kind of physics, which I buy, right? I’ve looked at the results of their little pilot. It seems very useful.

Michael Horn: That study was pretty interesting. Yeah.

Jane Rosenzweig: Yeah. So that’s very different from the kind of AI-can-be-a-personalized-tutor argument, which seems to lean heavily on the personalized without a lot of evidence for what that actually looks like. And so, yeah, I would like to put everybody who’s making these claims through the kinds of assignments I put my students through. Okay, give me this arguable claim now. Show me the evidence. Show me the counterarguments.

I don’t think we’re there yet. And I think the argument for the positive outcomes of this is way ahead of what we actually know about the technology at this point.

Skepticism on AI’s Impact

Diane Tavenner: Sort of classic Silicon Valley. We tend to oversell well in advance, before we have any of the goods to actually prove it, and it generally falls short of what was sold initially. Well, let’s go in a direction that is closer to your expertise and that does land more where you are. Let’s have you take the opposite now, the skeptical take. Specifically, what will AI hurt, and how? And although I’m sure you could make a steel man argument here as well, you’ve written a lot about how claims like “AI is reducing friction” are actually counterproductive in learning, as in: productive struggle is the point. And you’ve already started to get into this production of evidence and thinking. So tell us more from your real area of expertise. What is AI hurting, and how is it hurting it? What’s going on here?

Jane Rosenzweig: Sure. Well, when you teach writing, and I’ve taught writing for 25 years, the first thing that comes to mind about ChatGPT and the initial conversations, right, were all about: well, if it can just generate the paper, why would anyone want to write the paper? Or why would anyone need to write the paper? So I think one of the interesting things that’s happened is that we’ve had a really productive conversation, not just at Harvard but across institutions, a public conversation about why we do what we do to begin with. And being able to articulate why I would want my students to write a paper even if a chatbot could just generate the paper has been, say, challenge number one. I mean, that was always a challenge, trying to help students see why there’s value in this thing that we’re doing. One of the ways I’ve put it, that I always talk to my students about, is that I’m not asking them to write a paper because I need a paper. Right? This is the product-versus-process argument. We’ve got plenty of papers. We don’t need any more.

So the idea that you would go out and generate your paper with AI: sure, then I’d get a paper, but you wouldn’t have had an experience. Writing a paper, the way I conceive of it, is an experience in figuring out what you think about something. That’s what I find valuable. So AI challenges that just by its existence. Right.

You know, needing to understand why you would bother doing this thing. And then we have the questions about, well, could AI help with this process in different ways? I’ve had many conversations about this with a lot of people. And one of the things that people bring up a lot is, well, sure, AI could just help students with the brainstorming and then they could write the paper themselves. But to me, if I say that this is all an act of thinking, that I actually want you to figure out what you think, it’s hard for me to see the role of brainstorming with a chatbot in those moments when maybe the productive friction should be existing, where you should be trying to figure out, between you and the text or you and the video of Sal Khan or whatever it is, what you actually think. So I have some concerns there as well. We could go through every argument. There’s, well, could AI make an outline for you? Same questions arise.

Right. Once an outline’s written, then you’re exercising someone else’s vision. It doesn’t really work for me.

Diane Tavenner: I’d love to dig in on this one a little, because I’ve been working with some young people who, by the time they’ve come to me, have already tried to use ChatGPT to help them brainstorm and outline. And I have some thoughts and opinions about what I’m seeing, which is that not only are they not doing the thinking, but the work that’s being produced is not very good. And it’s sort of obvious to me that it’s very ChatGPT. On the flip side, we’ve been talking to people I would call very sophisticated experts in their fields and areas of expertise, and they’re having a lot of success using GPT. And so I’m wondering, is there something in between, where for younger people who are still learning, who don’t have expertise, this is not as effective, and maybe for experts it’s more effective?

For example, for you, as an expert in writing, it could be a very different tool. I wonder if you’ve noticed anything or are picking up anything in that space.

Jane Rosenzweig: Yeah, so I think you’ve really hit on something that I’ve actually thought a lot about. When we say AI is a tool, people say, well, it’s just a tool and you could use it to enhance your writing. Well, generally when you use a tool, you know what you’re doing, and so you know what you need the tool for. Right? If I have a nail, I know I need my hammer, because I know that the nail has to go into the wall. One of the things that I’m worried about with AI is that you’re handing someone a tool before they know how to do the thing.

So that’s why, I mean, if you don’t know what you’re trying to create, then how would you know how to use the tool? Now, I can already hear all the potential counterarguments to that as well, but I think there’s something really solid there. Right? I’m really worried about, in a sense, deskilling my students. I want them to know how to do the thing so that if they want to bring in the tool later, they can bring it in in a way that actually works for them, where they know what they’re doing. Whereas if you hand this tool to someone who doesn’t know what a solid argument looks like, doesn’t know what it means to connect with a particular kind of audience, and they say to ChatGPT, how do I connect with my writing instructor? What is she looking for? It’s going to draw on this predictive caricature of a writing instructor. They’re not going to learn what I want them to be learning.

Whereas, yeah, sure, I can use it. I don’t like using it. I don’t find it particularly helpful for my own work, but I could write a student paper with it that’s much better than what I’ve seen my students write with it at this point.

Incorporating AI in School Design

Diane Tavenner: Interesting. Yeah, that’s super fascinating. One of the things we like to do is just imagine, if we could all wave our magic wand and design the schools that we want, what good parts of AI would we incorporate in that school design right now? Is there anything worth incorporating? And as you’re talking, I’m thinking about, and I’m curious about, my assumption is that when you’re teaching writing to your students, you have a vision of how they’re actually using that skill when they leave the university. I’m making up stories in my head right now, but I’m sure the folks you’re teaching to write are going on to write extraordinary research papers, or maybe they’re even becoming journalists, or maybe they’re just becoming very effective at communicating their ideas in whatever role they’re in. So first of all, I should check and make sure that’s true, that that’s how you think about the purpose of writing. But what could you possibly do with AI that would enhance that? We’ve talked about what it’s going to take away, like if students are just trying to replace the learning they would have to have in order to be good at those things later. I don’t know if that’s making sense, but hopefully you can make some sense of that.

Jane Rosenzweig: Well, sure. I mean, my students are fabulous. They go on to do all kinds of interesting things. I think it’s really important. There are a lot of students who are studying STEM topics who are taking my AI-focused writing class because they’re interested in this topic. They’re going to write. They don’t always know it yet, but they’re going to write grant proposals.

They’re going to be the boss of people and ask for memos and write memos and all of these things. So I do think there’s something certainly instrumental in that way about preparing students for further writing. But I also like to think that when we’re talking about writing, I’m really trying to focus on: how do you know what you think, and why do you think what you think? And this is not the only way. You can certainly have conversations and figure out what you think in many ways. But when you’re asking a question the way we do in academic writing, you’re asking a question and then trying to examine the evidence and figure out what you think the answer is. And this is also a way of being in the world that I want my students to absorb, right? It’s not just so that they can write a memo at work. It’s so that they can look at things. They can look at a video or read a book outside of my class and bring that same kind of inquisitive mind to it.

So, again, those are things that I wouldn’t want to see outsourced, whereas later on, sure, they’re going to make a choice to outsource memos that they’re writing at work. It’s about thinking about what you’re trying to do. My class is called To What Problem Is ChatGPT the Solution?, and this has been a really helpful framing for me in so many ways. Why are you using it? If you’re using it to solve a problem that it solves, then maybe it makes a lot of sense. But if you’re using it to do something where there’s a different goal, right, I want you to have an idea, I want you to have an opinion, does ChatGPT help with that? I think we’re less sure about that.

Recognizing the Importance of the Writing Process

Michael Horn: So, Jane, you’ve just actually clarified a few things in my head for me personally about my own writing process. One of them being that outlines never mean anything to me, and I think the reason why is I don’t know what I think until I’ve written my way through the problem. So this has been like being on the couch for me. But the second thing I’m curious about is, you’ve essentially noted, right, that part of the reason for writing is to help people develop this muscle of how to clarify their thinking about whatever question is in front of them, whatever they’re trying to figure out. And what strikes me, right, is it’s not about the performance or the end product. It’s about the process. The other thing, though, that I become curious about, and I think you’ve written about this, is that at least at the K-12 level, a lot of schools are not making that purpose of writing clear to students or to themselves, or maybe aren’t grading around that purpose, right? Around the importance of the process and figuring out what you think as you wrestle with something through your writing.

I’m curious, are there places that are getting this right in your view? Do you know K-12 schools that are doing this right? And if not, how do we start to move to that world?

Jane Rosenzweig: Yeah, I think the release of ChatGPT actually created a really useful moment for us to be thinking about what we’re doing when we teach writing, when we assign writing to begin with. There’s always been a little bit of a disconnect that I’ve noticed between what my students were doing in high school and what I was asking them to do when they got to my classroom. That’s normal, right? We have a transition from high school to college writing. But I think one of the things that’s a real challenge is that my students will tell me a lot that they learned to write in preparation for standardized tests, right? So there’s a particular kind of writing where you are not writing to discover something. You are writing to demonstrate that you know how to do this thing, which we sometimes call a five-paragraph essay: how to sort of approximate a way of interacting, of communicating, even if you’re not actually being told that you need to say something that matters to you or that’s of interest. Some of this, I mean, I understand. I couldn’t begin to suggest what should be happening in K through 12 in terms of how we could move away from the standardized test model. But I do think that it’s difficult for students when they’ve always done this thing quite well, according to a kind of rubric of do you have a thesis statement and do you have three points in separate paragraphs, and then someone like me comes in and says, right:

Do you actually believe that? What do you actually think? What about counterarguments? So counterargument is often the new piece that we introduce. Right? You can’t make an argument that is going to hold up if you can’t understand who might disagree with you. So interestingly, this is a place where ChatGPT was somewhat useful. In my class, I built a counterargument chatbot for my students. It was just a little pilot. I teach a class about AI, and I thought this would be entertaining. It forced them to go through a series of steps, to answer a series of questions about their thesis for a paper they were writing. It wouldn’t tell them anything, though.

It wouldn’t give them any answers. It was just asking them the questions. And so they actually found it kind of frustrating. And this, I think, is an interesting point about how we think about AI. They didn’t all find it frustrating, but the ones who said they found it frustrating said they were expecting an efficiency tool. Right? They are used to thinking that ChatGPT is going to save them steps. But I had spent ages trying to make this chatbot behave more like I would behave, which was to just keep asking them: what about this? What about this? And then I told it to give them some potential counterarguments, but they didn’t have to be correct, right, because I wanted them to have to engage. And so I do think there are moments where something like that might be helpful. But it’s kind of doing battle with the perception that AI is supposed to save you time, and what I want is for certain things to take as much time as they need to take. One of my students said he’d rather just talk to me about it if it’s going to take him half an hour anyway. But there may be some interesting scalable ways to do something like that. Which brings me to sort of where I always end up when I think, oh, but maybe there are some interesting things to do.

But these are my concerns. I think those of us who are in the classroom are very aware that there’s a big difference between the way an AI tool could be used and the way it’s likely to be used. And I think if we don’t admit that and grapple with it, then we’re kidding ourselves. Right? Students need to see the value in what they’re doing in order to want to do it. That’s the great thing that’s come out of this conversation about ChatGPT: a lot more of us trying to articulate the value of what we’re asking students to do. But they have a lot of competing demands.

And so in a given moment, are you going to spend half an hour, or are you going to ask the bot? I think we just need to be realistic about this.

Diane Tavenner: So cool. I think we should end it there, because that’s such an important point. And I’m loving the reflections I’m having already, so thank you for prompting those. We have this fun tradition, Jane, where we end each episode by sharing one thing that we’re reading or watching or listening to. We try to make it outside of our day jobs if possible.

And so we’d love to invite you to share something to add to our list, recommended or not.

Jane Rosenzweig: Okay, so I have been watching a TV show called Palm Royale on Apple TV. I don’t know if you know about this. Kristen Wiig plays a 1969 wannabe socialite in Palm Beach, Florida. It’s not high art, but why I’ve been really enjoying it is because it’s not taking place now. It’s taking place in 1969.

Diane Tavenner: That is awesome.

Jane Rosenzweig: I think we all need to take a vacation to a different time now and then.

Diane Tavenner: Well, well, speaking of different times, I think this one might surprise Michael, given that this is not a thing I’m normally reading. I’m not a big canon person, but believe it or not, I’m actually listening to the Odyssey. I mentioned on some of our other episodes that we’re headed to Greece in a couple of months, and so I’m diving into nonfiction and fiction related to this trip. It’s been a long time since I have visited this poem, and this time around I’m listening to a translation by Emily Wilson, narrated by Claire Danes. And it’s funny, I wish I would have talked to you about this, Jane, because you might have some thoughts about it. Who’s translating makes such a huge difference, obviously, if you know anything about translation.

And I am in this moment in time where I’m feeling like we need more female energy, at least in Silicon Valley, in my corner of the world. And so I am loving the extensive explanations about the choices the translator has made and how they contrast with so many of the historical translations. It’s fascinating and beautiful, and so I’m surprising myself and really enjoying it.

Michael Horn: I love that you picked that, as someone who took so many classics courses my first year. And it sounds like you’re listening to my college classmate, because Claire Danes and I were in the same class at Yale, so there’s that. And Ryan Holiday, who I just heard speak, was talking just the other day about why the Odyssey still resonates centuries later, even with his six- or seven-year-old kid. So that’s super fun. I confess I just finished a book that falls back into our work lives, so I apologize. It’s a book published by the Harvard Education Press titled Who Needs College Anymore? Imagining a Future Where Degrees Won’t Matter, by Kathleen deLaski of the Education Design Lab. I will say it provoked a lot of different thoughts for me, mostly informed, I think, by my growing view that we need to think a lot about how we give young people the opportunity to have real work experiences when they are students and see the value of what they’re doing. And it actually connects in an interesting way to what you said, Jane, about making sure that the purpose of things is in the foreground as opposed to in the background for learners.

And that’s how I’m connecting with the book first and foremost. So I will leave it at that for the moment. I suspect, Diane, we will have deeper conversations on that at some point, but for now I’ll just say a huge thank you to Jane. This has been a fantastic conversation and has opened my perspective on a number of things. And of course, thank you to all of you, our listeners. We will see you next time on Class Disrupted.

Podcast: How AI Is Changing How Young People Connect /article/podcast-how-ai-is-changing-how-young-people-connect/ Wed, 26 Mar 2025 20:01:00 +0000 /?post_type=article&p=1012554 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

On this episode, Diane and Michael welcome guest Julia Freeland Fisher, the director of education research at the Clayton Christensen Institute. The conversation explores the potential and challenges AI presents in the educational landscape. Julia shares her insights on the importance of using AI to enhance personalized learning experiences and facilitate real-world connections for students. She also voices her concerns about AI’s impact on human connection, emphasizing the risk of AI replacing genuine interpersonal relationships.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey there, I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest Julia Freeland Fisher as part of our series exploring the potential impact of AI in education, where we’re interviewing optimists and skeptics. I really enjoyed talking with Julia, and I keep thinking about a few key ideas from the conversation. First, Julia’s expertise related to social networks gives her a really important perspective on AI and its potential to either harm or help with social networking, which is such a critical factor in career and life opportunities for young people. She was really compelling in talking about how real experiences matter, and I think you’re going to enjoy listening to her talk about how using AI to create what she calls infrastructure in digital experiences could enable young people to build social networks. Infrastructure is in contrast to chatbots or agents, which are a really different experience. The conversation caused me to deeply reflect on my own social network: how I created it, how I use it, and how complex it is. At the same time, I’m thinking a lot about a handful of young people I know, what their social networks currently are, and how AI may or may not be interrupting their building of the social networks that they need and will depend on in the future. And then I’m also thinking about what that means for me at my age and stage. It’s been a fascinating rabbit hole that I’m really hopeful will yield some positive impacts on the product I’m building, and on how my behaviors as the leader of a company evolve and respond to this moment in time. All of that to say, I truly cannot wait to thoroughly think through all of these ideas with Michael, but until then, I think you’ll really enjoy this conversation we had with Julia.

Diane Tavenner: Hey, Michael. 

Michael Horn: Hey, Diane. Good to see you. 

Diane Tavenner: You too. So, Michael, since we started this miniseries on AI and have begun interviewing all these really interesting people, I’ve started to notice AI literally everywhere in my life. And so I remembered something about this from my psychology days, and I had a conversation with, yes, GPT about this to try to make sense of what’s going on. And it turns out there is a particular psychological phenomenon going on here. I think I’m going to pronounce this correctly: it’s the Baader-Meinhof phenomenon. It’s also known as the frequency illusion.

And basically what happens is when you learn something new, you think about it, you focus on it, and then you start noticing it everywhere. And this is the result of two cognitive biases. The first one is selective attention bias, where your brain is now sort of primed to notice the thing you’ve been thinking about, so you pay more attention to it in your environment. And the second is confirmation bias: once you notice it repeatedly, you interpret this as evidence that it’s suddenly more common, even though its frequency hasn’t actually changed. I don’t know about you, but I find this fascinating, how our brain sort of filters and amplifies information based on what we’re focused on. And so, yeah, that’s happening to me. Just to illustrate my confirmation bias.

I’m actually going to say out loud that I do think AI is everywhere. And I’m betting the person we are about to talk to today might feel the same, because a lot of her recent work is on AI.

Michael Horn: Well, I think you have nailed the lead-in, Diane. That’s a perfect segue as well for today’s guest, who I suspect is nodding along, excited to have her: Julia Freeland Fisher. I’ll say up front, I’m really excited about this one because Julia has been a longtime colleague and friend of mine. I hired her at the Christensen Institute as a research fellow. And then when I left full time, just about a decade ago, she stepped in as my successor. It was like a version 2.5 or 3.0 or something like that. We jumped ahead several generations.

So it was terrific. And I couldn’t be more thrilled, frankly, about the work that she’s done since, because she’s really elevated the important topic of social capital in the education conversation. She’s frankly taught me a ton along the way. The book she wrote a few years ago, if you want to catch up on her work, is Who You Know. But most recently, she published some really interesting research about AI in education titled Navigation and Guidance in the Age of AI, which we’ll link to in the show notes. I’m sure we’re going to get into that and much more. But first, Julia, before we do that, just welcome to Class Disrupted. Great to see you.

Julia Freeland Fisher: Thank you. So honored to be here with both of you.

Michael Horn: Well, we hope you’ll still feel that way by the end. But before we get into a series of questions we have for you, let’s table-set a little bit and share with the audience: how did you get so deep into this topic of AI? Because, as we said, you’ve been researching social capital in education for several years now. You’ve thought a lot about the role of technology in that equation, clearly. And you’ve thought a lot about how schools perhaps should redesign themselves to become more permeable, if you will, to the outside world. But why AI, and what’s been the scope of your research around it?

Reimagining EdTech for Human Connectivity

Julia Freeland Fisher: Yeah, absolutely. So historically I was sort of obsessed with the concept of, and I’m putting this in air quotes, edtech that connects. I’ve been really disheartened, but still optimistic, that there’s a long runway of innovation if we were to start to think about education technology not just in service of content delivery or productivity or assessment, but also in service of connection: that young people could overcome the boundaries of their existing networks, that they could connect with peers and professionals who shared their interests, that there’s just so much possibility if we started to do in the classroom what many of us do in our working lives, using technology to connect across time and space. So I’ve been studying that for a long time, and it has been a small but mighty market, certainly not something that has grown significantly, and that has made me painfully aware of just how much the edtech market ignores connection as part of the value proposition of school. And so enter AI, and we’ll get into this more. But for all of its fantastic productivity upside and intelligence, the piece of AI that I’ve been paying attention to is the tendency to anthropomorphize it and to make it human-like, to make it capable of mimicking human emotion and empathy and conversation. Because what I see unfolding, and this is not inevitable, it just has to do with how the market absorbs it, is a true possibility of disrupting human connection as we know it, because we don’t value it to the level the market ought to.

And because the technology has suddenly taken this dramatic turn toward human-like behavior, affect, tone, etc. So I’m just fascinated by that. And I want those of us inside of education, I want parents, to be awake to this dimension of the technology that was maybe lurking, but wasn’t really dominant, in the edtech old days, the sort of version one of edtech, where we weren’t giving these tools the same sort of voice and emotion that I’m seeing now. So that’s a little bit of it. But, you know, at various conferences I’ve been labeled a pessimist and a doomer. I really want to come to this conversation as a realist. I work for the Clayton Christensen Institute for Disruptive Innovation. I am not anti-technology. I am worried about the market conditions inside of which the technology is evolving.

Diane Tavenner: Well, Julia, I’m so glad we started there, to ground everyone in the work you’re doing and how you think about it. And I’m going to give you your moment to be the realist. Let’s start by just inviting you to make the steel man argument in favor of AI in education. In your mind, what’s the best possible scenario for AI in education from your perspective, given your work, and even, you know, as a mother? What’s the best possible outcome we could reach?

Rethinking Personalized Learning Potential

Julia Freeland Fisher: Yeah, I want to first just describe how surreal it is to have Michael B. Horn and Diane Tavenner asking me that question. I’m chatting with two luminaries that I’ve learned so much from in thinking about the potential of tech to really personalize learning. And I know that term gets overused and is maybe out of fashion now, but it’s a little absurd that I would be providing an answer to you on that. But here I go anyway. I think, just quickly, it’s the potential to scale a system of personalized content, experiences and support, and thinking about those three things actually as kind of separate strands or value propositions is key. The adaptive content and assessment piece may be the most obvious, the most familiar evolution on top of how we’ve talked about edtech in the past. But I’m actually probably equally or more excited about the possibility of seamless infrastructure to support a mastery-based system that also gets students connected to new people and learning in the real world.

And it’s infrastructure doing that. It’s not the AI talking to the student that’s doing that. And I’m not sure how much investment we’re seeing there, you guys may know more than I do, but that’s kind of my vision: the more time I spend with the tech, the more I see how much that could actually be feasible in a way that even 10 years ago felt out of reach. I think we all had sort of dreams of that, but the tech was a little bit clunky and, you know, it could create a pathway. But the idea of flexible pathways that actually were adaptive in real world contexts felt a little more out of reach.

Diane Tavenner: So let’s stay here for just a minute, Julia, because I want to make sure people really understand what you’re saying by infrastructure. We’ve had dialogue around it, and by the way, I’m working on this, you know, I’m working on this. Got one person in your corner. We’re getting closer and closer, but like, we’ve had a bunch of conversations about sort of chat bots or agents or things like that. And when you’re talking infrastructure, that’s kind of in contrast to the experience that I think most people are having right now. So just illuminate that a little bit for us. Like make it so everyone can visualize what you mean.

Julia Freeland Fisher: Yeah. Let me name two pieces of infrastructure, one of which I know Michael has featured in some of his work, and then another of which I’m not sure if you guys have talked about. So one is a tool called Protopia. It’s used in higher education; its founder is Max Leisten, I believe. And the tool that Max has built, you know, he partners with alumni engagement offices. And the way the tool works is students can go onto their career services website, ask a question, and based on the content of the question, Max’s tool will call through the alumni directory of that school, find the alum who is best suited to answer the question, and email them directly to their email. There’s not a clunky app that you have to go through, and if they answer that student’s question, fine. If not, it will go to the next best alum to answer the question.

So that’s infrastructure. It’s behind the scenes, it’s facilitating an opportunity for learning. And in this case, obviously I’m highlighting it because it’s facilitating connection as well. But it’s sort of doing the behind the scenes manual work that is not like high quality human work, but is necessary if you want a system where students are moving beyond just a singular predetermined path and actually having opportunities or conversations beyond it. The other one I want to highlight, that I actually think is illustrative of why this is exciting and also why I’m, like, a little bit getting labeled the doomer, is a tool called Project LEO that spun out of Da Vinci Schools in Los Angeles. And it’s designed to create bespoke individualized projects, aligned with the principles of project based learning, based on students’ interests and their, like, ikigai, which is that Japanese Venn diagram thing.

What was so exciting in the initial version of this tool is that not only did students get a personalized project aligned to their interests, one that also aligned to the teacher’s sort of core content that they were trying to hit on, but it would also connect them to a working professional who would give feedback on their project. Now, as they’ve rolled out the product, the demand or the willingness to pay for that last feature has been quite limited. So it’s not currently sort of part of the main product. And I say that to say, like, infrastructure for project based learning, that’s exciting to me. Right. It’s been perennially hard to scale project based learning that’s interest based. Diane, this is like again absurd for me to explain this to you, but that’s really exciting, right, that it doesn’t sit on a teacher’s desk to have to create 25 unique projects.

I would like to though see the market mature in a way where demand for that last mile connection out to the real world is also there and people are willing to pay a premium for real world experience. So those are just two examples of like it’s the behind the scenes creation of stuff that students then do. It’s not necessarily a student facing adaptive tool, which I’m not totally down on. Like I think there’s a place for that. But that’s the infrastructure conversation.

Diane Tavenner: Super helpful.

AI: Pessimistic and Realistic Concerns

Michael Horn: Yeah, yeah. So Julia, you’ve painted that picture of what could be, and frankly a layer of AI that’s much more invisible, I think, facilitating these sorts of interactions, experiences, connections and so forth. I’d love you to take now the flip side. And you said you’ve been labeled a pessimist, so maybe it’s, I was gonna say give us the skeptical take, but maybe it’s the realistic take. But let me ask it in a way that’s a little bit more directed, because we want this part of the conversation to be about what you fear AI is going to hurt, and how. And although I’m sure you could also offer, like, a real, you know, sort of steel man argument here as well, I think that your research has a lot to say around what you’re seeing and what implications might mean that we ought to be wary, or at least on guard, right now.

Julia Freeland Fisher: Yeah. So there’s, there’s two things I want to name here, and one of them I could go on and on about, which is human connection. So let me say the first one briefly, which is I’m worried about it harming the concept of experiential learning. And then we’ll get to human connection. The concept of experiential learning is so exciting to me. It’s what I want for my kids. It’s what I want for all kids. And as much as I think that I just described two examples of infrastructure that could get us there, I think the market is much bigger for simulated experiences than actual experiences.

And I think a lot of the hype around AI is like, these bots can simulate anything. They can be anyone. You can be pretending to talk to fill in the blank. And yes, that may be a context to develop skills in a more applied way, but it’s not real experience. And I’m worried about that for two reasons. One, I think that you run the risk of young people becoming accustomed to sort of synthetic interaction. But two, because if you look at what employers are demanding of entry level work, it is experience, it’s not just skills. And Ryan Craig has written a lot about this, the experience gap. 

As AI actually chips away at entry level work, higher ed needs to step in and actually prepare students in new ways. But the piece of that I think we’re not paying attention to in the education conversation is that that actually requires true experiential learning, not just simulated skills, not sort of performance tasks. And at least from what I’m seeing, and Diane, I’m right where you are at the beginning of the episode, like, I’m just reading all of this stuff through my little doomer lens now. But I just think there’s so much more hype, partly because employers are willing to pay for, like, simulation experience stuff in the L and D market. There’s much more hype around simulation than around what it would take to scale true experiential learning, by which I mean learning skills in an applied context with other humans. Yeah, so that’s my number one.

But now that was like, not my real rant. My real rant is, I actually think, Michael, something you probably thought more about than I have.

Michael Horn: So yeah, let’s hear number two then.

Threat to Human Connection

Julia Freeland Fisher: Okay, so number two, what I think it could hurt is human connection. And I want to put this in the context of what I said initially around bots being anthropomorphized. And this is happening across many different pockets of both the consumer and ed tech market. I think we should be way more worried about the consumer applications. So we’re talking here about romantic companion apps like Replika, Character.AI, where people in general, and young people included, are being drawn into parasocial relationships with bots that emulate and can even exceed sort of human behaviors in meeting those users’ emotional needs. That is emerging against the backdrop of a long standing loneliness epidemic, which is a lagging indicator of our underinvestment in human connection. And inside of schools, it’s emerging against the backdrop of what I have observed over the past decade of my research: a lot of sentiment about relationships, but very little strategy, very few metrics guiding whether students are actually connected, very little budget dedicated to human connection, again, as a value proposition in its own right. And so it’s really, and Michael taught me this, right, Michael taught me disruptive innovation theory.

It is a classic disruption story in that loneliness is providing a foothold in the market for these bots to take hold. And there is very little stopping their upward march in the market. There is very little to hinder their growth, because we as a society have basically said go get less lonely on your own, like go solve this loneliness thing by yourself. Which is ironic at best and really dangerous at worst. So that’s my big concern. Again, I don’t think ed tech is going to be the straw that breaks the camel’s back. Like if we asked over the last 20 years what technology most affected young people’s lives, like, I’m sure some of our colleagues would like it to be Khan Academy, but I think many of us would agree, like, no, it was commercial tech.

Michael Horn: Yeah, sure, yeah. In particular. Well, so let’s stay on that, because I think you’ve raised two very interesting challenges, and the consumer piece. I mean, we also know from schools right now that frankly what plays out in the consumer space impacts how engaged teens are and so forth.

AI’s Impact on Human Learning

Michael Horn: In the school experience as well. So I think something that has been on both Diane’s and my minds around the AI conversation is what AI hurts, like what will still be relevant, if you will, in the future. Right. And how much is this about replacing outdated structures? I’m going to guess that you think real human relationships and social capital and the like will still be important in the future. I’m hoping you’re going to tell me that, but I guess I’d love you to play with this theme a little bit and get a little bit more nuanced. Like, so take the experiential learning piece, right? If we’re offering simulations as entry level to get someone information of, hey, is this something you want to explore more, as an entry point to then get something different, you know, is that a bad thing? Or like, where’s the slippery slope? And where is it really chipping away at something that’s fundamentally what makes us human and that we ought to really be concerned about handing over to AI?

Julia Freeland Fisher: Yeah, totally. I mean, I think let’s look at the upsides real quick, both on the experiential and the human connection front. Like on the experiential, these simulations are a way to scale practice, which we know, again, we use the shorthand of skills, but it’s actually we should always be talking about skills and practice. And so I don’t want to claim that like simulated practice is a bad thing. It’s a great cross training for like developing skills. I think I just worry that the market is so blunt that it treats that as the outcome of interest versus applied skills plus human connections. On the human connection front, you know, I’ve been looking at the navigation guidance space and there’s really two stories emerging. On the one hand, we have the potential to disrupt the social capital advantage that has perpetuated opportunity gaps by giving students from all sorts of backgrounds access to resources, information and guidance that otherwise often travels through inherited networks.

So that’s huge, right? Like, democratizing access to information and advice is not something that we should devalue in some sentimental name of like preserving human connection. The piece of it, the slippery slope though, right is that what I found in my research, at least based on our interviews with the supply side, is that the demand side really treats navigation and guidance as an information gap, not a connection gap. And we know that an estimated half of jobs and internships come through personal connections. So if you just use AI to solve the information gap piece, you’re not doing the last mile work of actually addressing opportunity gaps. You’re improving, you’re sort of. It’s like a rising tide lifts all boats, but the gaps are still going to be there if you don’t get the social connection piece right.

So that’s where I’m very wary of these like self help bots that, you know, tout democratizing access and opportunity but are actually sending the wrong message to young people about just how social the opportunity equation in America is.

Diane Tavenner: Yeah. Oh, I could not agree more. Literally. Okay, let’s, let’s take a little bit of a turn here, Julia, you probably can guess this if you don’t know it. One of the things I do for fun in my spare time is imagine the designs of new schools that I would be excited about teaching in or my child would be excited to go to. And so let’s go there for a minute. Like if you had a magic wand, you could design the school to look any way you wanted to, presumably using this new technology we have.

What parts of AI could you take advantage of and you know, what would you avoid because it’s not going to work well. And like what would that actually look like in a school?

Julia Freeland Fisher: Yeah, again, maybe I’ll stick with the relationship theme, partly because I’m like, Diane, you just tell me your answer and I’ll copy it, as, like, the school designer in this conversation. And there’s a lot of people in the field who I trust more to sort of think about the, like, whole school design. But when I think about how do we design a deeply connected school experience for young people in the age of AI, I think there are three kind of main things I’m looking at. Most of them are infrastructure, just to be clear. One is infrastructure to support high touch webs of support for each and every kid. This is very clear in the youth development literature: young people don’t just need one caring adult, even though for some reason that term, like, people grabbed onto it and it has stuck.

Young people need webs of support and they are most effective when the people in those webs are connected to one another. This is research from John Zaff and Shannon Varga at BU. It’s informed really great models like City Connects and Bar, but those are expensive to run and the data systems to actually make them highly responsive and even predictive of what a young person needs just like don’t really exist. So that’s number one, high touch webs of support. The second though is more diversified networks aligned with students’ interests. And that’s what we found in our own evaluations of particularly career connected learning efforts at the high school level that are trying to expand students’ options. Young people were least likely to report that they were connected to people who shared their interests. And so I think there’s a ton of opportunity there again to like use AI to detect young people’s interests to,

Conversations and Confidence in Networking

Julia Freeland Fisher: Michael, to your point, to do some front end exploration of, like, future possible selves. Diane, I know you’re thinking a ton about this, but then to build the middleware so that you are starting to have conversations with people who share those interests. And maybe the best unit to think about there is conversations, not relationships. These don’t have to be long lasting connections necessarily. But how is the high school experience a constant stream of conversations with other humans? And then lastly, you know, I do think there’s one place I’m interested in these self help bots, and I know I’m giving them that sort of derisive term, and it’s on purpose; I think we need to be wary of them. But something I am really interested in, something we see time and again when it comes to building and deepening and diversifying young people’s networks, is that confidence is really the moderating variable. You can teach young people communication skills. You can do these kind of surface level, here’s how to write a professional email things. But confidence makes or breaks whether they go out and mobilize networks on their own, whether they even start having new types of conversations with people they already know.

And I do think that’s, like, a little wedge in the system where these self help bots could make a difference. A couple providers are playing in that space now: Climb Together, Kindred, Backers. These are all sort of startups that I think are keying into, like, what if AI could de-risk help seeking or reaching out, which for an adolescent can be, like, so daunting. So those are a couple thoughts, with those pieces in the background, so that high school, and I’m thinking mostly of high school, is, like, an inherently networked experience. It’s not just if you are outgoing or wear your ambitions on your sleeve or do an independent study, but for every student.

Diane Tavenner: Yeah, that’s so fascinating. You know, quick just personal anecdote here, I’m stunned at how reluctant sort of the younger generation is to ever make a phone call. Literally they don’t call people. It’s not a thing. And you know, my son worked on the campaign, the presidential campaign, and he had a quota of 175 phone calls a day. And he actually thinks, and I agree with him, this is one of his greatest skill sets now, after month after month of doing that. Like, that ability to just talk to people is so missing in our world right now in that generation. So that really resonates with me.

Let’s do one more, if you’re okay, I’d love to zoom out because I know given the work that you do, you’re influencing people, how they’re thinking about policy and procedure and, you know, all of those things, like, what’s on your mind in this moment in time? What are you telling people that they should be looking at, thinking about, you know, wary of promoting in terms of policy, procedure, and, you know, you pick the level, whatever.

Julia Freeland Fisher: Yeah, well, I’ll riff on your last point about your son to answer that initially, Diane, which is something that came out in our research time and again. And this was talking to founders like yourself who are incredibly thoughtful about the design of their products and services. And time and again, and you were not one of them, Diane, because you are not pro chatbot, at least in what you’re currently building, but time and again, folks would bring up, and again, this is in the guidance advising space, you know, sometimes students would rather tell a chatbot something than a human. And it’s a safe space and it’s a place for sort of less, there’s less risk involved.

Exploring Student Reliance on Chatbots

Julia Freeland Fisher: And I came away from that research being like, is that a feature or a bug? Like, how are we internalizing the fact that students don’t want to talk to humans? And what is that a reflection of? And so I think that’s number one. Like, what I hope at this, like, sort of ecosystem level people start thinking about is like, if students want to be talking to chatbots like that, let’s actually interrogate that a little bit more. I think the second piece is around really starting to come up with language and some markers of what I’m calling pro social technology. So again, I don’t think AI is inevitably going to disrupt human connection. But I think if bots are not trained to nudge students into the real world offline, if bots are actually trained to keep students engaged, if consumer tech, right, is making money on engagement, that is all moving in an antisocial direction. And I just think we need more language around that because, like, I was in a, like, off the record chat with someone who recently left one of the big AI companies. And, you know, everyone’s worried about, like, national security and China and things that I know I should also be worried about while I’m, like, lying awake about AI companions.

But, you know, I said to him, like, what about the fact that these are being anthropomorphized and like, encroaching on what we sort of hold dear as human. He was like, yeah, everyone working in industry is, like, creeped out by that, but has no idea what to do about it. And that was revealing, right, that there’s a real prisoner’s dilemma here. That, like, there’s a creep factor. But it’s like bullet seven on slide four. Like, no one’s really as worried about it as I think we should be. So that’s number two. And then the last thing is really much more parent facing.

Like, I think whether you agree with the, like, moral panic, Jonathan Haidt stuff around cell phones over the past year, he’s tapped into parent anxiety, and I’m like, this is the right anxiety in some ways around screen time and addiction. But, like, we’re not even talking about what’s coming. And, you know, if you think social media was designed to appeal to our deeply wired need to connect, AI companions are that on steroids. And so I am not myself, like, a parent organizer. That’s, like, not who I am, though I wish that was, like, who I was born to be. But I’m hoping that there will be more conversations around parent organizing, around, just, like, not creating barriers to innovation. This is the tightrope we need to walk, right? Like, not shutting down the tech, but being super aware that, like, we have seen this movie before.

Michael Horn: Yeah.

Julia Freeland Fisher: So those are my big three.

Diane Tavenner: Well, I got carried away there, Michael. Any other questions you want to ask before I take.

Michael Horn: I think we asked the right questions. This has been fascinating.

Diane Tavenner: Okay, good. Yeah, I couldn’t help myself. I so appreciate your thoughts, Julia. And we’re going to ask you for one more. So we always invite our guests to join in our sort of end of show ritual, which is where we share what we’re reading, listening to, watching. You know, we try to do it outside of work, but we often, you know, regress back into work. But we’d love to hear what’s been up for you lately?

Julia Freeland Fisher: Yeah, so I just finished this, like, breathtakingly beautiful book called Nobody Will Tell You This But Me by Bess Kalb. It’s a memoir about her grandmother, and it’s done really beautifully. It’s like her grandmother is talking to her. Like, the form she chose is just stunning. And yeah, it was just intergenerational connection is, like, one of the most beautiful things. It was beautifully done. And I was actually thinking about it when I was.

And then I’ll stop talking, I promise. But I was listening to your guys’s last episode on AI and you were talking about Notebook LM. And like putting a chapter of a book into that and just how much texture of like the brilliance of what she did would be lost listening to these, like, TED Talk adjacent fake voices, like, riffing on it. And like, our kids deserve to live in nuance and to detect it. And like, how do we. Anyways, that book in particular is just such a beautiful, like only a human could have written it. And I know all sorts of people in Silicon Valley will debate me on that, but. Highly recommend.

Diane Tavenner: Yeah, for sure. I love that recommendation. I’m working on planning a dinner called Generations Over Dinner and so that might be a fun.

Julia Freeland Fisher: Oh, my gosh, check it out, it’s beautiful.

Diane Tavenner: So I might add that in there. I will. Okay, what’s up for me right now? Well, I’m gonna stick with my biases that I introduced at the top of the show and say that we just finished the second season of the Foundation, which is a series on. I forget one of those. I don’t know. It’s on something. Anyway, based on the writings of Isaac Asimov, you can tell how good I am at tv. Not very.

And yes, one of the big plot lines is all about AI. There’s no doubt about it. And so I’m seeing that literally everywhere. I will say it’s for me, having not read the books, unlike my kiddos, it’s a little bit hard. It’s a lot going on there. It’s hard to follow. I don’t remember everything. I was glad I had some guides, human, actual guides, sort of coaching me through it, and it came together for me at the end and felt worthwhile. So it’s certainly beautifully done and well acted and. And all of that. How about you, Michael?

Michael Horn: This may be my entree, Diane, into it. Because I’ve struggled with the books. Sal Khan has actually tried on a couple of occasions and I just cannot get into them. So I like that. I will also stay with biases, but on a totally different front. I feel like I’m going to stereotype myself here, or everyone listening is going to be like, yep, that’s Michael. So I just recently finished The Master: The Long Run and Beautiful Game of Roger Federer by Christopher Clarey. My tennis fandom, I think, continually comes out recently on this podcast. So beautifully organized book. Really enjoyed it. I will say, there are Rafael Nadal people and there are Roger Federer people. I’m a Nadal, Pete Sampras sort of vintage person. But I was really glad I read the book, gained a deeper appreciation of Federer, and frankly, actually picked up some tips that I wish I had known much earlier in my professional career, from practices that he would employ at the tournaments he would show up at, with everyone around the tournament, not actually the playing itself, which was not something I expected. So we can go offline about that later. But it’s all about relationships, it turns out.

Diane Tavenner: So you have me curious now. I wasn’t expecting to be curious afterwards.

Michael Horn: But it’s all about relationships. It comes back to Julia’s thesis. And with that, a thank you, Julia, for joining us and taking us through this fascinating conversation that we’re going to be reflecting on for a while, I know. And thank you to all of you, our listeners. And we’ll see you next time on Class Disrupted.

]]>
OpenAI’s Education Leader on AI’s ‘Massive Productivity Boost’ for Schools, Teachers /article/openais-education-leader-on-ais-massive-productivity-boost-for-schools-teachers/ Wed, 12 Mar 2025 16:30:00 +0000 /?post_type=article&p=1011387 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

In this episode of Class Disrupted, Michael and Diane chat with Siya Raj Purohit, who works on education initiatives at OpenAI, about the transformative potential of AI in education. Siya shares her career journey and how it led her to focus on bridging the gap between education and workforce development. Highlighting the immense value of AI tools like ChatGPT, particularly in university settings, she underscores its potential to personalize learning, reduce teacher burnout and enhance classroom interactions. Siya also addresses concerns around AI by emphasizing that, while AI can elevate thinking and productivity, the irreplaceable human element in teaching, such as mentorship and personal inspiration, remains vital.

Listen to the episode below. A full transcript follows.

Michael Horn: Hi there, Michael Horn here. What you are about to hear is a conversation that Diane and I recorded with Siya Raj Purohit from OpenAI as part of our series exploring the potential impact of AI on education from the good to the bad.

Now, here are two things that grabbed me about this episode.

First, I was struck by how much Siya uses ChatGPT in her daily workflow already. Yes, she works at OpenAI, but it has seemingly revolutionized her life. As she said, it’s a massive productivity tool. From using it as a tutor to helping her figure out what projects to prioritize, what to learn, this is just part of how she works now. 

Second, I was struck by how much she’s really on the ground level with universities, particularly professors, helping them figure out how to make it part of their workflow as well for teaching and learning, and how deep she is in specific use cases as a result, and how she sees this, frankly, as an important tool to free up teacher time, elevate student thinking, and the like.

As the conversation wrapped up, I’ve also been reflecting on a couple things.

First, what would it take for ChatGPT to be a massive productivity tool for me personally? And if that’s the framing, what does it mean this technology can and can’t be used for in education?

I was also struck by how OpenAI has decided to go deep on supporting those in college and beyond with their tool, but they haven’t yet created their own products or services for students who are under 18. Candidly, that’s not something I had really realized or reflected on before this conversation. So I’m excited to reflect a lot more with Diane after we talk to a number of people about this topic. But for now, we’d love to hear your thoughts about this conversation. Please share it with us over social media or through my website, michaelbhorn.com. And with that as prelude, I hope you enjoy this conversation on Class Disrupted.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. Good to see you.

Diane Tavenner: I confess I am really excited about today’s conversation because the first two we’ve had about AI have been super interesting and have been raising some big questions for me around the assumptions that I had coming into these conversations and AI and schools, and in particular how we organize schools themselves around new technologies. But it’s made me even more curious to talk to other people and get other perspectives. So I’m really, really looking forward to talking today.

Michael Horn: As am I, Diane. And I agree that the first two episodes have piqued my attention on different things, and I’m looking forward to digging in on more at some point. But whereas our last episode featured someone who is, I think it’s fair to say, largely skeptical about AI, I suspect we will get a very different take today, given our guest actually works on education at OpenAI, the company that of course developed and operates ChatGPT. Her name is Siya Raj Purohit, and she has been focused on supporting ed tech and workforce development in the startup community and at AWS over the past decade before she more recently joined OpenAI to work specifically on education. We’re going to get to hear about all that up front. But first, Siya, welcome.

Michael Horn: It is so good to have you.

Siya Raj Purohit: Thank you so much for having me.

Michael Horn: Yeah, you bet. So before we get into a series of questions starting to dissect AI and its impact, or not, on education, I would just love you to share with the audience a little bit about how you got so deep into AI around the question of education, perhaps specifically. And maybe you’ll also humor me as you do so, because I’m curious about OpenAI’s interest in all this. It seems like, more than maybe any product launch other than the iPad that I can remember, anyway, I can’t think of any other consumer tech product or service that has made education such a cornerstone of all of its announcements and sort of promise and potential for the new technology. So maybe you can tell us a little bit both about your journey, but also how OpenAI sees education.

Siya Raj Purohit: Absolutely. So I’ve spent my career at the intersection of education, technology and workforce development. This all started when I was 18. During college, I published a book about America’s job skills gap, talking about how American universities weren’t teaching the skills that students needed to land jobs in industry. This stemmed from my own experiences and the fear that I might not be able to land the jobs I aspired to, and that’s something I think a lot of young adults relate to. I’ve spent the 10 years since then trying to help bridge that gap. I worked at early-stage startups, venture capital funds and, most recently, Amazon, trying to close that gap between learning and opportunity and helping make economic mobility more possible for different types of learners.

Siya Raj Purohit: I joined OpenAI about 8 months ago to help build up our education vertical. As you all might remember, ChatGPT launched in November 2022 and suddenly became such a widely used product around the world. And what was interesting for OpenAI is that learning and teaching were among the most common reasons people were engaging with ChatGPT. So this year we launched a product called ChatGPT Edu, which is designed for universities and school districts to use an enterprise-grade version of ChatGPT. It brings all sorts of benefits, and there are all sorts of network effects that can exist on a campus once all students, faculty and staff have licenses.

Siya Raj Purohit: I will share a couple of examples of what that looks like. But a big part of my job is to help education leaders, educators and students start using AI more effectively on different types of campuses.

Michael Horn: Perfect. Perfect. Go ahead, Diane.

Diane Tavenner: Yeah, I mean, I think rightfully so, Michael and I are both operating under the assumption that you’re probably biased toward seeing AI as something that offers real opportunity to improve and transform education, and clearly your personal pathway and journey have led you to that work. So one of the things we’re interested in is having you make the best case for how AI will impact education in a positive way. We have a lot of things in our minds that we’ve thought about, but we’re really curious to have our thinking expanded and to hear that very best case from you.

ChatGPT: Revolutionizing Personalized Learning

Siya Raj Purohit: So I believe that for education as a sector, personalized learning was always the holy grail. We always said that if we achieved that, we would have accomplished a lot of education’s goals. And I think that with ChatGPT, it exists. I have a personalized tutor that I talk to every day. It knows my projects, the skills I’m developing, my aspirations. And it helps me become a better knowledge worker every day. In education, it’s making high-quality tutoring available to anyone with an Internet connection and supporting educators by automating a lot of the time-consuming jobs they do, letting them focus on what matters most to them, which is mentoring and inspiring students.

Diane Tavenner: That’s interesting. Let’s stick on that one for a moment, because, and we’ll get to this a little bit later, I wonder: does that mean that schools don’t actually end up changing very much, because the tutor and the automated assistant just allow students and teachers to do things the way they have been doing them, only better and more efficiently? I’m curious what you think about that.

Siya Raj Purohit: So right now, some of the most interesting examples we’re seeing are educators crediting ChatGPT with reducing teacher burnout, which, as you both know, is a big problem in America. Teachers who used to spend so much time on lesson planning, quiz grading, all the preparation for classroom activities, are able to outsource a lot of that work to ChatGPT. And so then they can focus on classroom interactions and engagement among peers in the classroom, which I think is much more valuable. As far as classroom dynamics go, I think it is a big complement in the way it brings personalized support and tutoring to individuals. But at the same time, I do think there’s still value in students being grouped with others their own age, because that’s how you develop a lot of social skills and learn how to interact. So I’m not of the mind that people should just do online school with ChatGPT, because I think that social component is becoming increasingly important.

Diane Tavenner: Got it. I’m thinking back to your 18-year-old self who wrote a book, which we could spend a lot of time talking about on its own; we’ve both written books, so we know what it takes, and we weren’t writing them at age 18, I don’t think. Your whole premise there was: I’m not learning the skills I’m going to need to be successful in the jobs and careers I want to have. How do you see AI, and what you’re doing with ChatGPT, contributing to making that no longer true, or at least improving it? What is the intersection with your personal passion?

From Personal Struggle to System Change

Siya Raj Purohit: The reason I wrote that book, and the reason I felt and still feel so passionately about it, is that at first I thought it was a Siya problem: Siya was not able to learn the engineering skills to land the job she wanted. Then I did enough research, speaking with some really accomplished individuals, to realize this was actually a system problem. The book was my attempt to capture the scale of the problem and also to prove to myself that this was not just something I was struggling with. And then the next part was: how can I free other people from that struggle? That’s when this journey to make economic mobility more accessible became my life’s passion. So with ChatGPT, one thing it does really phenomenally, which I hope students will take advantage of, is it helps elevate our thinking. A lot of times I share my thoughts on a project and ask: how can I elevate my thinking? How would the COO of a rocket ship company approach this? And it helps expand my thought process much more.

Siya Raj Purohit: And while doing that, it helps us feel less alone in a lot of the problems we encounter, because we can find the right examples, we can think bigger, we can find our own gaps. And I think these things are very powerful.

Diane Tavenner: Yeah. One of the things that’s interesting about talking to you, which I’m observing, is that when we ask other people to make the best case for AI, it’s a little bit detached from them. But what I hear from you is that this is literally what you’re doing, this is how you’re working every day. It sounds like you are a true believer. Am I missing anything, or am I hearing that right?

Siya Raj Purohit: I used to work really hard at AWS, but I accomplish about three times more every day at OpenAI just because I have AI now. I use it a lot to uplevel myself, but also to uplevel the project outcomes I deliver.

Diane Tavenner: Interesting. Awesome. Well, this next question might be more challenging for you.

Michael Horn: It’s a massive productivity tool for you. And I’m interested in your book; there’s this common theme, right? You did “me-search,” as we would say, not just research, for your book. And then you were doing the same thing with this tool, because you’re living it in terms of your massive productivity boost. But I’m curious about the flip side of some of these things, because there are a lot of skeptics, as you know, who say AI might not just fail to have these transformational impacts but might actually undermine certain things. So I’m curious where you come out on some of this stuff. I’ll just name two, and then you can go wherever you want with them.

Michael Horn: One: you said that in some ways it actually makes you feel like you have a companion alongside you to elevate your thinking. Some people say that could actually be dangerous, because maybe you’ll be in isolation, right, and not feel like you have to connect with others. And two: you talked about elevating thinking. I think the other big worry people have is that it will actually do the thinking for you, right? And we won’t do the difficult, effortful work of learning how to construct an argument, think critically and build knowledge so that we can analyze it, and so forth.

Michael Horn: And I’m just sort of curious. I kind of want you to steelman the argument and make the best skeptic’s case, but even more, I want you to start digging into these different use cases, the ones I just named and others, and talk us through how you think about them.

Human Connection in Education

Siya Raj Purohit: Yeah. So let’s first talk about the human connection piece. It’s really interesting, because a lot of educators come talk to me about their own doubts and concerns about the future of their profession. They ask: will I still be a teacher or educator, given that ChatGPT exists and is getting so good? And this question honestly surprises me, because the reason I remember the educators who influenced my journey is who they were, how they made me feel and who they told me I could become. Those are things ChatGPT doesn’t do, because ChatGPT and AI know about me only what I tell them, right? But great mentors can see things about me that I don’t even know about myself. I think that’s a really important distinction, and educators have a really unique opportunity in this era to double down on those things; they got into teaching to mentor and inspire and forge these connections.

Siya Raj Purohit: And now they have the opportunity to do more of that, because if they can help raise the potential or vision of more people, that’s the true power of education. I’m really excited about that. And I don’t think ChatGPT will replace human relationships. I think it’s just going to become a support system. For example, the way I use ChatGPT on my personal career front is that I tell it the things I might want to become: this is my 5-year goal, this is my 10-year goal; can you create a really robust roadmap for how I can get there? And it gives me really precise instructions: join these types of organizations, publish this type of content, think about taking on these types of projects at work. It’s really detailed.

Siya Raj Purohit: But what it misses is when my manager comes in and says: hey, this is your superpower, you should double down on this; forget those strategic projects. My manager hones in on what makes Siya, Siya. Right? And that’s what we need more people to do for other people.

Michael Horn: Super interesting. Talk about the other part of this. You mentioned elevating thinking, giving you a personal roadmap; it’s amazing. But again, the other fear I hear a lot is people saying, well, it’s actually going to cause people not to do the effortful work to actually learn, or even to get to the questions you’re able to ask of it. How do you think about that concern?

Siya Raj Purohit: I think educators need to show more of what an extraordinary outcome looks like. We need to showcase what amazing end products look like in different verticals and domains. The reason is that if you give a generic input to ChatGPT, you’ll get a very generic output, which a lot of students are realizing, because they just plug in their homework, get a very generic output and submit it, and that’s not what professors are looking for. So one of the most creative use cases I’ve seen is a professor at the Wharton School. He always had an essay as the final submission for his MBA class. And he asks: what is the value of an essay? The value of an essay is not necessarily in its output but in the conversational skills and critical thinking skills that go into getting to that output. So now he requires that students use ChatGPT.

Siya Raj Purohit: He figures they’re going to use it anyway, so he might as well make it a requirement. And now he measures the number of prompts they use to get to an essay they’re really satisfied with. Some students are so good at prompt engineering that it takes them two or three prompts to get a really good essay; some students go back 18 or 19 times. And he uses that as a measure of their ability to clearly articulate what they’re looking for, which he thinks is a really important skill. If he can teach students how to communicate the output they want to see, and to visualize a really extraordinary output, then they’ll be able to use AI as just a tool to get there.

Michael Horn: So maybe this is the last question I have in this section, because building off that, there’s almost an implied set of knowledge and awareness, right, that students need as a baseline to be able to have those expectations or hopes for outcomes. I’m also curious about what you said regarding the purpose of an essay. Implicit in all of that is that some of the artifacts we have historically used to gauge thinking processes, argumentation, et cetera, might change in the future. The example we’ve used a few times at this point is one our friend Bror Saxberg likes to cite: Aristotle worried deeply that the written word would mean people no longer memorized Homeric epic-length poems. And he was absolutely right.

Michael Horn: And I don’t think any of us regret that. So I’m curious about your take on how the way we do work, or the artifacts we think of as representing learning, might change in the future. Maybe some of these concerns won’t be that relevant, because we will show our knowledge and skill development through other means.

Siya Raj Purohit: So I think a lot of basic calculations, basic strategic work, all of that is going to become much less important. A lot of listeners will probably relate: our teachers told us we wouldn’t always have a calculator around, so we needed to learn basic math early. And now we always do have one. So the basic elements of strategic thinking, I think, are going to be less important than they used to be. What will become more important is critical thinking, but also emotional reasoning, the emotional intelligence to shape these outputs and make sure they match the type of persona you’re serving. Right now, in my current role, I do a lot of partnerships and BD work, and yes, I use AI to create the different documents and slides and assets that we share. But the way I communicate them to the end user, to inspire confidence or interest, is the unique ingredient here.

Siya Raj Purohit: And we need to be able to teach that. As our reasoning models get smarter and do more of the strategic work, that human element is what helps people distinguish their work and stand out.

Diane Tavenner: Interesting. I’m so curious, because I think you, maybe more than most people, have started to personally see some changes happening in schools because of AI, how it looks different and how it feels different, and I bet you can imagine those changes better than a lot of people can. One of the things I think we suffer from in this space is a lack of imagination, right? We all know what school looks like, and we have a really hard time breaking out and imagining something different. So can you just take us there? What could look different, feel different, for a teacher, for a student in a school? What are you seeing? What are you predicting?

AI Revolutionizing University Experience

Siya Raj Purohit: For this one, I’m going to focus more on the university setting, because that’s where we’re seeing the fastest changes happen. Our current thinking about what an AI-native university looks like is that every campus will have multiple AI touch points that help enhance the student, faculty and staff experience on campus. Basically, the idea is that we’re going to take the knowledge of the campus and make it conversational and more accessible to these users. So when students come to campus, they’re going to have orientation GPTs where they can ask questions like: where’s the best pizza place in town? How do I change my roommate? Any of these kinds of preterm questions. Then they’re going to come into classrooms where professors will have designed custom GPTs that have learned from the professor’s material and help answer questions. A professor at HBS, Jeffrey Bussgang, was telling me that most of his class uses custom GPTs between 12 a.m. and 3 a.m., when a human tutor is not available. And they can ask questions like, which CEOs handled layoffs well, and get exact examples to help them understand these kinds of concepts. So classroom conversations will become much more in-depth because of this.

Siya Raj Purohit: Students will also be able to do things like: I have a statistics exam coming up; can you give me some practice quiz questions at the same level my professor provides? And they can go back and forth with classroom content that way. They’ll go to career services, where they’ll be able to use the university’s proprietary data to practice interviewing with an AI McKinsey partner or McKinsey recruiter. All of these experiences will happen, in student clubs, career services and classrooms, and it’s going to happen seamlessly for students, so they’ll be able to navigate between them very easily as they try to grow as students and professionals.

Diane Tavenner: Super helpful. I want to dig a little bit more, and this might be surprising to you, but I actually think a number of people in education, maybe fewer of them among those who listen to our podcast, have literally never even used ChatGPT. They haven’t logged into it. So let’s spend just a moment helping them picture what it means to have a GPT. Is it on their phone? Is it on a computer? Is it on a kiosk? What does it literally look like if I’m a student engaging with it? And what makes it seamless?

Siya Raj Purohit: I saw a meme recently that I thought was really funny: in Harry Potter and the Chamber of Secrets, Harry starts writing in this diary, and it’s Tom Riddle responding on the other side. I really liked that example, because your first experience of ChatGPT feels similar to that. You just start writing. It’s a blank screen, and you have a conversation, and it converses back with you. It’s actually a very magical feeling, because you’re able to have conversations with a super intelligence that exists outside of our brains, which is very powerful. So I think it’s really important to just start having this conversation. You can use chat.com, you can use the mobile app, you can now start on WhatsApp or even call in.

Siya Raj Purohit: There’s a 1-800-ChatGPT number. So you can start with whichever of these mediums makes sense for you, and you can ask basic questions. What we see most people do is start with very basic questions and build up as they gain confidence in the back-and-forth interactions, and then they’re able to do more and more complicated jobs. How we think about transformation for organizations is that the very first step is at the individual level: individuals start writing better emails and doing better project planning or activity building. Then it shifts up to the department level; that’s when people start collaborating on different projects. One of the best examples I saw of this is a school district that told me it takes 40 people several weeks to assign which class goes into which room on campus.

Siya Raj Purohit: And now ChatGPT can do that in a few minutes. So it’s hugely empowering at the department level. And then finally you get to the organization-wide level, where you’ll have so many different AI touch points that the experience of navigating different levels of knowledge on campus becomes much easier.
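As an aside for technically minded readers: the class-to-room assignment Siya describes is, at heart, a matching problem. Below is a deliberately tiny sketch of one classical approach (greedy best-fit) with invented class and room data, just to give a feel for the underlying task; this is not how the district or ChatGPT actually solved it.

```python
# Toy sketch of class-to-room assignment via greedy best-fit.
# All class and room data below is invented for illustration.

def assign_rooms(class_sizes, room_capacities):
    """Place the largest classes first, each into the smallest
    still-free room that fits. Returns {class_name: room_name}."""
    free = dict(room_capacities)  # rooms not yet assigned
    assignment = {}
    for cls, size in sorted(class_sizes.items(), key=lambda kv: -kv[1]):
        candidates = [room for room, cap in free.items() if cap >= size]
        if not candidates:
            raise ValueError(f"No room fits {cls} ({size} students)")
        room = min(candidates, key=free.get)  # smallest fitting room
        assignment[cls] = room
        del free[room]
    return assignment

classes = {"Biology": 32, "Art": 18, "AP Calc": 25}
rooms = {"Room A": 20, "Room B": 30, "Room C": 35}
print(assign_rooms(classes, rooms))
# {'Biology': 'Room C', 'AP Calc': 'Room B', 'Art': 'Room A'}
```

Greedy best-fit is not optimal in general, and a real district schedule adds time slots, equipment and soft preferences; the appeal of a conversational AI layer is mostly in letting staff express those messy constraints in plain language.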

Diane Tavenner: I think the other thing you’re saying, which I’m not sure everyone will pick up unless we call it out, so I’m going to ask you to call it out, is the reason this is not going to be a generic GPT. The intersection with the campus is that you’re actually taking the data and the information and the expertise of the campus and, well, you’ll tell me the right words, but mixing it with the power of GPT to make it a customized experience. Did I get that right? What does that look like? What’s going on there?

Siya Raj Purohit: So basically there’s ChatGPT, which is accessible to everyone. Everyone will have slightly different experiences as they go through it, but it’s basically a knowledge base and a conversational platform. Custom GPTs are specific instances of ChatGPT that are essentially trained to do very specific tasks. So a professor can say: this is my six months of curriculum, these are all the case studies I provide; just reference these when answering all student questions. Now that super intelligence is focused. It doesn’t look at the web, it doesn’t research answers; it focuses on the six months of curriculum, goes very deep and helps students learn from that material more effectively.

Siya Raj Purohit: And you can use these custom GPT instances for any type of knowledge base. One of my favorite examples is a professor at the University of Maryland who told me they created a custom GPT of themselves. They uploaded about 24 or 25 pieces of research and other writing they’ve done, and now they talk to what they call Virtual Dave and get good ideas about what their next research project should be. So it’s like having a thought partner that is limited to the finite amount of information you share but is super intelligent itself.

Diane Tavenner: Interesting. And let’s stay here for one more quick beat, because you’re leading us into what the work looks like for the teacher or the professor. Just get a little more concrete: that professor literally copied and pasted his material into a GPT? Tell us a little bit about what his work is now. What’s he doing?

Siya Raj Purohit: Yeah, so it takes about 15 minutes to build a custom GPT. You upload PDFs or documents, so you don’t need to copy and paste, and you give it instructions. Again, this is where the assistant piece comes in: you explain to the custom GPT what its job is. In this case, the professor said: you are going to be my virtual thought partner as I think about my next research papers, my next book or my LinkedIn posts. I need you to sound the way I have throughout my career so far, so maintain the same tone and professionalism, but help me ideate on what the next iterations of these projects could look like, and give me very honest feedback.

Siya Raj Purohit: So those were the instructions he gave, and then the professor just has conversations with it. He’ll ask, could I go in this direction? And the custom GPT will say, no, that’s a little overdone; why don’t we look at this path? It just becomes a good research assistant for you.
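For readers curious how the custom GPT pattern works mechanically: the hosted builder Siya describes is point-and-click, but its core idea, a fixed instruction block grounded in a bounded set of reference documents, can be sketched as plain prompt assembly. Everything below (the document names, instruction text and helper function) is an invented illustration, not the professor’s actual setup; the real product also retrieves relevant passages from uploaded files rather than pasting whole documents into the prompt.

```python
# Minimal sketch of the "custom GPT" pattern: fixed instructions plus a
# bounded document corpus, assembled into one chat request. The documents
# and instructions here are invented placeholders.

def build_custom_gpt_messages(instructions, documents, user_question):
    """Build a chat message list that grounds the model in a finite
    set of reference documents, mimicking a custom GPT."""
    corpus = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in documents.items()
    )
    system_prompt = (
        f"{instructions}\n\n"
        "Answer ONLY from the reference material below. If the answer "
        "is not there, say you don't know.\n\n" + corpus
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

# A hypothetical "Virtual Dave" grounded in two invented papers.
docs = {
    "paper_1.txt": "Findings on spaced repetition in intro statistics...",
    "paper_2.txt": "A framework for AI-assisted formative feedback...",
}
messages = build_custom_gpt_messages(
    instructions="You are my virtual thought partner. Match my tone.",
    documents=docs,
    user_question="What should my next research project be?",
)
# The actual call (requires an API key) would then look like:
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o", messages=messages)
```

Stuffing documents inline like this only works while the corpus fits in the model’s context window; past that point you need retrieval over the files, which is what the hosted upload feature handles for you.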

Diane Tavenner: Awesome. Michael, here’s the jobs-to-be-done moment, I think.

Michael Horn: Seriously, right. We’re going to flag that for coming back to, Diane.

Diane Tavenner: For sure. So let’s now bring in, and I promise we will stop really soon, we’re getting to the end here, but I know that at OpenAI you think a lot about, talk a lot about and focus a lot on policy, and you’re engaging with the policy field. What are you learning about the intersection of education policy and policy around AI? What should we be looking at, looking for, watching out for and paying attention to, from your perspective, as educators, as people who are leading schools and school systems and universities? What do you see coming? What’s important? What should we be thinking about?

Siya Raj Purohit: So right now, universities fall into a couple of different groups in how they’re thinking about AI policy. Some have very established guidelines and clarity about where AI plays a role in the student journey. Some of the most forward-thinking education leaders I’m working with say: OK, AI is accessible, the cat is out of the bag, it’s going to happen, and now I need to think about how I change my curriculum at the university to use AI and help students prepare for the future. One of the best examples of this is at Harvard Business School. There’s a professor named Jake Cook who teaches a digital marketing course, and he’s mapped out what a digital marketer’s journey looks like now in the profession, the seven different jobs a digital marketer does and where AI enables each of those jobs. And he’s turned all of his projects,

AI Integration in Education Evolution

Siya Raj Purohit: So now you use AI to do competitive research, AI to create marketing assets and images, AI to help you with the copy and the website, all of these elements of what he thinks students will graduate into the workforce needing to know. Policies that enable this kind of forward-thinking nature are really helpful for students, because then they go into an enterprise that has ChatGPT Enterprise and are actually able to use it effectively. And then there are other institutions that are still trying to figure it out. They’re concerned about how it might change their existing assignments, how they can’t use the same syllabus they might have used in past years. A big part of our job right now is to showcase the examples of the forward-thinking institutions and help these other universities grow their own thought process. At the end of the day, universities are the ones best suited to make these decisions for their students, because they understand them best. And it’s so interesting, because when you speak with a state school, you realize they care a lot about navigation of tools and helping students find the right information on a campus of 50,000 or 60,000 students, whereas a small liberal arts school is asking: how can I help the student voice their opinion more effectively? All of these things have AI solutions, but it’s universities that need to figure out what they want to become and how AI can help with that.

Diane Tavenner: Interesting. I could ask 27 more questions, but I’m going to ask Michael to rein me in and either wrap up with something or...

Michael Horn: No, I think this is super helpful, Siya. I guess my last question is: you’re clearly spending a lot of time with colleges and universities. Are others on the OpenAI team spending similar amounts of time with K-12 institutions, or how do you think that’s going to evolve over time? Because it seems like colleges and universities, not all of them, as you just said, but many, were wrestling with this yesterday. Are you seeing similar movement among K-12 schools and districts or not? In which case, that also tells us something.

Siya Raj Purohit: We have a growing number of K-12 customers. But the big caveat is that we don’t have an under-18 product right now. So it’s not for students; it’s for teachers and staff members in K-12.

Michael Horn: Gotcha. OK, super helpful. All right, well, let’s maybe wrap up there. Something we love to do, Siya, before we let our guests go, is to ask what else you’re reading, watching or listening to outside of your day job. Maybe ChatGPT has recommended you reading lists or watching lists, but I’m just curious about one thing outside of work that you could point us to.

Siya Raj Purohit: It’s interesting you ask that, because I’ve actually been asking ChatGPT a lot for book recommendations. I think it’s very magical when you find the right book at the right stage of your life, and I want to see if ChatGPT can help make that happen more often. Mixed results so far.

Michael Horn: Okay.

Siya Raj Purohit: One book I’m reading right now that is super fascinating is called Say It Well. It’s written by one of President Obama’s former speechwriters, and he intertwines advice on how to be a good public speaker with stories from President Obama. It’s fascinating to read about the things President Obama slipped on in different talks, which make him much more human and accessible, but also the ways he thought about delivering great speeches and connecting with audiences around the world. So I’m finding the book really interesting so far.

Michael Horn: Very cool. What about you, Diane?

Diane Tavenner: Awesome, thanks for sharing. OK, well, I am going to turn to TV, because we’ve been talking so often that I’ve exhausted all the books I’m reading right now. I’m a little slow on this one, about a year behind, but we just watched the FX series Shogun. I must say, I was a little skeptical going in. I was a young kid when the book came out, and then the miniseries aired on TV, and I thought there was no possible way this could be done well, or without some real issues.

Diane Tavenner: And you all may know it’s won 18 Emmy Awards, the most ever for a single season. It’s truly extraordinary and really thought-provoking. Highly recommend.

Michael Horn: So I was gonna say, you could imagine it winning awards while someone who’d read the books felt it still didn’t quite deliver. But it delivered for you, it sounds like.

Diane Tavenner: Well. And I never read the books or watched the original series.

Michael Horn: Okay. Okay. Okay. So.

Diane Tavenner: But I just had this image in my head, and as I understand it, the current version is very different from the old ones. But it’s great.

Michael Horn: Very cool. It’s been tempting me for a while, so that is a good endorsement. For mine, I want to say the NFL playoffs or the Australian Open, but I feel like that gives away when we’re recording. Too late, I’ve given it away. But I’ll give you one other: I’ve really been enjoying, or rather enjoyed, because I finished it in a day, a book recommendation that one of my daughters gave me. She actually ordered me to read it.

Michael Horn: She had finished it. It’s called The Girl with the Secret Name, by Yael Zoldon, and I’ll apologize if I’ve mispronounced her name. It’s historical fiction set during the Spanish Inquisition, and it was fascinating. It was history I knew at a high level but not with any depth, literally zero, so my daughter was teaching me quite a bit. It was fun. So that’s mine.

Diane Tavenner: I love when that happens.

Michael Horn: Yeah, no, I know you’ve had that experience with Rhett giving you many recommendations. So now maybe this is the first of many for me. But let’s wrap up there. Siya, a huge thank you for joining us, for shedding light on this topic, for sharing frankly how you are using it in your daily life, both on your learning journey but also in your work itself on a day-to-day basis. So really appreciate it, and we hope you’ll keep staying in touch so we can stay ahead of the curve as well alongside you. But huge thank you. And for all of you tuning in, we will see you next time on Class Disrupted.

]]>
Class Disrupted Podcast: Ben Riley on Why AI Doesn’t Think Like Us /article/class-disrupted-podcast-ben-riley-on-why-ai-doesnt-think-like-us/ Fri, 21 Feb 2025 15:30:00 +0000 /?post_type=article&p=740289 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic – and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or .

Techno-optimists have high hopes for how AI will improve learning. But what’s the merit of the “bull case,” and what are the technology’s risks? To think through those questions, Michael and Diane sit down with Ben Riley of Cognitive Resonance, a “think and do” tank dedicated to improving decisions using cognitive science. They evaluate the cases made for AI, unpack its potential hazards, and discuss how schools can prepare for it.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hi there, I’m Diane, and what you’re about to hear is a conversation Michael and I recorded with our guest, Ben Riley. It’s part of our series exploring the potential impact of AI in education, where we’re interviewing optimists and skeptics.

Here are two things from the episode that I keep thinking about:

First, our conversations are starting to make me wonder if AI is going to disrupt the model of education we’ve had for so long, as I think Ben perhaps fears, or if it’s actually going to strengthen and reinforce our existing models of the schoolhouse with classrooms filled with a teacher and students.

The second thing that I was really thinking about, and that struck me, was that Ben’s one case for what could be beneficial about AI is something that’s directly related to his work and interest in understanding the brain, and how learning occurs. To be fair, there’s a theme emerging across all the conversations we’re having, where people see value in the thing that they value themselves. And perhaps that’s an artifact of the early stages, and who knows, but it’s making me curious.

And speaking of curious, a reflection I’m having after talking with Ben is about the process of change. Ben is a really well-reasoned, thoughtful skeptic of AI’s utility in education. He comes to his views at least partially from using AI. I would consider myself much more of an optimist, and yet I’m finding myself a little bit annoyed right now that every time I want to write an email or join a meeting or send a text or make a phone call, I’ve got AI pretty intrusively jumping in to try to help me. And it’s really got me thinking about the very human process of change, which is one of the many reasons why I’m really looking forward to sense-making conversations with Michael after all of these thought-provoking interviews.

In the interim, we’d both love to hear your thoughts and reflections. So please do share. But for now, I hope you enjoy this conversation on Class Disrupted.

Michael Horn: Hey, Diane. It is good to see you again.

Diane Tavenner: You too. And I’m really excited to be back. Coming off of our last conversation around AI and education, it’s making me even more excited about what we’re going to be learning in this series. And I think today will be no exception in really stretching our minds and our thinking about the possibilities, the limitations, the potential harms of AI and its intersection with education.

Michael Horn: Yeah, I think that’s right, Diane. And to help us think through these questions today, we’re bringing someone on the show that I think both of us have known for quite a long time. His name is Ben Riley. He previously founded Deans for Impact in, I believe, 2014. And Deans for Impact is a nonprofit that connects cognitive science to teacher training. And then Ben stepped aside a couple years ago and has most recently founded Cognitive Resonance, which is a think and do tank, in its words, and a consultancy whose focus is on this topic of AI and learning, which is perfect and makes Ben the perfect guest for us today. So, Ben, welcome.

Ben Riley: Thanks so much for having me. We’ll see if you still think I’m the perfect guest by the end of it, but I appreciate being invited to speak to both of you.

Ben Riley’s Journey to the Work

Michael Horn: Absolutely. Well, before we get into a series of questions that we’ve been asking our guests, we’d love you to share with the audience how you got into AI so deeply, specifically because I will confess, and I’ll give folks background: I’ve actually been an editor on a couple of the things that you’ve submitted to Education Next on AI, and I found them super intriguing. And then somehow I had no idea that you created this entire life for yourself around AI and education. And you have some language on this that I think is really interesting on the site, where you say the purpose is to influence how people think about Gen AI systems by actually using the lens of cognitive science. And you believe that will help make AI more intelligible, less mysterious, which will actually help influence what people do with it in the years to come. And then you write that you see it as a useful tool, but one with strengths and limitations that are predictable. And so we really have to understand those if we want to harness them, in essence. So how and why did you make this your focus?

Ben Riley: Yeah. Well. And thank you for clearly having read the website, cognitiveresonance.net, or the Substack, Build Cognitive Resonance. In many ways, the organization reflects my own personal journey, because several years ago I started to become aware that something was happening in the world of AI, and at the time it was called deep learning, and that was the phrase that was starting to emerge. And to be completely candid, my focus has always been, and in some ways still very much is, on how human cognition works. And AI, artificial intelligence, is considered one of the disciplines within cognitive science, along with psychology and neuroscience and linguistics and philosophy. It’s an interdisciplinary field. And for me, quite honestly, AI was sort of like this thing happening somewhere over there that I had maybe a loose eye on. And I got in touch with someone named Gary Marcus at the time, and we’ll come back to Gary in a second, and just said, hey, Gary, can you explain deep learning to me and what it is and what’s going on? And that, you know, sort of began that conversation. And then quite frankly, I just kind of squirreled away and didn’t think much about it. And then, like it did for all of us, ChatGPT came into our lives. And I was stunned. I was completely stunned when I first sat down with it and started using it. And what really irked me was that I didn’t understand it. You know, I was like, I don’t get how this is doing what it’s doing. So I am now going to try to figure out how it’s doing what it’s doing. And that is not easy. At least it wasn’t easy for me, and I don’t think it’s easy even now, for those who might have spent their entire lives on this, much less those of us who are coming in late in the game or just trying to make sense of this new technology in our lives. 
And what I was able to draw upon was both sort of the things that I do know and have learned over the last decade plus around human cognition and frankly draw on a lot of relationships I have with people who are in cognitive science broadly, and just start having a bunch of conversations, doing a bunch of reading, and really trying to, you know, build a mental model of what’s taking place with these tools and with large language models specifically. And when I finished all that, I thought, well, geez, it seems like, you know, that took a lot of work. Maybe it would be helpful to sort of try to pass this along and bring others into the conversation. So that’s really the thesis of Cognitive Resonance.

AI’s Educational Upside

Diane Tavenner: Ben, everything you just described is just so consistent with my experience with you over the years, and the conversations that we’ve had, and what my perception is of what you care about. And I’m so glad you brought it together in that way, because I’ll be honest, when I was like, wait, Ben is doing AI? Like, that didn’t totally land with me. And so what I’m hearing from you is like, well, I’m super curious for this conversation, because I’m not getting the vibe that you’re a total AI skeptic. I’m not getting the vibe that you’re a total cheerleader. I’m guessing we’re gonna have a really nuanced conversation here about this right now. So let’s start there. Like, let’s start with one pole and then see where we go. Can you make the argument for us of how AI is going to positively impact education? And I’m not saying it has to be your argument, but can you just stand up an argument for us based on what you’ve learned about how it could? Like, what’s the best case to be made for AI positively impacting education?

Ben Riley: Yeah. So this is what people are now calling steel manning, right? Like, can you steel man the argument that you may not agree with? I had a law school professor who taught me that the best way to write a good legal brief is to take the other side’s best argument, make it even better than they can make it, and then defeat it. And you all gave me this question in advance, and I’ve been thinking about it since you did, and I don’t know if I can make one best case. What I want to do is make three cases, which I think are the positive bull cases. So number one, one that I think should be familiar to both of you, because we’ve been having this debate for nearly a decade, is sort of personalized learning, a dream deferred, but now it can be real. When we said we were going to use big data analytics and use that to figure out how to teach kids exactly what they want to know, when they need to know it, like, what we meant was we needed large language models that could do that. And now, lo and behold, we have that tool. And as Dan Meyer likes to joke, it can harness the power of a thousand suns. It’s got all of the knowledge that’s ever been put into some sort of data form that can be scraped from the Internet or from other sources, and it’s not always disclosed what those sources are, but nonetheless, there’s a lot of data going into them, using these somewhat mysterious processes that they have of autoregression and backpropagation. And we can go as deep as you want in the weeds on some of those terms, but doing that, we can actually finally give kids an incredibly intelligent, incredibly patient, incredibly, some would even say, loving tutor. And we can do that at scale, we can probably do it cheaply, and boom, Benjamin Bloom’s dream, two sigma gains. It’s happening finally. There we go. All right, so that’s argument number one. Call that the personalized maximization argument. 
Argument number two, I think, is the sort of AI as a fundamental utility argument. And the argument here is something along the lines of, look, this is a big deal technologically, in the same way the Internet or a computer is a big deal technologically, and it’s one of those technologies that’s going to become ubiquitous in our society, the same way the computer or the Internet has become ubiquitous in our society. And we don’t even know all the many ways in which it’s going to be woven into the fabric of our existence. But that includes our education system. And so some benefits will accrue as a result of its many powers. Okay, so that’s the utility argument. The third argument would say something like this. It would say the process of education fundamentally is the process of trying to change mental states in kids. And frankly, it doesn’t have to be kids, but we’ll just talk about it as going from teachers to students.

Michael Horn: Sure.

Ben Riley: And there are some really big challenges with that, when you just distill it down to the act of trying to make a kid think about something. One of the challenges is that we cannot see inside their head. So the process of what’s taking place, cognition or not, is opaque to us, number one. And number two, experiments are really, really hard. They’re not impossible, but you can’t really do the sort of experiments that you can do in other realms of life. That’s partly for ethical reasons, but also, frankly, for scientific, technical reasons, because again, we can’t see what’s happening in the head. So even when you run an experiment, you’re getting approximations of what’s happening inside the head. Some would then say, well, now we have something that is kind of like a mind, and we can kind of, emphasis on kind of, see inside it. And we definitely can run experiments on it in a way that doesn’t implicate the same ethical concerns and others. That argument, and I’ll call it the cognitive argument, human and artificial, would say that we can use this tool to better help us understand ourselves. In some ways it might help us by being similar to what’s happening with us, but in other ways it might help us by being different and showing those differences. So those are the three arguments that I see.

Evaluating the Case for AI

Diane Tavenner: Yeah. Super interesting. Thank you for making those cases. Which, if any, of them do you actually believe? Now I’m curious about your opinion, and why.

Ben Riley: Yeah. So I have bad news for you. The first one, the personalized maximization dream, is going to fail for the same reason that, I would like to say, I predicted that personalization using big data analytics would fail. We could spend the entire podcast with me unpacking why that is. I’m not going to do that, so I’m going to limit it to just two arguments. Okay. The first would be that these tools fundamentally lack a theory of mind. That’s a term that cognitive scientists use for the capacity that we humans have to imagine the mental states of another. And these tools can’t do that. There’s some dispute in the literature, and researchers will say, well, if you run these sorts of tests, maybe they’re kind of capable of it. I’m not buying it. I don’t think it’s true. And there’s plenty of evidence on the other side as well saying that they just don’t have that capacity. Fundamentally, what they’re doing is making predictions about what text to produce. They’re not imagining a mental state of the user who’s inputting things into it. Number two, I would say, is that it obviously misses out on a huge part of the cultural aspect of why we have education institutions and the relationships that we form. And I think that the claim that students are going to want to engage and learn from digitized tutors, the likes of which Khan Academy and others are putting out, is woefully misguided and runs counter to literally thousands, if not hundreds of thousands, of years of human history. Okay, so number one, doomed. Number two is, to me, kind of a so what. Right? So I used the example of computers and the Internet as ubiquitous technologies that AI might join. So, like, let’s say that’s true. Let’s say that comes to pass. So what? Like, we have the Internet now, we have computers now. We’ve had both of these things for decades. They have not, I would argue, radically transformed education outcomes. 
The ways in which technologies like this become sort of utilities in our lives transform our day-to-day existence. But just because a technology is useful or relevant in some way or form does not mean, emphasis on does not mean, that it is somehow useful for education purposes and for improving cognitive ability. So, absent a theory as to the ways in which these tools are going to do that, whether or not they become, you know, ubiquitous background technologies is kind of a so what for me. Number three, the cognitive argument that this tool could be a useful example and non-example of human cognition, I have a great deal of sympathy for. I am very curious about it. There’s a lot that has changed just within linguistics, I would say, in the last several years, in terms of how we conceptualize what it is these tools are doing and what that says about how we think and deploy language for our own purposes. We may have just scratched the surface with that. The new models that are getting released, that are now quote unquote reasoning models, have a lot of similarities in their functionality to things in cognitive science like worked examples, and why those are useful in helping people learn. A worked example being something that sort of lays the steps out for a student: here, think about this, then think about this, then think about this. Well, it turns out that if you tell a large language model, do this, then do this, then do this, or just sort of program it to do that, their capabilities improve. So, you know, without sounding too much like I’m high on my own supply, this is the cognitive resonance enterprise. It’s sort of to say, okay, let’s put this in front of us, and instead of focusing so much on using it as a means to an end, let’s study it as an end unto itself, as an artificial mind, quote unquote, and see what we can learn from that.

Michael Horn: Super interesting, Ben, on that one. And I’m just thinking about an article I read literally this morning about where it falls short of mimicking, you know, the true neural networks, if you will, in our brain. So I’m pondering that one now. I guess, before we go to the outright skeptic take, if you will, I’m sort of curious about other things that you think AI won’t help with, in your view, beyond what you just listed in terms of, you know, this broad notion of personalizing learning or AI as utility, if you will, and the so what question. Like, are there other things that people are making claims around, where they think AI is really going to advance the ball here, and you’re like, I just don’t see that as a useful application for it?

Ben Riley: Well, you know, we launched into this conversation and we didn’t define what we’re talking about when we talk about AI. Right?

Michael Horn: There’s different streams of it. Yep.

Ben Riley: Yeah. And I think that when I’m talking about AI, and at least have been talking about it in this context thus far, I’m talking about generative AI, mostly large language models, but it includes any sort of version of generative AI that is, in essence, pulling a large amount of data together and then trying to make predictions based on that, using sort of an autoregressive process, or diffusion in the case of imagery, but trying to essentially aggregate what’s out there and, as a result of that aggregation, produce something that relates to it. If you’re talking about beyond that, like, who knows? I mean, there’s just so many different varied use cases. I was mentioning off air, but I’ll say now on air, there’s a great book, AI Snake Oil, written by a couple of academics at Princeton, which talks about predictive AI, which they put in a separate category from generative AI, and they’re very skeptical about any of those uses. My fundamental thing is about the big claim, right? And unbelievably, Sam Altman, the CEO of OpenAI, just a few days ago declared that, like, we’ve already figured out how to create artificial general intelligence. In fact, that’s like a solved problem. Now we’re on to superintelligence. I think people should be very, very skeptical of that claim. And there’s a lot of reasons why I would say that, which again, could eat up the entire podcast. But I’ll just give you one. What we now know is true, I think, from a scientific perspective about human thought is that it exists independent of language. Language is a tool that we use to communicate our thoughts. So if that’s true, and I would argue that in humans it is almost unassailably true. 
And I can give you the evidence for why I think we know that. Then it would be very strange if we could recreate all of the intelligence that humans possess simply by creating something like a large language model and using all of the power of all the Nvidia chips to harness what’s in that knowledge. Now, what people will say, and frankly, this is what all the billions and the leading thinkers on this are trying to do, is, okay, well, we can only go so far with language. How about we try to do it for other cognitive capacities? Can we do that? Can we create neuro-symbolic AI, as it’s called, that is as powerful as generative AI with large language models, and sort of start to piece this together in the same way that we may piece together various cognitive capacities in our own brain, and then loop that together and call it intelligence? To which I say, well, good luck. I mean, honestly, good luck. But there’s no reason to think that just because we’ve done it with large language models, we’re going to have the same sort of breakthroughs in the other spaces. So I don’t know if this fundamentally answers your question, Michael, but I would say that you can have progress in this one dimension, and it can actually be quite fascinating and interesting. But I would urge people to slow down in thinking that it just means that, you know, all of science and humanity and these huge questions around whether we will ever be able to fully emulate the human mind have suddenly been solved.

The Skeptical Take 

Diane Tavenner: Yeah. Wow. So fascinating. I have so many things coming to me right now, including my long journey and experience with people who make extraordinary, you know, claims, and then kind of make the work a little bit challenging for the rest of us who are actually doing it behind them. But let’s turn now. We’re kind of steering in that direction, but let’s go all the way in on the skeptical take. And I feel confident you’ve got some good material here for us. Like, what is AI going to hurt, specifically in education? Let’s start there. How’s it going to do harm?

Ben Riley: Yeah, well, I don’t think we should use the hypothetical or the future. Let’s talk about what it’s harming right now. The big danger right now is that it’s a tool of cognitive automation. Right? So what it does is fundamentally offer you an off-ramp from doing the sort of effortful thinking that we typically want students doing in order to build the knowledge that they will have in their head, that they can then use in the rest of their life. And this is so fundamentally misunderstood. It was misunderstood when Google was starting to become a thing and the Internet was becoming a thing. You would hear well-meaning people in education say, well, why do we need to teach it if you can Google it? Right? That was a thing that many people said, put up on slides. I used to stop and listen. And look, it makes sense if you don’t spend any time with cognitive science and you don’t spend any time thinking about how we think. And so I don’t want to throw those people too far under the bus, but just a little, because now we know. We know this. This is scientific, like, as established as anything else is established. Our ability to understand new ideas in the world comes from the existing knowledge that we have in our head. That is the bedrock principle of cognitive science, as I like to describe it. So suddenly we have this tool that says, you know, to the extent you need to express whether or not you have done this thinking, let me do that for you. Like, this exists in order to solve for that problem. And guess what? It is very much solving for that problem. Like, I think the most stunning fact that I have heard in the last year is that OpenAI says that the majority of its users are students. Okay, the majority. 
Now, I don’t know what the numerator and denominator is for that, and I’m talking to some folks trying to figure that out, but they have said it. At the OpenAI education conference, Lea Crusey, who some of you may know from her time at Coursera, got up and said, and I think they were happy about this, that their usage in the Philippines jumped 90% when the school year started. What are those kids using it for? Yeah, you know, what are those kids using it for? Like, we need to stop pretending that this isn’t a real issue. And for me, people sort of go, well, it’s plagiarism, you could always plagiarize. And it’s like, not exactly. And I think it actually both overstates and understates the case to talk about it in the context of plagiarism. Because again, the real issue here is that we will lose sight of what the education process is really about. And we already have, I think, too many students and too much of the system oriented around get the right answer, produce the output. And teachers make this mistake, unfortunately, too often. I think a lot of folks in the system make this mistake of, we just want to see the outcome, and we are not thinking about the process, because that’s really what matters: building that knowledge over time. And you’ve got now, I mean, I literally sometimes lose sleep over this, you’ve got a generation of students whose first experience of school was profoundly messed up because of the pandemic. And then right on top of that, we have now introduced this tool that can be used as a way of offloading effortful thinking. And I don’t think we have any idea what the consequences are going to be for that cohort of students, and the potentially, like, dramatic deficiencies in the quality of education that they will have been provided. That’s one big harm. There’s another. 
I mean, there’s many others, but there’s another that I’ll highlight here, too. I don’t know if either of you watched, I imagine you did, the introduction of ChatGPT’s multimodal system last year, which included the Khan family; Sal Khan and his son Imran were on there. I thought it was fascinating, and it speaks again to the number of users who are students that OpenAI chose Sal and his son to debut that major product. If you watch that video closely, and you should, you’ll see something, I think, that is worth paying attention to, which is that at multiple points, they interrupt the multimodal tutor that they’re talking to. And why not, right? It’s not a life form. It doesn’t have feelings. And we know that; it’s a robot. You know, to a degree. I don’t think we’ve really grappled with the implications of introducing something human-like into an education system, and then having students, who are still learning how to interact with other humans, that’s another part of education, and saying, you know what, it’s okay to behave basically however you want with this tool. Right? Like, the norms and the sort of, you know, ways in which schools inculcate values and inculcate how it is we relate to one another could be profoundly affected in ways that we haven’t even begun to imagine, except in the realm of science fiction. And I think it’s worth looking at science fiction and pointing to how we tell these stories. I don’t know if either of you watched HBO’s Westworld, particularly the first season, before the show went off the rails. But if you watch the, if you watch.

Diane Tavenner: Season one was a little intense, too.

Ben Riley: Season one was intense, but it was good. I thought it was good. But it was haunting. And one of the things that was haunting about it, for those who haven’t watched the show, is that it’s filled with cyborgs who are quasi-sentient, but, you know, people come, and they’re at an amusement park, and it’s like the Old West, and what can you do? You can kill them. You can kill them, and people do that, or worse.

Diane Tavenner: Right, yeah. Well, talk about the other bad thing.

Ben Riley: Right, right. I mean, but, you know, it’s sort of like the fact that we can now imagine that sort of thing being a future, where you could have things that are like humans, but not. The philosopher Daniel Dennett, who passed away, talked about the profound dangers of counterfeiting humanity. And I think that’s the sort of concern that is almost not even being discussed at any real level as we start to see this tool infect the education system.

AI’s Impact on How We Think

Michael Horn: I suspect that’s going to be something we visit a few times in this series. But you’ve just done a couple things there. One, you’ve, I think, more articulately answered, you know, how a lot of the bad behavior we’ve seen on social media could actually get exacerbated, not through deepfakes per se, but in terms of how we relate to one another. But you also answered another one of my questions that I’ve had, which is, I can’t remember a consumer technology where education has been the featured use case in almost every single demo repeatedly. And you may have just answered that as well. I’m curious about a different question, because I know you and Bror Saxberg have had sort of a back and forth about, you know, whether certain things that maybe it’s harming are going to be less relevant in the future. And he loves to cite the Aristotle story. Right? About how we’re not going to be memorizing Homeric-length poems anymore, and maybe that’s okay because it freed up working memory for other things. I’m sort of curious to get your reflection on that conversation at the moment, because I think Diane and I would strongly agree: replacing effortful thinking, thinking that you can just, you know, have people not grapple with knowledge and build mental models and things like that, that’s going to have a clearly detrimental impact. Are there things where you say, actually, it’s going to hurt this, but that may be less relevant because of how we accomplish work or something like that in the future? I don’t know your take on that.

Ben Riley: Yeah, I don’t think you’ll like my answer, but I’m going to give you my honest answer.

Michael Horn: I don’t know that I have an opinion. Like, I’m just curious.

Ben Riley: Yeah, I mean, I’m not a futurist, and I’ve made very few predictions ever in my life, at least professionally. One of the few that I did make was that I thought personalized learning was a bad idea in education. And I’d be curious, whether in this conversation or another, whether you two, reflecting back on that, would go, actually, you know, knowing what we know now, there were reasons to be skeptical of it. And, I’m annoyed at the turn he seems to have taken, because I used to like to quote Jeff Bezos. So, with all the caveats around, you know, Jeff Bezos and anybody right now from big tech, he has said something that I think is relevant. He’s asked all the time, you know, what’s going to change in the future and how to prepare for that. And he says that’s the wrong question. He says, you know, the thing that you should plan around is what’s not going to change. He’s like, when I started Amazon, I knew that people wanted stuff, they wanted variety, they wanted it cheap, and they wanted it fast. And as far as I could tell, that wasn’t going to change. Like, people weren’t going to say, I want to spend more, or I want it to take longer to get to me. And he said, once you have the things that won’t change, build around those. So I said it earlier, I’ll say it again. The thing that’s not going to change is that fundamentally, our cognitive architecture is the product of certainly hundreds of thousands, if not millions, of years of biological evolutionary processes. It is further, I think, the product of thousands, tens of thousands, of years of cultural evolution. We now have digital technologies that can affect that culture. So it does not mean, and I am not contending, that our cognitive architecture is some sort of immutable thing, far from it. 
But on the other hand, it would suggest that what we should do is, A, not plan around changes that we can’t possibly imagine, but B, maybe more importantly, and I would say this to both of you, not try to push for that future. You know, that we should fundamentally be small c, very small c, conservative about these things, because we don’t know. You know, I don’t know how much time the cognitive transitions back in Socrates and Aristotle’s time took, but they took place. My strong hunch is they happened not so much as the product of any deliberate choice, but through a sort of social conversation about the ways in which we should talk to one another. And it was clearly the case that writing things down proved to be valuable in many dimensions. It may prove to be the case that having this tool proves very valuable in many dimensions. But let time and experience sort that out rather than trying to predict it.

What Schools Can Do To Prepare

Diane Tavenner: Super helpful. I love where you’re taking us, which is into actual schools. So I appreciate that you’re like, let’s talk about what’s actually happening right now. And, you know, that is where my, like, heart and work always is, is in real schools. And so given what we are seeing, what you’re articulating about what’s actually happening right now in schools, and given that, well, I won’t say it as a given: what do schools need to do to mitigate the challenges you just described, to recognize this as a reality that is coming our way that maybe can’t be put back in the box? Now, I’m going to say that with a caveat, because I’m reading in the last day or two, too, that people are declaring, you know, that they’ve won the cell phone war and cell phones are going to be out of schools here pretty soon. So maybe you actually believe it’s possible to kind of put it back in the box in schools. But, like, what’s the impact on schools and what do they do literally right now, given what you’re saying is actually happening already?

Ben Riley: Yeah. Great questions, all of them. So, I mean, thank you for bringing up the cell phone example, because I cite that often, and even before there was this sort of wave, now at the international level, national level, state by state, district by district, to suddenly go, these tools of distraction aren’t great for the experience of going to school and concentrating on, hopefully, what the teacher is trying to impart through the act of teaching. So we can, it’s not easy, but we can take control of this. Nothing is inevitable. People always say, well, you can’t put it back in the box. You know, AI will exist, but how do we behave towards it? What ethics and norms do we try to impart around it? These are all choices we get to make. I like the phrase, and I’m borrowing this from someone named Josh Brake, who’s a professor at Harvey Mudd. He has a wonderful Substack called, I think, The Absent-Minded Professor. But he writes a lot about AI in education. And his phrase is: you have to engage with it, but that doesn’t mean integrate. Right? So, you know, Diane, you kept saying schools. I just think it’s teachers, educators, who need to engage with it. That can still mean that the answer after you engage with it is no, not for me, and also no, not for my students. I think that’s a perfectly acceptable thing to say. And look, maybe the students won’t follow it, but then you’ve done what you can, right? And that is all you can do. There’s a teacher out there who I’m desperately trying to get in touch with, but she made waves. Her name is Chanea Bond. She teaches here in Texas. She made waves on Twitter a while back by saying, look, I’ve just banned it for my kids because it’s not good for their thinking. People were like, what? And she was like, yeah, no, it’s not good. It’s interfering with their thinking. So I’ve banned it. So that’s a perfectly reasonable answer.
I also think that, you know, once you start to understand it at a basic level, and I’m not talking about getting a PhD in backpropagation and artificial neural networks, but just starting to understand it, you’ll start to understand why it’s actually quite untrustworthy and fallible, and that if you just think that everything it’s telling you is going to be accurate, you have another think coming. And one of the things in the workshops that I’ve led that I’ve been very satisfied by is when people come out on the other side of them, they’re like, yeah, okay, so this thing isn’t reasoning and it’s not this all-knowing oracle. And once you have that knowledge, once you’ve demystified it a bit, I think it gets a lot easier to grapple with it and make your own choices and your own decisions about how you want to do it. I will say that right now, in the education discourse, things are way out of balance between sort of the hype and enthusiasm versus the sort of, hey, pump the brakes, or at least, have you thought about this? If you’ll forgive me, it’s a free resource, but if you go to cognitiveresonance.net, we’ve put out a document called the Education Hazards of Generative AI, which literally just tries to, in very bite-size and hopefully accessible form, say, here are all the things you really need to think about, with some cautionary notes across a number of dimensions, whether you’re using it for tutoring, or material creation, or feedback on student work. There’s a lot of things that you need to be thinking about and aware of. One of the things that frustrates me is that I see a lot of enthusiasts, and this ranges from nonprofits to the companies that make these tools, sort of saying, well, teachers, fundamentally, it all falls to you. Like, if this thing is not factual or it hallucinates, it’s your job to fact-check it.
And it’s like, well, come on. A, that’s never going to happen, and B, it’s not fair, you know, not fair to put that on educators and just kind of wipe your hands clean. So I do think that’s something we’re still going to have to sort through as a society, at a social level, as well as within schools, as well as at the level of individual teachers. And ultimately, students are going to have to exercise some agency themselves about what choices they make around whether and how to use it at all.

What We’re Reading and Watching

Diane Tavenner: I’m so appreciative of this idea of agency here. And I do think that that’s certainly a place that I’ve always been, and it’s core to my values and beliefs as an educator: the importance of agency, not only for educators, but for young people themselves. And so, I love that this is such a rich conversation. We could go on and on and on, but I feel like maybe we leave it there. Real people, real teachers, real students, real agency. So grateful for everything that you brought up, so much to think about. And we’re gonna pester you for one last thought, which is: Michael and I have this ritual of, at the end of every episode, sharing what we’ve been reading, watching, listening to. We try to push ourselves to do it outside of our day jobs, and sometimes we seep back into the work because it’s so compelling. And so we want to invite you, if you have thoughts for us, to share them.

Ben Riley: So I told you I had a weird one for you here. I was just in New Orleans, and when I was in high school, for reasons that I won’t go into in detail here, my family got really into the Kennedy assassination, and the movie JFK by Oliver Stone came out. And I don’t know whether either of you have watched that film in a long time. It’s an incredible movie. It’s also filled with lies and untruths, and in that way it’s much like a large language model.

Michael Horn: I think we watched it in high school, but keep talking.

Ben Riley: Yeah. Yeah. Well, the reason I bring it up is because Lee Harvey Oswald lived in New Orleans in the summer of 1963. And that movie is based on the case that was brought by the New Orleans District Attorney, a guy named Jim Garrison. But there’s a bunch of real-life people who are in that movie or portrayed in that movie. And I just started to think about accidents of history, where all of a sudden you could be, you know, just a person of relative obscurity as far as anyone broadly paying attention to your life, and all of a sudden something happens and now you become this focus of study. And trust me when I tell you that every single person who had any connection with Lee Harvey Oswald in his life has become an object of study to people, and books have been written. And so, this is very bizarre, I know, but what I’m trying to do is think about and understand what it is like for people in that situation. What it is like to suddenly have your story told when you don’t have control of it anymore. You know, this isn’t supposed to be work related, but in a way I think it does connect back, because it goes back to the fact that these tools are taking a lot of human-created knowledge and sort of reappropriating it for their own ends, right? And we haven’t touched on that. I don’t think we need to now. But there are a lot of artists who feel a profound sense of loss because of what’s happening in our society today. That’s another thing I think worth thinking about.

Diane Tavenner: Wow, you’re right. I didn’t see that one coming. But it’s fascinating. Thank you for sharing it. I am unfortunately not going to stray from work today. I can’t help myself. Three of my very good friends have recently released a book called Extraordinary Learning for All. That’s Aylon Samouha, Jeff Wetzler and Jenee Henry Wood. And it’s really about the story of how they work closely with communities on the design of their schools in a really profound and inclusive way. And so I’m deep in that. I’ve been involved in that work for a long time and think it’s just a really powerful kind of inspiration slash how-to guide for how communities can really take agency over their schools and own them and figure out what they want and what matters and what they need, and how they design accordingly.

Michael Horn: I was gonna say, now Jeff has appeared twice in a row in our book recs, I think, on episodes or something like that. So love that. Diane, I’ll wrap up by saying I’m gonna go completely outside of, I think, the conversation today. But, Ben, you may say it actually relates as well, because I’ve been binging on season two of Shrinking. I loved season one, and season two, with the exception of a couple episodes in the middle, has been no exception, I think. So I’m really, really enjoying that so far. And I suppose you could connect that back to.

Ben Riley: What is Shrinking? I don’t know. I have to ask. I don’t know what it is.

Michael Horn: Okay, it’s basically about three therapists in a practice, and one who’s grappling with a deep personal tragedy. And Harrison Ford is outrageously hilarious. Yeah.

Diane Tavenner: So good. It’s so good. Okay, well, I’m gonna tag on to your, you know, out of work one and say yes, we love Shrinking as well.

Michael Horn: Perfect. Perfect. All right, well, we’ll leave it there. Ben, huge thanks for joining us. For all of you tuning in, huge thanks for listening. We look forward to your thoughts and comments off this conversation and continuing to learn together. Thank you so much as always, for joining us on Class Disrupted.

Class Disrupted: How AI is Democratizing Access to Expertise in Education (February 7, 2025)

Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing.

On this episode, John Bailey, who advises on AI and innovation at a number of organizations, including the American Enterprise Institute, Chan Zuckerberg Initiative, and more, joins Michael and Diane. They discuss AI鈥檚 potential to democratize access to expertise, weigh the costs and benefits of its efficiency-boosting applications, and consider how it will change skills required for the workforce of the future.

Listen to the episode below. A full transcript follows.

Michael Horn: Hi, everyone. Michael Horn here. What you’re about to hear is a conversation that Diane and I recorded with John Bailey as part of our series exploring the impact of AI on education, from the good to the bad. Here are two things that grabbed me about this episode that you’re about to hear. First, John made the point that this technology is really different from anything we’ve seen before. Specifically, how these large language models could, from the get-go, produce artifacts of work that would rival what an entry-level person in a variety of professions would create. And how we’re just scratching the surface of their capabilities. And most people don’t even realize that yet. So what could this mean for education? Second was John’s observation that just because we can do something faster doesn’t mean it’s being done better. Said differently, making the wrong work more efficient isn’t necessarily the right solution. Now, when we finished up the interview, I had several reflections. But one I wanted to share with you now is this. John’s big framing is that through AI, everyone now has access to an expert in virtually every field. So if the internet democratized access to information, the analogy essentially is AI is democratizing access to expertise. But I’m curious: if someone isn’t as skilled or knowledgeable or experienced as John, would they know what to do with or how to use such an expert at their fingertips? I’m excited to be in conversation with Diane for more sensemaking after we’ve talked with a number of people. And we’d love to hear your thoughts and reflections. So please, please share, whether over social media or by dropping us an email through my website at michaelbhorn.com. But for now, I hope you enjoy this conversation on Class Disrupted.

Diane Tavenner: This is Class Disrupted, season six, and the first. I know. Can you believe it? The first of our AI interviews. And we, in this case, we have the first best person, John Bailey, as our guest. Hey, Michael.

Michael Horn: Hey, Diane. Good to see you.

Diane Tavenner: It is always great to see you. There’s so many things we could talk about. But I’m really eager to jump in today to our topics. We’re going to go there right away. When we kicked off last season of this podcast, Class Disrupted, we said that one of the things that we really wanted to delve deeper into was our curiosity around AI. And it’s hard not to be curious about AI right now. In our most recent episode, we were pretty straightforward about kind of where each of us are at this point in time and our understanding and our perspectives. And we overviewed some of the kind of current debates that are taking place specifically around education and AI. And today we get to go deeper with someone who, I think you’ll agree with me, frankly, knows a lot more about AI than both of us.

Michael Horn: So I agree with that. I think it’s very fair. It’s one of the many reasons I’m excited for this conversation, because, as you said, it’s going to be the first of many where we bring folks on who, frankly, have very different views from each other around the impact of AI, and sometimes from ourselves as well. And so to start this, we’re welcoming back someone who’s been on the show, I think, twice before. So this is like a three-peat, if you will. He’s clearly one of our favorites. None other than John Bailey.

John Bailey: It’s so, so good to be on. Congrats. Six seasons. That’s huge.

Michael Horn: Yeah, we’re still kicking, right?

Diane Tavenner: Thank you. And just in case anyone has missed John previously, quick, quick background here. John’s served in many, many posts in the state and federal government around education and domestic policy more generally. He’s a fellow at AEI. He holds numerous posts supporting different foundations. I could go on and on and on, but what some people might not know, John, is that you originally entered education as an expert on technology and ed. And, you know, we’ll hear that expertise coming through because you have gone deep in the world of AI and how it’s going to impact education, and so, welcome. We are so excited to have you back.

John Bailey: Oh, my gosh, I’m so excited to be here, and I just admire both of you and I’ve learned so much from you. So it’s so good to be on the show today.

John鈥檚 Journey to Education AI Work

Diane Tavenner: Well, before we get into a series of questions we have for you, we’d love to just start with how, I guess, and maybe it’s a how/why, did you go so deep into AI specifically? We know you have a lot of experience with sort of frontier models, and maybe you can describe that term for us as well as we begin this conversation. But tell us how you jumped into the deep end and came to this conversation.

John Bailey: It’s such a good question. And my point of entry into this was interesting, because, as you mentioned, I’ve been involved in a lot of technology and policy intersections for a number of years, including in education. And I have to admit, I’ve been part of a lot of the hype of, like, we really think technology can personalize learning. And often that promise was just unmet. I think there was potential there, but it was really hard to actualize that potential. So I just want to admit up front that I was part of that cycle for a number of years. And then what happened was, when ChatGPT came out in December of 2022, everyone had sort of a ChatGPT moment. And for me, it wasn’t getting it to write a song or, you know, a rap song or a press release. It was: I was sitting next to someone on a venture team and I said, what is an email you would send to ask an associate to write a draft term sheet? And she gave me three sentences. I put it in ChatGPT and it spit back something that she said was a good first draft, good enough that she would actually run with it and edit it. And I was like, oh, this is very different. And then it just sort of started this process of seeing what else it could do. And it just became insanely fun to play with it. And then I was posting a lot of this on Twitter, and that caught the attention of some of the AI companies. And then they gave me early access. So I got to play with something called Code Interpreter from OpenAI, which was the ability to analyze spreadsheets and data files, and then did some work with Google beta testing Bard, and a handful of other things as well. And so I get to work with some of the companies now on safety and alignment testing, but also seeing a little bit of what’s over the horizon. Google NotebookLM I’ve been playing with for the better part of over a year, giving them some feedback on it.
So I think what’s happened, though, is that for me this feels very, very different from all the other technologies I’ve been exposed to, at least over the last 20 years. And that has caught my excitement. I’ve rearranged my entire work portfolio to spend more time on this, just because it’s rare to see something that I think is going to be so transformative. I don’t think that’s going to be immediate. I think that’s going to play out over years and over decades. But also, the pace at which this technology is improving and new capabilities are being introduced is something like I’ve never experienced. In just the last two weeks of December, you saw so many announcements from OpenAI and Google that you can’t even wrap your head around it. Better models that do deeper reasoning did not get a lot of attention. But OpenAI released vision understanding, so now you can use your camera. And so I walked around a farmers market and it analyzed all the produce and the meats and it was giving me recipes on the fly.

Diane Tavenner: Yeah, we were playing with it at the holiday dinner table. Just, like, what’s on the table and what are, you know. And I think the amazing thing was my 82-year-old mother-in-law, who was into it and wanted us to get it on her phone so she could go show her friends.

John Bailey: Oh yeah. I mean, it just feels different. It feels like something I want to dedicate a lot more time and attention to understanding, both the benefits and the lots of risks and challenges with it. And, you know, my mom’s using it, to your point. And the advanced voice is just great entertainment for kids too, with telling stories and whatnot. So anyway, that’s my journey into this space.

The Best Case Scenario for AI 

Michael Horn: My kids have started to leapfrog me by just taking their search inquiries right to ChatGPT themselves and then get frustrated with some of the answers. Let’s dive in then John, because you’re getting to see a lot of these large language models clearly up close. You’re getting to experiment and help advise these companies that are at the leading edge in many cases. And I think what we want to do in these conversations, frankly is have both the advocates for and skeptics of AI and you clearly have a little bit of both from what you just said, make the case for both sides. You know, how’s it going to impact positively, how’s it going to impact negatively? So we can start to unpack the contours and figure out where the puck’s really going in classrooms and schools. And so I’d love you to start with this, which is to make the argument for how AI is going to positively impact education first. So leave aside your concerns and skepticisms for a moment and in your mind, like what’s the bull case, if you will, for AI?

John Bailey: One is, and I’ve been wrestling with this a little bit: I think most of the other technologies up until this point have been about democratizing access to information. That’s everything from the printing press to the computer, with CDs and disks, to then the Internet. The Internet democratized access to Wikipedia, and you could get any information you want at your fingertips for almost no cost whatsoever. What I think is different about this technology is that it’s access to expertise, and it’s driving the cost of accessing expertise almost to zero. And the way to think about that is that with these general-purpose technologies, you can give them sort of a role, a persona, to adopt. So they could be a curriculum expert, they could be a lesson-planning expert, they could be a tutoring expert. And that’s all done using natural language, English language. And that unlocks this expertise that can take the vast amounts of information that’s in its training set, or whatever specific types of information you give it, and apply that expertise towards different, you know, Michael, in your case, jobs to be done. And so for the first time, teachers have experts available at their fingertips, just typing to them the way they would type to a consultant. Give me a lesson plan. Here’s an IEP of a student; help me develop three lessons that I can use for that student based on their learning challenges and the interests they care about. So I think it’s going to be an enormous productivity tool for teachers, potentially. I think it’s also going to be an amazing tutoring mechanism for a lot of students as well. Not just because they’ll be able to type to it, but, as we were just talking about, this advanced voice is very amazing in terms of the way it can be empathetic and encouraging and sort of prompting and pushing students, and it can analyze their voice.
And then this vision understanding which was just sort of introduced. Google’s had this in a studio kind of lab format for a couple months now, but I think that’s going to just unlock, imagine a student to be able to do a project and presentation and having an AI system give them feedback and encouragement. That is like science fiction two years ago. And it feels like it’s very much within the realm of possibility. Maybe not right now, but you see the building blocks for where that could actually be assembled into a pretty powerful set of tools for both teachers as well as students.
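The "persona" pattern Bailey describes maps onto the system/user message structure that today's chat-completion APIs expose: the system message assigns the role, the user message gives the task. As a purely illustrative sketch (the function name and prompt wording here are hypothetical, not from any specific product):

```python
# Minimal sketch of the persona pattern: assign the model a role via a
# system message, then hand it a task via a user message. The helper name
# and prompts are illustrative assumptions, not from any library.

def build_persona_messages(persona: str, task: str) -> list[dict]:
    """Return a chat-style message list usable with any chat-completion API."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": task},
    ]

messages = build_persona_messages(
    "an expert third-grade literacy curriculum designer",
    "Draft a lesson plan aligned to the science of reading.",
)
```

Whether the reply is any good still depends on the model; the point is only that the "expertise" is invoked in plain English, not code.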

Diane Tavenner: So John, when you step back from everything you just described of what’s possible among teachers and students. Well, you didn’t say schools, so I sort of mentally mapped a school on top of that concept. What part of that do you actually believe is going to be real, you know, for students and teachers, and why? And maybe, I think you’re probably going to put a timeline on it too, is my guess, based on what you’re saying.

John Bailey: Yeah. I mean, if other industries are a bit of a roadmap here, what you’re seeing in almost all the other sectors is that AI is getting deployed first in a lot of back-office functions. It’s in their IT shops, with coding. We don’t have that in education, but there are a lot of other back-office things where, again, the benefits can be pretty high and the risks of it being wrong are a little bit less than if it’s engaging in a tutoring lesson with a student and hallucinating. That’s high risk. Right. And so, you know, I suspect we’ll see a lot more sort of back-office work, improving parent communications. There’s already been decades of legacy of trying to use AI or computer-based scoring for assessments; I could imagine that. And then I think you’re going to see it roll out with a handful of tools for teachers. You’re seeing companies like that already, like Brisk Teaching. But also, all these capabilities we were just talking about with Google: the moment they flick a switch and roll that out over Google Classroom, that’s bringing AI into 60, 65% of classrooms and teachers around the country. And so I think what you’re going to see is a lot of teacher productivity tools, and then over the next, let me call it two to five years, a lot more student-facing things, as those technologies mature and as we build more robust products around them that have some of the safeguards that you want and need, that ensure accuracy and quality as well as safety for students. So I think there’ll be a lot of potential, but I think we’ll roll it out to students over a longer period of time. Meanwhile, the teacher productivity enhancements could be pretty huge immediately.

The Risks

Michael Horn: It’s interesting to think about building off that Google Classroom platform and just the access, right, that solves in terms of distribution that historical products have struggled with in schools, in gaining access to teachers and students. Let’s turn to the other side for a moment, John. Where is AI not going to help things with teachers, students, schools, learning? You know, what’s the place that people are dreaming up right now where AI is going to do something and you’re like, I just don’t buy it?

John Bailey: Oh, it’s interesting. Don’t buy, that’s different from where I was going to go. I worry a little bit that just because something is done faster doesn’t mean it’s done better. And I know all of the white papers are like, teachers should always be in the loop and teachers should always use their judgment. But teachers are also human. And I think one of the aspects of being human is that if you’re overworked and you’re tired, sometimes the fastest response is the one you go with, just because you’re trying to maximize your time. And that’s one of the reasons we see teachers using not-great instructional quality resources from Pinterest, you know, and from Teachers Pay Teachers and from some of these other websites. That is a problem that exists now that I worry AI will exacerbate. You know, if you’re a teacher and say, give me a lesson plan on literacy or reading, something for reading in the third grade, you have no idea if that’s based on the science of reading, if it’s aligned to your curriculum, if it’s adding coherence. And so there could be a sense that instead of really augmenting a teacher’s judgment, it could lessen it. In the same way, I think we worry about this with students: part of the way you learn is through struggle, and struggle comes with not writing a perfect first draft. It comes from the first draft, the second draft, and the iterations and revisions on top of it. And I worry that the moment students just have a button that can automatically improve a paper, a paragraph or a sentence, they’re atrophying a muscle that is really critically important going forward.
And then lastly, you know, we’re in the midst of this national discourse and debate right now about social media and phones, and whether that is leading to more social isolation, loneliness and mental health issues with young people. And inject into this these AI tools. As much as people say this will never happen, there’s the risk of an AI companion where you’re literally talking to an AI that’s empathetic and warm and adopting personas, and that’s going to be easier than the friction of talking to real-life people. And so I worry that there’s a scenario where these AI companions will start exacerbating the social disconnectedness and divide. And if you look at the headlines, we’ve already had a couple of cases with some tragic situations with kids who have died by suicide. I don’t think it was entirely because of the AI, but the AI was a contributing factor. And that’s something, if we want to get ahead of where we are in the social media debate now, that we should be thinking about researching and adding some guardrails to as well.

Diane Tavenner: John, I’m wondering, as you’re sharing these perspectives, what’s coming up for me is that the main structures of school and education are still in place. And I agree with you that the efficiency plays are the first places people go. Does AI risk reinforcing the existing model of school and education because it will make it more efficient? Like, if teachers were just barely holding on, and now we can give them this boost of efficiency, we can keep everything sort of the same, keep things the way that they were. And obviously I’m biased because, you know, I want to change it up, pull apart everything. But I’m curious how you think about that, especially as things unfold over time, and the easy places to start, and the asymmetry of adoption too. I mean, not every teacher in America has even logged into ChatGPT before, and then there are some that are power users at this point.

John Bailey: Yeah, I mean, a common theme for both of your work, including over the six years you’ve done the series, has been, you know, we have this system, and institutions within the system, that are remarkably resistant to change. And I think what we’ve seen is that technology doesn’t change a system. The systems have to change to accommodate and harness and leverage the benefits of whatever technology or new innovation has been introduced to them. So I’m a little skeptical there. I think you’re going to have capabilities of AI outpacing the institutions’ ability to harness them. It’s going to take time to figure out what that looks like and what that means going forward. I do come back, though, to this idea that it’s access to expertise, and I wonder if that mental model starts unlocking things as well. If you’re a school principal, all of a sudden you have a parent communication marketing expert just by asking it to be that persona and then giving it some tasks to do. And if you’re a teacher, it means all of a sudden every teacher in America can have a teaching assistant, a TA that is available to help on a variety of different tasks. And going back to Michael’s point about Google Classroom: imagine if you’re a teacher, you’re in Google Classroom and you have your TA that’s able to look at student folders and just answer questions you have. Like: I see John and Michael really struggling in algebra; what are some ways I could put them in a small group and give them an assignment that would resonate with both of their interests and help them scaffold into the next lesson? That was impossible to do before, and those three sentences could easily do that. And that’s why I think you’re going to see this idea of assistants very much entering not just the education narrative but also the broader corporate landscape as well.
Where you see that also, by the way, is a little bit in how OpenAI is thinking about pricing. There is an OpenAI tier most people probably didn’t see: the most robust, smartest model, the one that has the most reasoning, and they’re charging $200 a month for it. And most people are like, oh my gosh, I would never pay $200 a month for software. And that’s because it’s the wrong way to think about this, as software. The way to think about it is: would you easily spend that much on a consultant or a part-time staff person? So OpenAI is adopting almost a labor-market pricing strategy for the expertise that they’re giving you. And I think this is an amazing thing for schools to think about at a time of tight budgets: if you want to maximize your teachers, how can this fill different types of labor-market roles in the education system to enhance and support teachers and limited staff, given the budget tensions that are going to be coming in the next couple of years.

How AI Is Changing the Skills Landscape

Michael Horn: It’s interesting hearing you say that and draw that analogy, John, because actually Clay Christensen, before he passed away, one of the big interests he had was how do you scale coaching models in education, in health care, in lots of these very social realms, as the recipe, if you will, for sustained behavior change and success and things of that nature. He never got to really dig into it and write about it. But as I’m hearing you talk about this, it suggests that maybe a disruption of that might be afoot. I guess that’s the question I want to lean into, though, as well, which is you named a few things that this could hurt. The flip side of it being a great coach is that it might take away social interaction. Or you talked about essay writing, and that actually the learning is in the process of doing it, in revision, and pushing the easy button, if you will, jumps you ahead to the product but not necessarily the learning and the struggle behind it. I’m going to borrow an analogy from Bror Saxberg, former chief learning scientist, I think was his title, at CZI, the Chan Zuckerberg Initiative, and, you know, Kaplan and K12 and a variety of places. He talked a lot about how Aristotle, back in the day, worried that as the written word became a thing, people weren’t going to be able to memorize Homeric-length epic poems anymore. Aristotle was absolutely right. And I don’t know that we regret the fact that most of us…

John Bailey: Speak for yourself, Michael.

Michael Horn: Two of the three here could do it. But the question, I guess, would be: of these things that might be hurt, are they still going to matter in the future, or are there going to be other behaviors or things that are more relevant in the future? And how do you think about that substitution versus ease versus, frankly, I think when you talk about social interaction, that could be, forget about disruptive, that could be quite destructive.

John Bailey: Yeah, no, it’s a great question. It’s a good point. This is also an area where some of the best studies are happening in the labor market, looking at how AI is changing things. There was one study I was just reading today from Larry Summers and David Deming at Harvard, and one of the things they’re finding is that AI is chipping away at some of the entry-level jobs. It’s for the same reason that, you know, if I’m in Congress, now all of a sudden I don’t need an intern to just summarize legislation. I have something that could summarize it for me better in five seconds. And that actually hurts that intern, because they’re not developing the skills of reading legislation and analyzing and summarizing it. But it also means, and this is the other thing they talk about in labor-market terminology, that it’s really raising the skills for those entry-level jobs. Now you’re not expected to summarize; now you’re expected to do more and higher-level cognitive functions with it. That’s interesting. But it also means that’s going to place a huge strain on our education system. If you’re looking at just the results of TIMSS and NAEP and where kids are, they’re not at that higher cognitive level in terms of being able to ask those questions or exercise those capabilities. And so in many ways, I think if this is going to change the future of work and raise the level of what’s expected, that’s going to put more strain on our education system to make sure that we get kids who are capable of doing all those different things. I think about that with myself. There are many people who are Excel gurus, very good at analyzing data, and they run statistical tests and other analyses that are very important and that I would not be able to do.
And this was one of my first experiences with Code Interpreter from OpenAI: all of a sudden I had, again, an expert, a data analyst, who could do that for me. But what that meant is that for work I can no longer say, well, that’s not something I can do. Now I could do it, because I had an analyst that could help me with it. And in some ways, don’t tell my employers this, but now that could raise their expectations for me as well. But I have to get smart on the type of questions and the type of direction to give it in order to get the answers that I can use to synthesize into some sort of response. So anyway, I think this is going to be messy. It’s going to change the labor markets, but it feels like it’s lowering the floor, in many respects, on access to these higher cognitive tasks, which in turn raises expectations in a lot of different ways. And that’s very powerful. But it’s also, I think, probably a huge strain on our human capital systems. Did I answer your question?

Michael Horn: Yeah, I think it does. I think Diane has another set of questions, but before we go there, just one quick follow-up. It strikes me that you, knowing that you can ask those sorts of questions, and having a sense of the contours, right, of what the relevant questions are, and what knowledge base is out there that I could query in meaningful ways, and how to structure it: those are topics where I might not need to know all the mechanics of how to do it, but I need to know that they are questions that can be asked, and the relevant place to ask them. Am I on or off on that?

John Bailey: Yeah, I think that’s right. And again, this is where AI is amazing. You could give it a spreadsheet and say, what are 20 questions you can ask of this? Or, give me 20 insights that you glean from it, if you don’t know where to start. I’ve started, again, treating it (a lot of AI people will tell you not to do this) a little bit almost as if you’re talking to a person, and it does unlock a lot of capabilities. There are risks to doing that. But also I just find, sometimes, I’ll say: I want to do X, give me the prompt with which to do that. Or: I want to do Y, ask me all the questions you need to be able to answer that. And then it asks me 10 questions and spits back an answer. I just helped someone; she’s coming up with a name for her social impact advisory firm, and so we created a little GPT, an AI assistant, that was a brand advisor. It asked her questions the way a brand advisor would, and then it spit back 20 names, and one of them she’s going with. That’s incredible. But again, she had expertise that could ask questions and facilitate a conversation to unlock some of her thoughts and preferences and then spit back an answer.

The Interplay Between AI and Policy

Diane Tavenner: So much there, especially given my current focus on sort of 15- to 25-year-olds, who are going to be, and I think already are, intensely impacted by everything you’re talking about. I want to flip over to policy, and I want to come at it from a different angle. Most people think about AI policy around safety: what are we controlling, what are we protecting people from, et cetera. But let’s come from the other direction that you introduced a little bit ago, about the structure of education in schools. We’ve got some pretty interesting policy movement happening in education right now. We are seeing the rise of ESAs, or education savings accounts, which put money in the hands of families to spend where they want to spend it. We’re seeing a lot of states adopt portraits of a graduate, or graduate profiles, which are these more inclusive, holistic views of what someone should know and be able to do when they graduate, with an openness to how they actually get to that place and the different pathways. Talk to me about those things going on in the policy world and AI happening over here. Is that the kind of intersection where we could start seeing some structural differences, and again, a more user-centered, a student-centered, approach to education potentially? I’m curious about your thoughts there.

John Bailey: No, I think it could. I think it’s a yes, but. In some ways, the yes is, you know, there’s a whole class of ways of using AI that is about navigating really complex systems. And ESAs are one of those. One of the first GPTs I built on OpenAI to demo this: if you go to Arizona’s ESA, it’s two websites, a weird random Excel file of expenses, and then PDFs, like a 78-page PDF. And again, that was the best that team could do with limited resources and also with the limited technologies. I just put that into a GPT, and all of a sudden it was a bilingual, parent-friendly navigator. And if you asked, can I use funds for a Sony PlayStation? It didn’t say, no, you’re a terrible parent. It used warm, empathetic answers to say, no, you can’t, and here are the reasons why, but here’s what you can do. And it was all conversational. I think this friction of dealing with education systems and education policy could be immensely improved by using AI. Another example: I have a friend, she has kids in a school district, and they send these terrible absentee reports. And I say terrible; her daughter’s name is capitalized, so it’s like shouting. And then it reads like a hostage script: your daughter has missed six days of school. It is very important for her to go to school. We are here to help you. And then it does this weird bar chart at the bottom that’s meaningless. I just gave it to ChatGPT as an image and said, make this better, and give three questions a parent could ask their kid about why they might be absent. Amazing. And I did that in an Uber ride crossing the Key Bridge in Washington, D.C.
That’s an amazing set of powerful tools that can remove friction and help improve the system, to make it work better for parents and for kids, and also teachers and administrators too. The “but” on all this is: I think that’s going to be powerful, and it’s going to make policy easier, but until we create more flexible ways for teachers to teach and for students to learn and engage in different types of learning experiences, I just think we’re going to end up boxing in and limiting a lot of this technology’s capabilities. On the portraits of a graduate, I do think, again, an easy navigator here is to take student work and student interests and student grades and say: I’m not really sure where to go, help me. Ask me the 10 questions I need to answer to figure out whether I should pursue an apprenticeship program, a two-year degree or a four-year degree. It feels like we’re very close to being able to do something that may not be perfect but is much better than what the vast majority of students have access to right now. And if it helps them make a better decision in this process and pick a better path that’s based on their interests and their passions and their skills and their abilities, that’s great. We should do everything we can to help maximize that.

Diane Tavenner: Awesome. Maybe just to round things out: what policy do you think we should be keeping our eyes on as we focus on education in relation to AI? What should we be worried about? What should we be thinking about? What should we be paying attention to? I know you spend a lot of time thinking about policy.

John Bailey: I do, yeah. A little bit. A little bit of policy. So one is that Congress is going to move very slowly. Thankfully, though, in this day and age of such polarization in so much of our politics, there are two remarkable bipartisan roadmaps. One from the Senate, which Senator Young and Senator Schumer introduced. And then there was a House report, reintroduced right before break, that is also bipartisan and remarkably good. It’s 218 pages, and it covers a lot. I take great comfort in the fact that there’s a bipartisan, durable consensus. It’ll take time to enact, and that’s okay. At least we have a little bit of a pathway there. The thing for most of your listeners to really pay attention to is what’s happening at the state level. Just last year, we saw close to 400 bills introduced at the state level, everything from dealing with deepfakes to copyright issues to regulating the models themselves. The most famous one was in California. On the surface, those don’t look like they have anything to do with education, but they do. If that California bill had passed, it would have limited, in many respects, the types of models that would be available for teachers and for students. There’s a similar bill in Texas right now that’s being debated. And so I think we need to pay more attention to what’s going on at the state level, because that is going to either restrict or enable access to a bunch of these different types of tools and models. I think, Diane, you had mentioned in one of the previous questions that most people haven’t used ChatGPT, and I think that’s exactly right. But what’s going to start happening is that ChatGPT and Google Gemini are going to come to where people already live.
And you’re seeing that with ChatGPT being integrated into Apple’s iPhone. I think for the vast majority of people in the country, their first experience of ChatGPT is going to be through their iPhone. And for a whole other set, especially teachers, their first experience is going to be using one of the AI tools on Google. And that’s okay. But again, what’s going to either restrict or expand access to those different types of tools are these laws that are either restricting or adding more scrutiny to the models themselves. And what I will say there is, I don’t think anyone’s cracked the code on how best to regulate this. Whatever policymakers think they have, the models improve or do something that they didn’t think was possible. For the longest time, policymakers were like, we have to restrict these powerful models, and it’s based on compute above some astronomical number. And then in December 2024, China announces something called DeepSeek that is pretty much as good as GPT-4 and Llama 3, and they did it with far less computing power. So that would slip in underneath, like an exception. And I think policymakers are really wrestling with the best way of thinking about this and restricting it. So anyway, I would do more of that. You’re also going to see a lot of attention to AI literacy. I think these literacy efforts are great, but I have lived through “we need tech literacy,” “we need media literacy for everyone.” This is by no means to disrespect folks who are approaching this, but every new technology gets a literacy component attached to it, and it is not really clear we got much from tech literacy back in the 2000s or some of those other efforts. So maybe there’s a way to make sure that we get right what we got wrong before. But I don’t think it’s going to be quite the silver bullet that we need it to be.

Diane Tavenner: I think that’s right. This has been really such a good way to start. Michael, do you have anything else you want to…

Reading, Listening, or Watching

Michael Horn: No, let’s wrap. Thanks, John. This has been a really tremendous overview of a number of currents, and I know both of us have been making notes on the side as you’ve been talking; we’re going to want to dig in more. Maybe let’s pivot away from the topic we’ve been delving into as we wrap up here. John, what have you been reading, listening to, or watching outside of the AI education conversation? Hopefully AI is not dominating every single thing, although I won’t be surprised if you give us some movie or fiction or something like that with AI coursing through its veins. So what’s on your list?

John Bailey: Oh my gosh, what is on my list? Unfortunately, well, it’s not unfortunate, it’s just that I found myself waking up at like 5 a.m. about two years ago just thinking about this. So all of a sudden you’re reading books on, you know, intelligence and human expertise and human psychology, because you’re trying to understand intelligence and what makes something intelligent. Anyway, that’s nerdy stuff. The new Henry Kissinger book with Craig Mundie, Genesis, has also been good. I’ve been reading David Brooks’s book How to Know a Person, which I sort of missed the first time, when it came out. But I think it also has an AI play, because it’s about trying to get to know the essence of someone and the humanity of someone. So it’s been great reading through that in light of everything that’s happening. Then, what am I watching? I don’t know, some great series on Netflix: Lioness. Yeah, it’s good. Oh, and Landman too, which has also been quite good, coming out of Yellowstone.

Diane Tavenner: Cool.

John Bailey: I don’t know.

Michael Horn: That’s good. I’m impressed with your range. Diane. What’s on your list?

Diane Tavenner: Well, my new exciting project for 2025 is we are planning a trip to Greece. And as Michael knows, when we plan these trips, one of the big parts is spending like six months reading and learning and exploring before we go. And so I actually had a conversation with ChatGPT, like you have advised, John. When I flipped to just talking to it like a person, it changed everything. We structured a reading and listening list and all the things I’m going to do. So I have started in on that list that we co-constructed and built together, which is pretty awesome, with The Greeks by Roderick Beaton. And this is on the nonfiction side. I have fiction too, but this one rose to the top because I really pushed ChatGPT: I need you to find history that’s engaging, that’s going to keep my attention, and that gives me history the way that I want it, the sort of big swaths. And so far, so good.

Michael Horn: Very cool. Very cool.

John Bailey: One other thing: this summer, when I did a vacation, I actually created a GPT with the travel itinerary, the PDF, and everything else loaded into it. And it was awesome, because I could just ask it questions, and it would also speak phrases if I needed it to.

Michael Horn: Oh that’s next level, that’s very cool.

John Bailey: It was just kind of a fun little thing. But I’ll share the prompt with you later. Yeah, yeah.

Michael Horn: Because we used it for itinerary planning for all the different interests in our group, but did not jump to that level. John, that’s a good one. Mine has just been a book, so I feel boring compared to you both. I polished off Israel: A Simple Guide to the Most Misunderstood Country on Earth by Noa Tishby, which has remained in my mind quite heavily, and so I highly recommend it. I thought it was quite good, quite humorous and quite engaging, the way she wrote about it. So I enjoyed it, and that’s what I’ll recommend for folks. I think we’ll wrap there. John, huge thanks for joining us and kicking this off with a lot to chew on. For all of you listening, write in with your questions, thoughts, and things on your mind coming out of this conversation. We’ll look forward to the next one on Class Disrupted.

Class Disrupted Tackles AI: Exploring Its Application for Teaching and Learning (Jan. 23, 2025)

Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system in the aftermath of the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing to the podcast.

At the outset of an AI-themed season, our hosts take stock of their prior assumptions, hopes, and concerns about the technology鈥檚 applications in education. They dive into where they see it being used to make adjustments to the current educational model and envision how it could be applied to revolutionize learning. 

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. Good to see you.

Diane Tavenner: You too. I spent the weekend on a tradition I think we have talked about before, which is we hold a holiday party every year for what are now old friends. I think this is our 27th annual, if you…

Michael Horn: 27th annual. Wow.

Diane Tavenner: Yeah. And it just makes me appreciate longevity. I have such gratitude for deep, long relationships that have been built over time. And yeah, it’s really a good fill-me-up for the moment.

Diane and Michael’s AI Priors

Michael Horn: Yeah. That’s amazing. We’re obviously recording this as we approach the holiday season, if people can’t figure that out from that intro. That’s an amazing place to start, and the gratitude you have around that, Diane. Very, very neat. Let’s lay out what we’re doing for folks today, as we get into a little series on the topic that we talked about in the first episode back, which is artificial intelligence. You want to lay it out, Diane, what we’re thinking?

Diane Tavenner: Yeah, I think as folks know, we are now following our curiosity, and we’ve been doing that for a while. And, you know, I don’t think either of us is 100% all in on AI, like huge evangelists, but I do think that we’re, at a minimum, cautiously optimistic about the possibilities. And so we’re just curious about it, and we find ourselves kind of talking about it and asking about it. So we are going to do a little exploration. We’ve got some ideas of the format and whatnot; we’re not exactly sure how long it will last. But we thought we’d kick off today with where we’re starting that exploration. And personally, and I think you’re with me, I hope I end in a different place, quite frankly: a place where I’ve learned some stuff and talked to interesting people and, you know, maybe think a little bit differently. Hopefully smarter than I am now. But today we wanted to just lay a foundation of where we’re coming from based on what we know so far.

Michael Horn: Yeah, love that intro. And what I would add is it’s obviously a hot topic in education; everyone knows that. But what’s also interesting to me has been how OpenAI and Google and Meta, whenever they talk about AI, seem to show education use cases as a major part of all their launches. I’m sure that’s not quite right, but it’s more than I can remember on most product launches, outside of maybe the iPad, over the last 20 years. So education and AI together, Diane, are obviously getting a lot of attention. And I find myself, and we’ll talk about this in a moment, starting out with a strong prior, then reading a couple of things and completely flipping my opinion, then holding that opinion, talking to someone, and changing again. So I find myself pretty malleable still. But like you, it feels like this is a technology enabler that could be really, really intriguing, and we need to explore more.

Diane Tavenner: I agree with you, and I think we’ll do that in the way that we always do. We’re always looking for sort of third-way solutions that are very practical and very pragmatic and very connected to what’s actually happening with young people in schools and with teachers. So yeah, people might be thinking, oh my gosh, more AI. But I hope that we’re going to bring a pragmatic approach to it that is actually useful for people.

The Teacher- v. Student-Centered Approach to AI

Michael Horn: Yeah, no, perfect. And I will tell you, when you visited my class and showed off Futre with the students, they noted that you never mentioned AI in your talk. So we are certainly not leading with AI, but we think it’s intriguing. Against that, let me start out with the opening framing I’d love to propose to you, and then you can react to how that framing sits. It’s one that I’m stealing from a friend of ours in the venture world, and it’s something I’m noticing in the field, though I don’t know that everyone launching AI education products sees it this way. What I’m seeing is that, on the one hand, there are a lot of AI startups and AI approaches that are very teacher-centered or teacher-facing as their entrée, if you will, into the classroom or learning environments. And on the other hand, you have the student-centered or student-facing applications. This might be Khanmigo or, you know, some of those things that we’ve seen out there. So there seems to me to be a bit of a dichotomy in terms of the startup space, the investors’ approach, different entrepreneurial approaches, even, frankly, teacher and school designer and educator approaches, on how they’re thinking about AI. Is it first a teacher tool or a student-facing tool? What’s your take on that framing before we dig into each side of this?

Diane Tavenner: Yeah, so I think that, sadly, and I will say sadly for me, most people are thinking about it from a teacher-facing approach. I sent you an article the other day, an op-ed, where I was very frustrated with the premise, which was this exact premise. And as you know, I fundamentally disagree with that approach. Do I think we should be using AI as a tool to support teachers and to support students? Yes, but I think we’re just retreading the old way of thinking about schools. And let me just start, Michael, and say that in this conversation I’m almost exclusively going to be talking about high schools, because I think elementary schools are quite different. So if we get into elementary school, let’s note that specifically. For me, I’m very much thinking about high school, maybe middle school as well, but older students. And I just think that the world is going in a direction, for many, many reasons, where students need to be owning and driving their own education. Of course, this is not unique to me; I’ve been doing this for a couple of decades at this point. This is my fundamental belief. There’s such a downside to not focusing on how we enable students to own and drive their own learning. And AI is potentially such a game changer in this direction; it can help us do things we’ve wanted to do and couldn’t. And we’re completely missing the mark when our total focus is on the teacher and how this is a tool that we’re going to build for teachers.

Michael Horn: No, that’s helpful. And right out of the gates, we know where you stand. I’m going to try to make the argument for the teacher-facing side up front, and then you can throw cold water on me afterwards if you’d like. Maybe the way I will try it, though, is to explain why I think the phenomenon is happening. So, number one, on the question of why AI is better for teachers than students, you’ll say false dichotomy, but let’s go with it. Part of the approach is: look, AI hallucinates all the time. It makes mistakes. And these tools are better in the hands of experts, rather than novices, who can catch those mistakes and correct them. So, number one, there’s sort of a risk-aversion approach to it. And I think, you know, we could probably contradict this in certain ways, but AI-as-risk-to-students is maybe driving some of this. Let me quickly add that I do think there is something to it. When AI is used by Amazon to get you to buy something that maybe you’ve looked at online, if they move the dial 0.001%, that is serious dollars to their bottom line, and if they alienate you, they don’t really care. Right? Whereas in education, I think the argument would be that if we actually mislead a student, or tell them a narrative about themselves that is going to mislead them, we could do deep damage to their self-efficacy, their sense of self, and even their agency right down the line. And so that’s the reason, maybe, for a teacher-facing perspective. Let me pause there before I go to the other two reasons, because that’s a meaty set of claims that I think you should engage with first.

Diane Tavenner: Well, I think you’re uncovering one of the challenges that we have in education right now, which is just a real lack of imagination about what is going to be possible because of AI. A lot of people at this point have logged on to, you know, ChatGPT or one of the others, and they’ve typed something in that little box maybe a few times, and they’ve had, or they’ve read articles about, these hallucinations. And in many people’s minds, that is what, quote, AI is. Maybe some people now are playing with NotebookLM from Google. And, you know, one of the really amazing things is that it will produce a podcast.

Michael Horn: It’s pretty remarkable. A little over-engineered, but pretty remarkable.

Diane Tavenner: And at first it’s pretty mind-blowing, and then when you actually start to listen, yes, it’s getting all the right words. I did it the other day. Someone loaded a chapter from my book into it, and it produced a 22-minute podcast, a man and a woman talking. And they were like, is this the conversation you and Michael would have had about your book? There are pieces of it, yes, but it’s not us, it’s not human. It’s literally reading what is on the page and then sort of bringing it to life. But there’s no thinking, nuance, or dynamism there. Anyway, my point is that that product is one where they’ve taken what’s underneath it, the AI, and turned it into something that is more user-facing. So my assumption is that we’ve only just begun to see what’s possible. And this idea of, is that chat box going to revolutionize learning for kids? No, it’s not. But that’s not what we’re talking about here. We’re talking about AI as a tool embedded in really well-designed experiences, in my view, products that will move the needle. And I think you minimize or eliminate those risks you’re talking about when you build it in thoughtfully. Certainly that’s what we’re doing on our team.

The Sustaining Innovations of Teacher-Centered AI

Michael Horn: Well, and you're leading with the product design as opposed to the AI, which is also a difference. Right. So let me say the second reason I think that we're seeing a lot of teacher facing things, which is that frankly, relative to today's classroom, it does not require redesigning today's classroom. It is, in our language, a sustaining innovation relative to today's classroom. And let's be honest, that's where the market is, right? As in, if you're looking for volume, it is not in. I mean, yes, microschools are taking off, but they're still a small percentage of the education landscape, certainly in the US, even more so in the world. And so teacher facing, sort of as a gateway in teacher directed instruction, is where the market is. And frankly, most VCs, when they enter a market, they have a five to seven year time frame to get out of the investment. They're looking for unicorns within that. And that pushes you to where the dollars are, not where perhaps the puck should be going. So I think that's the other thing driving this dichotomy, if you will.

Diane Tavenner: I think you're right, and I think this is the problem we consistently have every time we think something might help us transform schools, right? It gets the gravitational pull back into the box, the box of the school, the box of the classroom, the box of the teacher, the box of the course. The pull back to that is so strong, and every time people try to unbundle it or disrupt it, we've had many conversations about that. You know, there's a few outliers who sort of make it outside of that orbit, that gravitational pull. I spent a lot of time with a lot of them last week, and it's very exciting and inspiring with them, and then you get back to the mass market, which is all still living inside that box. And so, I mean, this is where I can't help but get hopeful and excited, but I'm a little bit worried that I'm going to get my heart broken yet again about the potential changes that we might see. Because that's what I want to have happen. I actually want to break apart that model and change this to be a learning experience, at least at the high school level, where kids are truly driving their own learning and learning in ways that are much more customized and personalized for them. And let me just be super clear, that does not mean they're learning alone. This is still very group oriented. It's actually quite real world oriented, and that's what I think is possible.

Michael Horn: But let me just modify this before we jump to where you're going, which is, I think you'd agree, there are plenty of low hanging fruit use cases to improve teacher practice with AI, whether it's better lesson plans, more diverse ways of reaching different student needs, et cetera, and frankly assessment, probably to get more real time information on where your students are or how they're doing, or to simplify a teacher's workflow.

Diane Tavenner: Yeah, and that might be the middle ground here. You know, the other day I was sitting there thinking through how we can disaggregate the role of the teacher and what AI enables. This could still exist in the box model of class, you know, but I do think it would be an improvement. So if we think about all the hats a teacher wears, which are impossible. The job's impossible. As you know, I know, everybody knows, the feedback we're getting from the market is that it's impossible, because no one wants to do the job anymore. People will get mad that I said that. That's not true, some people want to do the job, but here's the job. So one, and these are the main things that teachers think about and people think about: you're planning your curriculum and your lessons and you're delivering them. There's a real argument that, with AI, a single individual teacher should never be planning their own curriculum again, ever, ever, ever, ever. It's not time well spent. It will never be as good as what can be done, you know, more globally and with all the learning science and expertise that we have. And even, quite frankly, the delivery, a lot of it is not personalized and individualized. So that could very much be, you know, AI driven, technology driven. Then there's feedback and assessment.

Diane Tavenner: So I'm giving you feedback. I know you've been grading some papers and assessing work. And again, we've done this for a decade plus at Summit, where we took most of that off teachers' plates. And the technology is absolutely capable of doing this now, and better, quite frankly, than humans. And so if we take that, that's the core of what most people think the teacher's job is. So what's left? And it's the very human things, the things that I would argue matter. It's the coaching and the mentoring of students. It's helping them to figure out how they're going to sequence their learning pathway and what comes next and what happens when they get stuck and they need actual help in where they're going. And so there's that coaching, that sequencing, that facilitating, certainly a role in facilitating group learning and really cool real life learning experiences and giving real time feedback in those settings. There's the social-emotional part of this. How do you become a person who understands a morning routine and actually, you know, knows how to manage your emotions and your relationships and all of those sorts of things? And then of course there's custodial care. That's for younger students, but to some extent older ones too. None of those things can be disrupted by AI, I do not believe. And for a lot of teachers, it's the stuff that brings them real joy, and it is really impactful for young people. So I think maybe the in between is a disaggregating of that role of the teacher.

Diane Tavenner: If I saw products moving in that direction, I’d be happy.

Michael Horn: So that would be, for all those listening, the sustaining path we would like to see happen. And here's the disruptive argument. Let's get student facing here. Right. And student centered. And I think that is the argument. Right.

Disruptive Applications for AI

Michael Horn: Is that, yes, tutoring today, or student facing tools. And I'll get into the second use case in a second, but the more narrow ones first. I've seen all sorts of critiques, and I think we'll get some of them on the podcast as we go through this series, around how it's, yes, maybe procedural knowledge, but not the in depth, really emotion driven learning pieces and other things of that nature, and it makes errors, and all the rest. The Wall Street Journal has done a few hit jobs on things and so forth. But if you get into non-consumption, where the alternative is nothing at all, I don't have access to a tutor if I'm, you know, however many millions of kids in the United States, let alone the world, it's clearly better than the alternative of nothing at all. There are some very interesting places to launch student facing applications in that area, number one. And number two, I think the argument for it, and I think this is where you also might be going, is I see it as lifting the quality of work of what students are doing, because AI now is a tool of work, just like we use it in our workplace to better…so that they can create more in depth, more exciting things, spending a little bit less time on some of the mechanics and more time on the depth, if you will, of learning and evidence in the product or performance or whatever they're creating. And I'm being somewhat vague because I'm trying to capture all the possible use cases one could imagine, depending on what subject or grade you're imagining as we're talking. But I think that's the other area: the sense of agency for kids, where they can actually build professional level skills as they're exploring.

Diane Tavenner: Yeah.

Michael Horn: Has just taken a big step up. And it’s not to say that they don’t have to learn the knowledge and application and skills. They do. But then using AI to level up all of that is pretty interesting, I think. Go ahead.

Diane Tavenner: Yeah. Let's talk for a minute, Michael, about the broader context, what's going on in the world, and why this matters, because I think it's so relevant here. So number one, it's unequivocal. I just spent last week with people from the left and the right and everywhere in between. And there is incredible agreement around the idea that school needs to be real world. It needs to be preparing young people, especially high school, for the real world, for jobs, for employment. It can't be this sort of theoretical thing anymore. And it's not preparing them for that. It's not preparing them with what I would just call basic professional skills. How do you actually be an employee? How do you show up on time? How do you have agency? How do you do these things? And if it's not incorporating AI and how you use that in real work, it's not going to be preparing them for the future that they're walking into. And so I think that is happening. There's a real move towards CTE, career and technical education. As we know, we've got ESAs coming on in multiple states, where people are going to be able to more pick and choose their education. So you've got a lot of stuff happening where people are like, I don't want to sit and get anymore, and it's not going to serve me to just sit there and take direction and then wait for you to tell me the next direction. Do I think it's a chat bot that's tutoring me? I think that's super rudimentary. I think there's so much better stuff coming, but you've got to start somewhere. And what's more important to me is that it's breaking this dynamic of 25 or 30 kids in a classroom waiting on instruction, and the slowness of it and the exactitude of it. And so it's moving us towards the world we're going to.

Michael Horn: The waiting on is a particularly interesting place I'd love to pick up on, because I see the same thing, no surprise perhaps, in that we've been pretty clear that more connection to the real world is important. I also think the ability to codify and create standard curriculum, given the fast changing nature of real work, is going to be a fool's errand. And so that pushes you more and more in the direction you've been around: experiential. Right. And so as a result of that, it's going to be doing, which means you can't be waiting on the one scarce resource in the classroom to come over to you, unlock the lesson plan for you, and then you're allowed to go learn. That's not going to be the model that engages or works, frankly. And so it's everything from knowledge acquisition to exploration, we'll put that as one big bucket, right, to actually engaging with, connecting with, and then doing the work. And AI is a really interesting portal, I think, into all three of those, I guess is the way I would think about it. Whether it's up leveling the quality of resources on the front end, or frankly up leveling the level of work that young people are able to do and their showcasing of that and problem solving to real professionals and getting real feedback on it.

Diane Tavenner: Well, and I think this is so critical, Michael, because one of the things we're seeing in the job market for, you know, post high school graduates, post college graduates, is that AI is sort of competing with or removing those kinds of entry level roles. So no one wants to hire someone who doesn't have experience anymore. Almost every job says you need a couple years of experience. So how are young people supposed to get experience? Well, their education is going to have to incorporate experience, if you will. It has to be experiential. It has to be a place where they're going to be able to make the case that, even though I just finished learning in some degree or credentialing program, I have experience. And so the act of learning and getting feedback and producing products has to be much more real world, experiential, if they're going to have any hope of getting a job.

Preparing Students for Success in the Workplace

Michael Horn: This isn't an AI point, but I love that we're getting away from credential based hiring. Skills based hiring is the phrase, but I find it overly technocratic, in the sense that we're going to be able to define skills in narrow ways. And the word you just used, experience, to me is the way to think about it: experience based hiring. And the way you show you can do and step into a job is through the experiences you've had where you've done that. And if we believe, let's go to the equity question, if we believe we want to give everyone a chance at that, school has got to be providing it, because otherwise my kids are going to be able to find those opportunities, but a lot aren't. And so I think schools are going to need to be doing this. A long time ago there was a professor, I think UCLA, but maybe USC, and you can correct me if I'm wrong, and he wrote about how schools of experience were the right way to hire people, to see, have you led teams, have you, et cetera, as opposed to, gee, Diane built a great product by herself, now we want her to be a manager. Two totally different sets of skills underlying that. Forget about naming the skills, let's just look at the experiences themselves and say, how'd you do, what lessons did you learn, what would you do next time, how does it equate to the culture here? Those are the sorts of questions and conversations I'd love us to be having in hiring. And so what you just said, I think, makes a lot of sense for the schools to be stepping into. And the challenge, right, if we stay with our teacher centered model, is that to ask teachers to sort of be the font of all of that is crazy.

Diane Tavenner: It's not even possible by definition. You know, the student is waiting on instruction. It's not preparing them to be productive. No, no. And it's not even neutral anymore. It's negative, because the incentive system in our traditional schools is actually counterproductive. It's creating and incentivizing behaviors that are counterproductive when you're going into the real world. And I would argue the learning isn't even that great, so it's not like they're coming out as masters of math. And on top of it, let's just go back in time for a moment. We talk a lot about the industrial model and wanting to move away from industrial model schools. But I think one of the things that people forget is that the design of the industrial model school was actually preparing people for…

Michael Horn: An industrial model economy.

Diane Tavenner: The factory. Like you showed up to a bell, you moved on a bell, you, you produced work at a rate and a speed in a way that was going to be very real world, very comparable to what you were going into. And our schools look nothing like workplaces at all anymore. And they’re not preparing young people for all of those pieces of it.

Michael Horn: No, we're going off AI, but I'm going to make one more point, and then maybe we'll bring it back. Which is, I actually think when people think about higher education, and they're like, oh, the rarefied university experience that I want all 55 million people for some reason to have, that is, you know, Harvard or whatever else, they forget that that is also a vocational experience, which is to train people for the professoriate down the line, to prepare them to get master's and PhD degrees. So for what it's worth, I think it all has echoes, to your point, of the world into which you were trying to prepare individuals. And that world has changed totally.

Diane Tavenner: And I think that AI becomes a tool here, because a lot of the objections to changing this model, if you will, the box model, the classroom, the school building, et cetera, have been, how can we actually do that? We have, you know, 55 million people in the schooling system. There's a huge operational component. How do you actually do that? And I do think that AI brings us a new set of tools in a very meaningful way if we deploy them properly. Not just properly, if we deploy them in, you know, interesting, smart, visionary ways that make that more and more possible.

Michael Horn: Maybe let's leave the conversation there, and I'll put out one more question that I'm really interested to get answered from folks. We're going to talk to people who are skeptics, who are optimists, probably in between. And the questions that I'm curious about are many. But one of them, on what you just said, is, how does it maybe make certain things that we thought were important historically less so in the future? Like, yes, it might ruin the ability to do X, Y and Z, because AI is going to do it, but that thing is no longer that important as an artifact anymore. And where is that not true? Where is it going to ruin that thing that actually still really is important? How do we think about that? I'm curious to hear what people think.

Diane Tavenner: I’m curious about that too. I also will just put an invitation out, Michael. You know, we’re gonna do this for a little bit and we’ve got certainly a list of people we want to talk to and a list of questions. But we always love hearing from listeners. And so if there are people or questions you are curious about, send them our way and we’ll do the best we can.

Michael Horn: Perfect. Ok so let’s leave it there. Lots of, lots of energy around where we want to see AI solve problems. And let’s flip, as we always do, to what we’re reading, listening, watching, basically anything outside of our day jobs. What’s on your list, Diane?

Diane Tavenner: Well, I have one that’s legitimately outside of my day job, which is The Diplomat Season 2. And it’s just, that’s so bad.

Michael Horn: I need to get on that train. For a variety of reasons, I know I would like it, so I will try to catch up to you. Mine is not actually divorced from my work. I'm reading student papers non-stop right now. I've tried a couple of AI tools that grade, Diane, and I will tell you, they don't work, because they don't understand the context and the content knowledge. They're very good at telling me, you know, grammatical things. I am not an English teacher. I literally don't care, as long as it communicates the point in this particular case. So as a result, it's still manual labor for me for the next few days.

Diane Tavenner: Well, I’m so sorry. I hope that ends.

Michael Horn: No, all good. Some of them are great ideas, and I’ll hold to those. But for all of you listening, thanks as always. We look forward to hearing from you. Look forward to hearing your thoughts about who we ought to talk to, what we ought to learn from. We’re excited to do this and do a deep dive on AI with all of you. Thanks so much. And we’ll see you next time on Class Disrupted.

Podcast: Class Disrupted Hosts Return with Job Moves & Insights for K-12 Schools (January 8, 2025)

As Diane Tavenner and Michael Horn launch a sixth season of Class Disrupted, they talk through Michael's newest book, the bestseller Job Moves: 9 Steps for Making Progress in Your Career, and map its implications back to K-12 schools and students through Diane's startup.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane.

Diane Tavenner: It is really good to be back for a sixth season, and it’s especially good because I’m recording in person with you.

Michael Horn: We always treasure those times when we actually get to be face to face, not in front of the video cameras. And that’s another perk because, Diane, the other people in the audience listening to us, they don’t have to see us. That’s a good thing.

Diane Tavenner: So some folks have been wondering if we were coming back for the sixth season, given how late it is in the school year. We wanted to just be transparent about what’s going on. And so two things. First, we’ve always wanted to come back. We get tons of feedback and questions and suggestions that are totally awesome and interesting, and it just suggests to us that there’s a lot of people across the education spectrum who are listening and getting some value. So we want to be here. And our roles have been changing and our schedules have been changing, and they’re a little bit less predictable. And so there are just some logistics we’ve run into.

But here we are. And excited to be here.

Michael Horn: Yes, indeed. We're figuring it out. You have taken a new job over the last couple years, which will be directly applicable to today's episode, obviously. I teach in the fall, and I've learned that teaching while putting out a new book, the one we're going to talk about, is just really busy. I don't know if I would have repeated that if I had the chance, but now we're here in person, we're doing this, so let's talk.

I would say our curiosity is really leading us to focus on some books that not just me, but other folks have coming out. And also artificial intelligence. AI is everywhere in the education landscape. People are asking a lot of big questions. Frankly, we are asking a lot of big questions. There are a lot of hot, polarized takes, and I think that’s never been our thing, Diane.

Diane Tavenner: No, I mean, you know we’ve always talked about our original motivation. And the reason we started this podcast is because we wanted to think about third way solutions. We wanted to think about bringing groups together for really meaningful, purposeful engagement and education and solutions – things that would move us forward. And so, you know, I think that combined with the fact that we both share a very strong belief that schools are in desperate need of redesign, I think maybe growing more desperate every day.

Michael Horn: Maybe that is our hot take. But we're different from the poles in that way…

Diane Tavenner: Right, right. And, you know, they have to change in order to meet the needs of today's learners as well as our society. And when the pandemic began, we both thought it would finally be the catalyst that we needed to accelerate the change. We thought we could maybe contribute to that by highlighting what learning could look like and elevating sort of third way perspectives and solutions for how to get there. I don't think either of us is satisfied with the progress that's been made since we started this several years ago. But we remain optimists, and determined, and so here we are.

Michael Horn: Those are good words to use, I think, to describe how we both feel. It's also one of the reasons AI is so interesting to us, because we do think it's an important tool. And I'll say that again: it's a tool, not the ends. So do not expect us to talk about AI for AI's sake, but rather in the context of learning and the learning environments we create. And I'll say in all candor, as we start this season, I don't think anyone really knows its ultimate impact. Anyone who says they do is lying, because it's a lot of theorizing right now. I remain incredibly curious about it. I would say I'm very malleable still in my thinking.

Michael Horn: Maybe "Malcolm Gladwellian," if you will, if that's a phrase. I don't know if I'm going to reverse everything I've ever thought, but I'm really curious about where it will and won't have impact, what's positive and negative about that, and the timeframe over which it will happen, and I want to learn a lot about that. I will also say I think it's important to note, because it's on the minds of a lot of folks: we are obviously, statement of the obvious, about to have a change in federal leadership and the President and the administration. And there are a lot of questions, of course, about how that might influence or impact what's happening in education as well.

Diane Tavenner: Yeah, and I think one of the things we do is lean into topics that arise. Certainly, you know, there's stuff that's going to be coming our way, and when we think that we can bring a useful perspective or make a contribution, we get together and talk about it. And so I think we can expect some of that over the next year. We're not exactly sure what it will be, but I think we can expect it. And then finally, you've always spanned K-12, higher ed, and workforce. My work continues to expand as well. And so I think we'll continue to center K-12.

You know, we hope to help folks see all of the connections between these, you know, sometimes siloed elements of education and learning, because there really is a bigger, broader picture and set of connections.

Michael Horn: I'm glad you've come over to the dark side of not just K-12. But, you know, look, K-12 at some level is a system dependent on higher ed and the workforce, and those are extraneous macro conditions that impact what K-12 is preparing students for. So it's a really important conversation to frankly set the context for our schools.

Diane Tavenner: Totally. And so with all of that context setting as we launch this new season, I am really excited for this first conversation. Michael, your new book came out literally yesterday as of recording time, and I really wanted the opportunity to interview you about it. We had such a fun interview this summer with David Yeager around his book that came out, 10 to 25.

And I just wanted to do a reprise, you know, like, how do we do that again with your book? You being you, when I suggested this, you said it should be a co-interview. And I was like, I don’t have a book coming out.

But you rightly pointed out that your book is so related to the work that I'm doing, and this is new work for me. At first I thought, well, I don't know. And then I really read the book and I was like, okay, this could be interesting. As usual, you were right. So we're going to have this kind of hybrid book talk today.

Michael Horn: Well, you actually were showing me a version of the product platform that you’re building. And I was like, holy cow, we did it again! Unintentionally. We have wound up with a lot of similar insights. We come there different ways. We do, but we often find ourselves in these places of convergence.

Diane Tavenner: Yes, indeed. It's awesome.

Well, let's start with some basics. Your newest book is called "Job Moves: 9 Steps for Making Progress in Your Career." You have two co-authors on this one, Ethan Bernstein and Bob Moesta. And the book was released on November 19th. And so I guess my first question to you is, why? Why did you decide to write this book, and why is it so important, especially given this moment that we're living in? Even more important than when you started writing it, I think.

Michael Horn: Yeah, so I will say there's a personal story to that, and then there's the story of why we think this is the right book for the moment. And I'll lean into that second one for a moment, because what we saw during the COVID pandemic was the great resignation in the United States. We saw literally unprecedented numbers of people leaving jobs, trying to make progress in their lives, and then, frankly, unprecedented numbers of them really dissatisfied with the moves that they had just made. And I'll say roughly 1 billion people every year worldwide switch jobs. In the US we switch jobs every four years. And we have a lot of evidence, according to Gallup, Pew and others, that at least two thirds of the workforce are completely disengaged in the jobs they presently hold, quiet quitting, whatever you want to call it. And so our basic sense, I think, is that we make progress at some level by switching jobs, but it does not line up with how companies think about progression.

And we want to help empower people to realize you get to hire your next job. Treat it like product development, prototype what you could be doing, and figure out the trade offs you're going to make, what's a better or worse fit for you, so that you can get the progress that you're really prioritizing. So that's where we've landed and why I think it's so important. You know, we talked last season about your own switch. But just to remind folks, you started thinking, "Hey, college for all is not the narrative either." Careers in K-12 schools, and the jobs and what people like doing, is a really important thing to start figuring out. So maybe talk about that as well.

Diane Tavenner: Well, first of all, all of that really resonates for me. It's stuff I know, but when you lay down all those stats that way, it's really profound and so important. That's why we're doing this work. So here's what I would say. One of the fun parts of being in a startup is that I get to spend a lot more time with young people than I did when I was leading a much bigger organization. And over the last year we've been working directly with high schools and their students to build Future, the Future Platform, which is a life navigation platform, and it's really designed, right now, for young people ages 15 to 25. And, you know, our small team is made up of college interns and recent college grads. We're building for this group.

We need to, you know, be this group, except for a couple of us older and grayer folks. And working with them has been so fun and inspiring and enlightening. You know, we set out to build Future because we didn't think anything like it existed. So as you were going through your list, I was like, there's all this reality, and when we look around, we're like, how do high school and college students figure out what life they want to lead, what careers will enable that life, and how to connect that to the day to day decisions and activities they're engaged in? Which, by the way, may very well be college. But college is a means to that end. It is not the end. And I think that's where we went wrong, or went sideways, for quite a while.

And I'm saying "we" in the, you know, grander sense there. And so currently there's a bunch of technology that's designed to manage the process of applying to college. There's a bunch of websites where you can search for information on careers. But there's nothing that meets you where you are and kind of walks beside you for a decade plus as you figure out who you are, what you want, what the world has to offer, and where those two things intersect and meet. And so even though Job Moves and Future are focused on people at different ages and stages, one of the things I noticed immediately was that you identify four primary quests that explain why people seek to change jobs. And those seem to be so similar to the motivations of young people who I'm talking to and working with. And so let's talk about those four. Will you tell us about those four motivations and what you learned?

Michael Horn: Yeah, absolutely. So we used the jobs to be done methodology, I should say that, which explains why people switch behavior. One of the big things is “Bitchin’ ain’t switchin’.” Just because you’re complaining about something doesn’t mean you’re going to actually switch behavior. We want to see people who’ve actually made switches, and then we code for the pushes and pulls: the things that are driving them away from the status quo and pulling them toward this new future. And then we cluster them. Okay, so four quests.

First one: get out. These are people who are like, this just ain’t good. It’s going nowhere fast. Manager, you know, we’re not vibing. The job description is not working. I want to reset how my energy is being used and how my capabilities are being used. I need to find someplace better, and quick. The second quest is what we call regain control.

And these are people who are really like, I actually like a lot of what I get to do on a daily basis, like how it uses my capabilities. But I don’t like how it makes use of my energy or my time. I feel like that’s out of control. This could be, I need more work-life balance. It could be, I actually want to figure out how or where I do my work, like hybrid work. Right?

Remote work has become a big deal. It could be that micromanaging boss. My energy’s out of whack. The third one is what we call regain alignment. So these are people who basically say the opposite: I really like how my energy and my time are being used, but I’m feeling disrespected for the skills that I bring to the table and what I’m being asked to do. And then the last one, we call these folks the take-the-next-steppers.

This is, I would say, the closest thing to sort of climbing the career ladder, or, in our choosing-college book, getting into the best college for its own sake. I have no idea why, but that’s just what I’m supposed to do. And these are people who are like, actually, I like how my energy is being used. I like what I’m doing. Let’s take that next step. I will say there are U-turns in this one as well. We profile some people where that’s the case. But it really is, fundamentally, for all these quests.

And we’ll get into this: something that I’ve learned from you, which is the “ings,” what you’re doing, not what the title is and the perks and the surface level. Like, does what you do on a daily basis really line up with the things that give you energy and the skill sets that you’re good at? And as you know, those are interdependent.

Diane Tavenner: Totally. For those people who work in K-12, and specifically in high school with seniors, I suspect they recognize a lot of connection there. So when I read these motivations, I was like, oh my gosh, this is describing high school kids. They want to get out. They’re maybe not regaining control or realigning. They’re doing it for the first time, really.

Michael Horn: I think that’s right. And we also, you’ve noted to me we don’t really give high school students in our present design of schools the opportunity to, like, go deep in something and then be like, “Oh, I actually want to regain alignment because I’ve gone off somewhere.” Right? Like, we don’t actually give them those choices.

Diane Tavenner: Right, right. But as you describe the energy, so many kids in high school are like, my energy is not here. This is not feeding me. Like, I could be out doing things, making money, you know, and I don’t feel respected. There are tons of high school kids who don’t feel like what they can do and are capable of doing is being illuminated or highlighted. So I just saw so many connections there, and I thought it was such a great way to start the book. We’re going to get very practical here, but let’s spend a couple of moments on the research.

There’s a ton of research. Oh my gosh, buckets of research underlying your book. And the same is true for our platform as well. So let’s just spend a couple of minutes on some of those key points that really matter to you and connect to those nine steps of the journey. And again, I’ll point out, I bet there are going to be some intersections there. But let’s do that for a few moments.

Michael Horn: Sounds good. So I’ll just say, Ethan’s a qualitative researcher. He’s a professor at the Harvard Business School. Bob Moesta is the “Jobs to Be Done” guy. He created the theory. He loves to do interviews. Over the course of a decade-plus, we collected data on over a thousand individuals making the choice to switch jobs.

And then Ethan designed an entire course around it, which allowed him to coach literally hundreds of people in lots of different career walks, not just your HBS students, because it was an exec online course. So, you know, there are construction workers. Like, it’s a pretty wide range to actually start to build processes and protocols. And then, when the pandemic hit, Clay Christensen died. This is the personal side of the story. And the three of us agreed within a few weeks to write the book with each other. Bob started prototyping with cohorts, actually coaching them through the process.

And so we built a first process. He then improved it in a second step, then a third step. He tried to break it: what if we limited time? Like, what are all the ways we can purposely break it? And then the fourth and fifth were, let’s put it back together with what we’ve learned. And that’s what’s in the book.

Diane Tavenner: Yeah. That’s awesome. And again, such parallels. You’re doing this in a more analog version.

Michael Horn: Yes. And you get to do it in a digital one.

Diane Tavenner: I’m doing it in a digital one. But so, so similar. And, you know, I think what I’m drawing on is the research around how young people develop, the learning science behind that, the power of purpose in driving, you know, the striving for a good and fulfilled life. And that’s all present in what you’re doing.

Michael Horn: Yeah, on that front, I would say we pulled in a lot of those throughout, unintentionally or maybe intentionally. Purpose was a big one. Progress is really what jobs to be done is all about. That’s connected.

And then Ethan, obviously, being a professor at HBS and sort of the HR person, has a mountain of research on a lot of this. Like, he’s the transparency paradox guy. When is that actually a good idea, when is it a bad idea, and things of that nature. And so we got to pull all of that in as we were building these.

Diane Tavenner: Yeah. And I think what’s cool, and this is a thing we’re both committed to, is that research for its own sake is not useful.

Michael Horn: Not very useful.

Diane Tavenner: So we want it to always be applied. And so, you know, the app we are building embodies the application of that research. And so we’re very committed to the research, but in that real way. So let’s just jump into a few of the steps. I’m not sure we’ll get through all nine.

Michael Horn: Let’s not do all nine. Let’s focus on the ones that are interesting for your purposes as well.

Diane Tavenner: Okay. So I love this [second] step: energy drivers and drains. And you just sort of alluded to it, but let’s dig in a little bit more. It addresses so many of the challenges I have with traditional career coaching. So, yeah.

Michael Horn: Oh, boy. So I want to hear this on the back end, because it occurred to me we wrote a book for people, frankly, who’ve had at least one job, and the backward mapping into the K-12 and higher ed processes, I actually think your platform does pretty naturally. But this big first one is not a new idea. A lot of people have written about understanding what energizes you, what drains your energy, how that changes based on context. You know, Bill Burnett, design your life, a lot of this stuff. Right? But what I think we did uniquely here is we want you to look at your actual experiences and reflect on times when you were in flow and your energy was really turned on and it was building and so forth.

And at past work where it was draining that energy. Now, for someone in the job market, we’re looking at past jobs, past roles you’ve had. My sense is, if you’re a K-12 student, it’s looking at the projects you do, the times you’re in classes, the extracurricular activities you’re involved with. And then I think this is where your “ings” come in, Diane, and where you’ve built around this a little bit.

Diane Tavenner: Yeah, yeah. I mean, I think you’re exactly right. So one of the things I notice and observe, both in K-12 but also anytime people are sort of coaching or helping people figure out career paths, is that it’s a pretty common practice to give people what I would call a black-box assessment that is somehow going to figure out what your aptitudes are or, you know, what you’re going to like. But it is a black box. People don’t understand what’s going on in that assessment. And what it usually spits out is either some very high-level things like, you know, you’re a whatever. I’m not even thinking of a good whatever because I never pay attention to these things. But you know what I’m talking about.

Michael Horn: Yeah. No, you saw this in my class. Right? Like, you know, this is your fixed personality, so to speak. Or these are your fixed, you know, aptitudes.

Diane Tavenner: Right?

Michael Horn: And therefore you should be, you know, a communicator. Right? Or you should be. Mine was like writer, private equity, and three others. Right? And you’re like, what careers?

And I mean, writer, I guess it landed. But, you know.

Diane Tavenner: Well, when they get mortician, they’re like, what are you talking about? For the most part, I don’t like that black-boxiness, because the whole point is we’re empowering individuals to figure out the life they want. And so what I love about this is they’re actually reflecting on and thinking about things they’ve already done in order to apply them to the future.

Michael Horn: Well, stay with it. Right? This is the big flip in the book, which is that most places think of job seekers as the supply side, like the available pool of talent, and the jobs out there as the demand side: companies demanding workers. Our notion is you flip that. The individuals, right, have to actually learn about themselves so they can figure out what they are demanding.

Diane Tavenner: Yes.

Michael Horn: As they go seek out work, and that they are the demand side. So it’s a flip from labor economists, but it’s what I’ve learned from you about the importance of agency, frankly: building this metacognition about what really makes you tick and then being able to pattern match well.

Diane Tavenner: And this is exactly the flip I want high school students to have. Whether it’s applying to college or the career they’re thinking about, I want them to see themselves as the people who are making the choice. And I think one of the challenges that the College For All movement and exclusive colleges have created is that young people feel like they’re just trying to get someone to pick them, and that it’s very arbitrary, and it’s not clear what to do, versus feeling totally empowered to be like, no, I’m gonna decide who I am and what I care about, and then I’m gonna go find the fit for that. So I love this. Let’s talk about another one.

So there’s this idea, and it’s very connected, this idea of the career balance sheet and the assets and liabilities, which in my view is such a positive kind of flip from what we normally hear, which is like strengths and weaknesses. So talk about that contrast and what you’re doing here.

Michael Horn: Yeah, absolutely. So, you know, the big thing, right, is that again, this sort of strengths and weaknesses, which I think is useful input and data, is a very fixed perspective on what an individual is. How many of you have taken Myers-Briggs and come out with a personality type, and then realized, actually, in this situation I’m quite extroverted, and in this situation I get a little withdrawn and my introverted side comes out? Context is really important. Todd Rose talks about the context principle, right? And then there’s Carol Dweck’s work around growth mindset, that you can actually build capabilities over time. And so this is the big idea, right: we actually have these career balance sheets. Boris Groysberg, a professor at HBS, came up with that idea in his research.

And basically what he said is that assets, from an accounting perspective, are resources that have future economic value and are acquired at a cost. And so your capabilities, if you will, your assets, are your skills, your knowledge, your ability to do things, and also your credentials and degrees and things of that nature that have value, and they’re acquired at a cost. And that’s the liability side. What’s the time and money it takes to actually learn that third language, if you will, to actually become a coder? These things don’t happen magically, which is, I think, frankly, another weakness of a lot of these things: oh, you’ll just learn these skills and do it, and no one asks you, what’s the trade-off in terms of the time you have to invest? Oh, go be a doctor. Well, you gotta get through organic chemistry, at least in our present system.

So is that investment gonna work for you? That’s basically the idea. And then I guess the last thing I would say is we also want people to realize that these assets you build have a shelf life. They depreciate over time. Your degree will be a lot less valuable 30 years from now than it is when you first come out of college. Your technical coding skills, we know those are eroding faster than ever, thanks to AI, maybe even faster than that. And so what’s the useful life of each of these assets you’ve built? Be brutally honest with that, and then really understand the trade-offs of where you want to go in developing your further assets. The last thing I’ll say is this: we talk a lot about the importance of social capital and network. It is important, but those have shelf lives as well, unless you’re consciously reinvesting in them to build them up in the directions you want to go.

Diane Tavenner: Totally. This is so aligned with how I have experienced some of the best folks across the country starting to talk to and engage with young people about their futures. They’re framing it in the language of ROI, or return on investment. I think we’re talking about the exact same thing here, which is this idea that we need young people to realize: whatever you’re doing post high school, you are making an investment that is a liability.

Michael Horn: When I saw that in your platform, I was like, oh my gosh, alarm bell. This is the same thing. It’s just a different age and stage.

Diane Tavenner: It is. And so what we’re trying to show them is, think about not only your money but your time, because that is your most precious resource.

Michael Horn: That is your most precious resource. I mean, a lot of times when people talk about their lack of resources, and I’m now talking about adult learners, for example, they’re working three jobs and they’re trying to get the degree to get ahead. Time poverty is the biggest poverty they face.

Diane Tavenner: Totally. Well, I mean, I feel that right now.

Michael Horn: Right? We feel it right now. Yeah.

Diane Tavenner: Literally. So we talk about that return on investment: what can you spend, and how quickly do you need to have that start paying off? What is it actually going to buy you? A good return, right? Like, you’ve got to invest in assets that are going to get you the return you want. And I fear that a lot of young people don’t even think about their time or their money going into college as investments, and so there is no sort of plan to get a return on that. As a result, so many are not getting a return on that investment, and they have massive debt, not just financial debt, but this sort of skill and knowledge debt.

Michael Horn: Yeah, I mean, we call it this is how careers go bankrupt: when the liability side is bigger than the assets you’ve built, and frankly when they’re misaligned. And this is where these things are interconnected; misaligned with what gives you energy.

Diane Tavenner: Yeah. This is so interesting. We could have a long conversation about how, in education, we’ve gone so far away from thinking about money and business that we’ve actually done a really significant disservice to everyone who’s in it. And I kind of know why we maybe went that way, but we went way too far, and I think we’ve got to pull it back.

Michael Horn: Yeah, yeah, I think that’s right. I think it probably also explains some of the populations that have become more disaffected with schooling over the years. I’m thinking of males at the moment as one example, but I think these are all factors.

Diane Tavenner: Yeah. One of the things I love about the book is that, of course, you’re asking people to prototype the jobs and the careers that they want. And you know, you and I are both pretty obsessed with prototyping. We talk about it all the time. I was in your class yesterday, and we were talking about prototyping. And I think we’re obsessed with it because it’s so much smarter to spend time, in a low-stakes way, figuring out options and ideas and really digging into them before you actually spend all this time and energy to get into them. So talk about how this comes to be and what it looks like in the Job Moves world.

Michael Horn: Yeah, yeah, absolutely. And I think that’s exactly right. What you just said is, prototyping is how we learn. And so what we really want you to do in the book is get away from one of the biggest mistakes I think people who are looking for new jobs make, which is they think, “Oh, I’m chasing the one job.” Instead, we want you to create divergent prototypes, really far afield. You know, next role, same company; totally different company, same role; and then different careers, things I’ve always dreamed about. Really spread them wide, so that you can start to understand and learn about many different careers, and how what drives your energy and your capabilities (back to those “ings,” what you like doing) actually maps onto these different types of roles, and start to flesh them out.

And I guess this is the next piece of it. We really want to help people learn before they switch, not afterwards.

Diane Tavenner: Yes.

Michael Horn: And to do so, as you know, there are all sorts of things you could do. Job shadowing. The expeditions you had at Summit, right, where you’re actually spending real time with real professionals? All that is great. It’s not always accessible to people. And so the other way we suggest is informational interviewing. And this is a very different kind of informational interview from the one I went on as a kid, where, you know, my parents would say, “Oh, here’s a friend of mine, they’re a journalist. Go do an informational interview with them.” I had no idea what to say or ask in those conversations. But here, what we want to say is: you’ve done the reflection on what you want to do and what drives your energy. So figure out what they do on a day-to-day, week-to-week basis. Where does it align, and where does it not align? So you get a real sense of what it would be like to be in this job.

And then the contrasts between these things start to create meaning about where you want to go next. We could talk about how to funnel it down, but I’m curious: you’ve built this out a little bit as well, right? So how do you think about it?

Diane Tavenner: Yeah, I mean, very aligned with what you’re saying. And I think a key point I want to pick up on is what people are really attuned to and focused on. You’re seeing more high schools trying to do more shadow days, more job fairs, more company or employer visits, more informational interviewing. And I think you just made a really important point that we’re focused on, which is that those things are all great, but they’re not nearly as good if you go into them cold, not knowing what to ask or what you want to learn from them.

It’s not as good for you, and it’s not as good for the people you are with. And so one of the things we’re doing in the platform is helping young people really do exploration before they get into those experiences, so they can make the most of them. And I think your whole sequence of steps really helps people get ready for those experiences so they make the most of them. In our case, you know, we have 868 careers, and there are all these really thoughtful ways to explore them and figure out, like, what parts of this career are you going to like that match up with who you are and your “ings” and what you like doing. And so you go into those experiences and conversations with a lot more knowledge and with what you actually want to figure out coming out the other side, and then reflect on. And I think then you talk about moving into ranking those prototypes, which we’re moving towards as well.

And I’m curious, what does that look like? And then, you know, if people open up really wide, how do they then bring that back and converge, which is another concept you’ve got in here?

Michael Horn: Yeah, absolutely. And I’ll try to cover it quickly and then ask how you guys do it. But also, one of the questions that’s always on my mind is about doing this in the K-12 environment versus where we are doing it, where someone’s theoretically going to try to find a job within the next few weeks or months or something like that. So the way we do it is, you have these energy drivers and you have these capabilities, and we’ve had you bucket them into the must-haves, the ideally-would-haves, and the, okay, I can live without it, but all things being equal, it’d be pretty sweet if it did this too. And then we have you rank these different prototypes and your current job on all of these dimensions. You can think about it on a scale of 1 to 10. And then we’ve got, on jobmoves.com, this really simple Google sheet that will literally multiply it out to give you a mathematical answer.

But I think a lot of people frankly have a gut feeling after they’ve gone through this, and you start to realize one of these prototypes, or maybe your current role, really is hitting most of these critical must-have things that you’ll be doing. Again, emphasis on the doing, right? That is so important, and that’s how you learn, to your point. I love that point about the learning agenda: you start to rank your prototypes so that you can converge and say, these are the one or two things I’d really love to get out in the market now and go find for what I could do next. So we hope the math helps. The force ranking of, you know, I’m an eight on working with people, but I’m a two on leading meetings. You’re probably pretty high on leading meetings, I suspect.

And so, right? And we understand how that role, you know, fills in against it. People should check it out. I didn’t explain that perfectly, but I think when they check it out, you’ll start to see how it works and how it gives you information about yourself at this moment in time, because it changes. So that’s the question I want to ask you: how do you do the convergence, but also, how do you deal with the fact that people are changing quite a bit when they’re still in high school? And also, the world of jobs is changing so rapidly. We have it easy, right? Because that job presumably exists. For you, it could be totally different five years from now because of AI and automation.

Diane Tavenner: It could be. And so that’s why I think knowing yourself, who you are and what you care about, will always matter a lot, because then it’s a matter of matching up with what the world is offering today, tomorrow, and in the future. And so that underlying piece of knowing who you are at a really granular level, like what gives you energy or what doesn’t, or what you like doing, all those things, is so critical. But we’ve got an experience we call Compare. One of the things we heard is, just let us take two careers side by side after we’ve done some exploration and compare them to each other. And there are a couple of things going on here. You know, we’re sort of showing a framework for how you can do analysis on the exploration you’ve done, which, it sounds like, you’re doing with some math and some ranking. We’re doing something similar. And then the sort of head-to-head of one versus the other really does illuminate what is more important to me than other things.

And it gives some credibility to those gut instincts, like you said, or at least makes you talk through them and articulate what’s going on for you there.

Michael Horn: I think that’s right. And this is, I think, the big thing that our book does. There are other books that have a lot of these notions in them. “Designing Your Life” is, I think, a favorite of both of ours. But what we really want to help people do is figure out how you make the trade-offs, because there’s no job that’s perfect. And so we want you to visibly see: oh man, if I take this job, I’m going to have to lead some meetings. But you know what? I’m willing to trade off on that because of all these amazing things I got that are at the top of my list. That’s a trade-off I’m willing to make.

Or the one that Bob always loves to say: man, I’m going to have to have an hour-and-a-half commute, but it’s more money. Or do I want less money, but it’s five minutes from my door? These are real trade-offs that you’ve got to figure out, and you have to do it relative to the things that you most want to get in your next role.

Diane Tavenner: Totally. So Michael, where do they go from there in your process, after we’re converging and we’ve done this analysis? Just bring us home.

Michael Horn: Yeah, I’ll try to whip through the final few steps quickly for our audience, Diane. But essentially, this is all the demand side, right? We’re doing a ton of demand-side work around what you want and the trade-offs you’re willing to make. So now we switch to the supply side: what jobs actually exist. We’re going to start looking at postings, we’re going to use those interviewing techniques to actually talk to real people, and we’re going to use our network, because it turns out 70% of jobs are filled through someone in your network. And the reality, I think, with AI, is that’s going to become even more true in the years ahead. I think social capital is going to get more important.

And so we then help you find those jobs and unpack what they really mean. Are they actually what you think they are? We teach you to tell your story Pixar-style. All this reflection you’ve done, you need to be able to explain it in an elevator pitch, and we help you with that. And then the final step is just a personal cheat sheet, so that you know, in a really easy way, what makes you tick and the work environments where you’re most likely to be successful. But it’s also something that, if it’s not too Millennial or Gen Z, you can share with the people around you so they know where you’re excellent. And frankly, you know a bunch of my weaknesses. We all have them.

Let’s be honest about them: this is where I’m not as good. And can you have other people on the team who are awesome at it? Because frankly, my energy is such that I’m probably never going to really lean into that. Let’s be asset-based as opposed to deficit-minded.

Diane Tavenner: Yeah, I love so much about that. The last quick thing: I had an amazing mentor who always says, you know, people spend all this time trying to improve the things that they’re not good at, rather than doubling down on the things that they are good at and being great at those, you know? So I favor that approach.

I will just say, for those who’ve listened for a long time, you know that my son graduated from college in the spring. He spent the summer working for the Aspen Institute, and then he joined a presidential campaign as a field organizer. So he’s just coming off of that, and I ordered the book for him, Michael, because I think it’s such a perfect moment and way for him to approach this. And it’s funny, because so many people really respect what he did. I mean, field organizing is no joke. And they’re like, wow, he probably has a lot of skills and a lot of knowledge, and it’s just sort of swimming around in there.

And I think this process is going to be really amazing for him to make sense of it and figure out where he wants to go next. And so I’ll report back, but I’m excited to see how he progresses through that.

Michael Horn: Well, thank you. I hope it’s a positive one. And I hope for folks listening also that they check it out for themselves, or, frankly, if they’re trying to retain a team at a school or a nonprofit, they can use it that way. Or, frankly, that they get to see how it maps onto what you’ve built at Futre, at Futre.Me, right? Because it’s an incredible resource. Obviously, you are architecting something for kids that they get to keep with them as they leave high school, which is so important. So let’s use that as a segue.

You bought the book for Rhett. I appreciate that. What are you reading or listening to or watching? Let’s wrap us up there.

Diane Tavenner: That’s great. Well, I have read a ton since we last talked, but the thing I’m immersed in right now is “Nexus” by Yuval Noah Harari. And I will say that I am a big fan of his writing, because it really provokes me to think differently. I feel like he tells stories that are very relevant and very current in a way that makes me go, “Oh, I hadn’t really thought about it that way or looked at it that way.” And this is no different. It feels very appropriate to this moment in time. And then you burst my bubble a little bit and told me about how he was being brutally attacked for his research.

And so I did some looking into that as well, and, you know, that’s a longer conversation, but I’m going to stick with it. I think the book is really provocative, especially in this moment, as we are coming off an election and into a new administration, and thinking about social media, the media in general, and information. Super, super interesting.

Yeah. Making me think a lot. Yeah. How about you?

Michael Horn: No, that makes sense. That makes sense. And look, I think at the very least, he helps us ask big questions.

And that’s the theme of what I was going to bring to you, which is that I’ve been trying to ask better questions, to listen better, not interrupt as much. It’s sort of been a New Year’s resolution of mine. And so I’ve read a trio of books around that. First is “Ask: Tap Into the Hidden Wisdom of People Around You for Unexpected Breakthroughs in Leadership and Life” by our good friend Jeff Wetzler at Transcend. This is not about his work at Transcend, but it’s an incredibly good book around asking questions and approaching problems with curiosity.

And then I read Hal Gregersen’s book from, I think, 2018, called “Questions Are the Answer: A Breakthrough Approach to Your Most Vexing Problems at Work and in Life.” Great book as well.

And then I’m rereading the book that I suspect you like as well, which is “Never Split the Difference” by Chris Voss.

Diane Tavenner: Love Chris Voss.

Michael Horn: So good. So good. And I felt like his Masterclass is amazing. I just felt like, okay, I need a refresher on this, because a lot of the stuff that, like, Amanda Ripley and others write about in terms of deep listening, and frankly the jobs-to-be-done approach that underpins Job Moves, is all around that deep listening of, like, what is someone really saying, and really understanding them on their terms. So that’s what I’ve been reading.

Diane Tavenner: So cool. I like how you got those all piled in. You know, you slipped three into one.

Michael Horn: I’m going thematic, which gives me license. And, hey, it’s our show, so we get to do what we want. But for all you tuning in, thank you for doing so. We look forward to the season to come, and we’ll see you next time on Class Disrupted.

Podcast: What a Mentorship Mindset Can Do for Student Motivation /article/podcast-what-a-mentorship-mindset-can-do-for-student-motivation/ Tue, 27 Aug 2024 14:30:00 +0000 /?post_type=article&p=732129 Class Disrupted is an education podcast featuring author Michael Horn and Futre’s Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system amid this pandemic — and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or.

In this special summer episode, Michael and Diane are joined by David Yeager, a psychology professor at the University of Texas at Austin and author of the new book 10 to 25, which explores key insights into youth development. Together, they dive into the critical lessons highlighted in his book, including the science behind effective mentorship, the significance of transparency and practical strategies to help young people reframe and manage stress.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. How are you?

Diane Tavenner: I am well. This is a first for us. We are doing a special summer episode, and for good reason.

Michael Horn: We are trying to break out of the old structures of a summer break where kids go home and don’t go to school. We’re trying to break out of that model that we’ve always done in this podcast and have an important conversation about a book that is upcoming and will be out by the time this podcast is released. So, Diane, why don’t you introduce the book and our special guest?


Diane Tavenner: I’m excited to welcome Dr. David Yeager to the podcast today. He’s a professor of psychology at the University of Texas at Austin and has a long, long list of accomplishments and works with a number of other learning scientists. I encourage you all to go look at that impressive bio. Let me just share personally that we met about a decade ago, and I have always been such a huge fan because David’s work is so applicable to schools, young people, mentoring, teachers, and parenting. He is, in my view, one of the rare researchers who not only has a background in those areas but is deeply committed to making sure his research is actually meaningful and embedded in practice. Over the years, we’ve had tons of incredible dialogues and conversations about very practical things in schools. He had a huge influence on our Summit Learning model when I was at Summit. I am so excited for his upcoming book called “10 to 25.”

It’s all about mentoring, which is a huge part of what I have worked on and focused on in my career. I am thrilled that you’re here with us today to have this conversation. David, welcome.

David Yeager: Thanks a lot. It’s great to be here. Diane, I think it was twelve years ago we met.

Diane Tavenner: Wow, yeah.

David Yeager: You were my favorite person. We met at this crazy meeting where we were briefing thought leaders in education reform. The last question of that interview was, “If you could do one thing, what would it be?” Whatever I said, a week later, you’re like, “Okay. So we did that thing you said, now can you help us?” I was like, I love Diane Tavenner. She’s just gonna make it happen. So I’ve always been your admirer, and it’s great to be on this podcast.

Michael Horn: It’s not just talk with Diane, it is action.

David Yeager: Yeah, be careful what you say. She’ll do it.

Filling in the blanks on youth motivation 

Diane Tavenner: Well, thank you. We are thrilled to have you. I wanted to jump in. This is going to be kind of silly, but I think it’s meaningful. Your new book introduces what I would call a Mad Libs activity, a fill-in-the-blank exercise. I know you’ve asked a bunch of people to complete it, so I’m curious about the different responses you’ve gotten. It starts with this sentence: “Given that young people are ____, the best way to motivate them is ____.” I’d love to know your response to that. Also, what do you normally hear from people when you ask them to fill in those sentence starters?

David Yeager: Let me just start with the most common things I hear. The most common thing I hear is, “Given that young people are kind of short-sighted, lazy, hard to motivate, not listening to grown-ups,” or something like that. Something kind of denigrating. Then you tend to see one of two things. One is, “Explain to them why all their choices now are not quite right and why they’re not aligned with their long-term best interests,” or, “Motivate them with either threats or rewards.” So, “If you do this, something bad is going to happen to you,” or “If you do this, I’ll give you this nice thing.” Either bribes or threats. That’s the most common answer I see. The second most common answer I see is, “Given that young people are stressed out, overwhelmed…”

Diane Tavenner: Addicted to their phone.

David Yeager: Right. Addicted to their phones, recovering from COVID, lonely, in the middle of a mental health epidemic, etc. The best way to motivate them is to remove their demands, chop up what they’re doing into tiny steps, help them feel a sense of success, let them feel confident, don’t overwhelm them. Basically, make it easy on them to grow up. Both of those internal logics make sense, but neither of them are great. The big punchline from my book is when I started studying people who do an awesome job at motivating young people, even in the most difficult of circumstances, they complete the sentence with, “Given that young people are capable of doing incredible things that make contributions to the world, the best way to motivate them is to inspire them, sometimes to get out of their way, to run interference, so that way things don’t derail their ambitions and hopes, but really support their potential to come alive.” I like this exercise because it reveals how our beliefs about young people are intimately tied to our practices and how we deal with them. That sounds obvious when I say it, but it’s not obvious to most people. They just think, “Okay, the best way to motivate people is the following,” and they don’t question the fact that that’s a choice, and it comes from a belief system, and it’s something that could be changed.

David’s Motivation for Writing 10 to 25

Michael Horn: It’s really interesting. I’m feeling jealous at the moment because Diane’s had the chance to read the book in advance, and I will read it once it’s out. What motivated you to write this book, 10 to 25? What was your intention? What’s your hope for the book?

David Yeager: For me personally, the book comes from 15-20 years of frustration, feeling like the advice I had been given as a teacher and later that I saw in the research literature just wasn’t cutting it. It wasn’t good enough. I remember being a mediocre middle school teacher and caring so deeply for my kids and wanting to do everything for them and feeling like I never got that kind of inspiring, enthusiastic love of learning, where kids were embracing the hardest stuff and coming after class because they were curious about the topic. Then when I started doing research, I also felt like the answers I saw in the field were very… I don’t know, just not useful. They were very abstract and bland and not applicable. We’ve conducted a lot of research over the last 15 years, and part of the book is, “All right, let’s put that all in one place.”

I’m often asked about this part of my work. Some people think of me as the community college student success person, others as the purpose-in-life person, others as the youth mental health person, and others as the growth mindset person. I wanted all the work to be in one place, but the other thing was just an acknowledgment that there was a lot I didn’t know, and I needed to go out in the world and find great leaders who were awesome at motivating young people. The book is a combination of the science we’ve done over 15 years and original reporting on what I’ve learned from the wisdom of practice, I guess you could say.

The Mentoring Mindset

Michael Horn: Very cool. I’m curious, then. Diane teased that a lot of this book is not just about motivation and how to spark students, but a part of that is this mentoring mindset, I think you call it. I’ve certainly bought in, hook, line, and sinker, on the importance of mentoring, but the mentoring mindset is a phrase that is unfamiliar to me. So, what is the mentoring mindset?

David Yeager: Yeah, the mentor mindset is an approach or a philosophy you take with a young person where you maintain very high standards. You’re tough, you expect a lot, but you’re supportive enough so that a young person can meet those standards. It’s not just saying, “Hey, I have super high standards, you can meet them or not,” which often ends up with maybe the top 5% doing well and everybody else struggling. It’s not saying, “I care about you, but I’m not going to ask a lot of you,” where maybe kids feel supported but they don’t grow and improve. The basic mentor mindset is high standards and high support. It’s a simple idea.

Where does that come from? It comes from this investigation of the most successful people I could find in K-12 education, higher ed, academic research, NBA coaching, parenting, and management at retail grocery stores and technology firms. I wanted to look at anyone who’s in charge of or relates to someone aged 10 to 25 in any of these domains. What do the most successful people have in common? The answer was this mentoring or mentor mindset. In the book, I describe it and also describe what’s the opposite of that. What happens if you don’t have that?

Diane Tavenner: Michael, you’ll love it because it is a two-by-two, and you always have a two-by-two.

Michael Horn: You’re saying I’m going to feel at home is what you’re saying.

Taking an Asset-Based Approach

Diane Tavenner: You’re going to feel very at home. I love the mentoring mindset because it embodies the belief system that I’ve had for my career, this idea of high expectations and high support. Let’s just put names on the other ones that you were describing, David. There’s this enforcer mindset, which is like you were describing, high expectations but no support, and this protector mindset, which is high support but no expectations. One of the things I love in our conversations is you never start from a deficit mindset. You always take an asset-based approach where you’re like, “Look, even those other two places have one of the two parts of the equation, so they’re halfway there. We just need to get the other half in there, if you will.” Say more about that.

David Yeager: Yeah, I think there are two ways in which it…

Diane Tavenner: Hopefully, I explained that properly.

David Yeager: Yeah, it was great. Later, on the test, I’ll give you a high score. As a professor, I’m just walking around grading everyone. Just kidding. There are two ways in which we try to be asset-based. One is this: suppose you’re in one of these off-diagonal cases, the enforcer mindset: all standards, low support; protector: all support, no standards. That’s coming from a good place, and let me start by talking about that. Then the second is, as you’re saying, reframing those two off-diagonal cases as you got half of it right, so just add the other half. Why do I say they’re coming from a good place? Well, I think for a long time people have felt torn. If I’m a manager, a boss, a teacher, a professor, I have a dichotomous choice between being the tough, authoritarian, dictator, kind of hard-nosed person who demands excellence. The negative consequence of that, of course, is kids and young people are crying and feeling debilitated and crushed. Most people don’t succeed.

But that is viewed as a necessary side effect of me upholding high standards. You can see how you could put your head on your pillow at night and feel good about that. It’s like, “I’m the gatekeeper to excellence and high performance, and I’m doing what I have to do, though it’s sometimes unpleasant to uphold the standard for culture or society or performance.” On the other side, where you’re very low standards but high support, what I call the protector mindset, there too, you can feel good about how you’re caring. You love young people. You’re putting their feelings and needs first. You’re being empathetic. You’re very attuned. Those are all good things to feel. The problem is that you’re also a pushover and young people don’t get anywhere. But it might feel like that’s the necessary consequence of protecting young people from the distress of this dog-eat-dog world that they can’t possibly succeed in. Both come from a concern for young people, both the enforcer and the protector. They’re just a little misguided.

The reason they’re misguided is because they’re embedded in this worldview we have about young people generally being incompetent. If you think they’re incompetent and I have to be tough, well, that’s enforcer. It’s like, “I need to maintain the standards, and I’m the last defense against the world descending into chaos.” That’s why I have to maintain rigorous standards. On the protector side, they’re incompetent, they’re weak, but that’s why I have to make up for what they lack by protecting them.

Diane Tavenner: Yeah.

David Yeager: So the mentor is like, “All right, let’s just take both of what’s good from those. You’ve got the high standards. Great. Add the support. You’ve got the support. Great. Add the standards so you can have two reasons now to feel good about yourself at the end of the day, not just one.”

The Transparency Statement

Diane Tavenner: Yeah, I love that approach. The book is filled with the science that’s behind it. One of the things I appreciate about you is it’s not only all the science and research you’ve done. You are highly collaborative, and you have an encyclopedic knowledge of all the other research in the space that everyone else has done. You are very generous in bringing those ideas into the book. We are not going to spend a lot of time on the science here today because we want to, given our audience, go to the practices that you put forward. But I will say for people who want to do a deep dive there, I’ve listened to the Huberman Lab podcast that you did. It’s 3 hours, and it’s an extraordinary deep dive in that space. So I highly recommend that for people who want to go really deep there along with the book if you want to listen. I want to shift us over to these mindset practices. They’re particularly profound here in conversation.

Honestly, when I looked at the titles of these chapters and when I started digging in, these are things that Michael and I talk about all the time on the podcast. These are cornerstones of, in our view, what redesigned schools and learning experiences need to be building on, incorporating how they need to function, essentially. We are deeply aligned in our agenda for what learning can and should look like. Let me just say off the top because our listeners will recognize these. We’ll start with transparency, which is a really interesting intro. I think you say these go from easiest to implement to probably most challenging. So we’ll talk about that. Transparency, questioning, this reframing of stress, and then purpose and belonging. 

Again, our listeners have heard us talk about purpose and belonging sort of ad nauseam, but we can keep talking. Let’s start with transparency because you have this very, very, I would say, easy lift that people can do, called a transparency statement. Tell us about that. What does that look like? How does that get you off on the right foot, quite frankly, in your relationship with young people?

David Yeager: The transparency statement that I write about is very simply explaining your motives whenever you are about to uphold some high standards and/or provide some support so that young people don’t interpret it in the worst possible light. That can be very short. Let’s take Uri Treisman, the world’s greatest freshman calculus professor, whom I write about in chapter eleven. He’ll give students in large intro calculus courses five problems where they have to find the limit of a function using L’Hopital’s rule. The thing is, most kids, when they take AP calculus, memorize L’Hopital’s rule, and then they just apply it to find the limits of functions. But the problem is that L’Hopital’s rule is not an analytic solution. It’s like a workaround.

So it doesn’t work. It breaks a lot. He’ll give students five problems, four of them L’Hopital’s rule won’t work for, and one it will. A normal teacher doesn’t do that. A normal teacher would think, “You’re a lunatic because they’re going to cry,” basically. Before he does that, he’s like, “All right, I just want you to know the reason why I’m doing this is because you guys are preparing to be mathematicians and to think mathematically. I want you to have careers long beyond this class. I don’t want you to apply math tricks. I want you to be able to take apart the math tricks, figure out how they work, and put them back together again.” He says that before they spend 25 minutes struggling. If you don’t, they would be in tears, thinking, “I’m dumb at math. I’m going to fail calculus. I’m never going to be a doctor or an engineer.” That’s where a freshman’s mind is going to go. You have to say something. In a world in which he says nothing and there’s crying, tears, and frustration, that’s not a great world. The most marginalized students are going to quit first because they’re also dealing with other stereotypes about whether they’re smart enough, etc. But in the world in which he has a transparency statement, it’s otherwise the exact same lesson and the students have the exact same great professor, but it means something totally different in that context.

That’s why it’s the easiest. You can already be awesome at mentor mindset stuff, high expectations, and high support, and you could be coming across the wrong way to your young people. Sometimes all you have to do is remind them of why you’re giving them something that’s a little unpleasant. The societal narrative currently about young people is, “Well, I shouldn’t have to explain myself, because if they weren’t such woke, wimpy idiots, then they would know that I’m here for them.” There’s a version in which people, adults and leaders, think, “I shouldn’t have to explain myself.” My answer to that is, look, for most young people, starting at the beginning of gonadarche and puberty until they’re in their twenties, that day you’re talking to them is the day on which they have the most testosterone they’ve ever had in their entire lives. That day and the next day when you do something else, that also will be the day on which they have the most testosterone they’ve ever had in their entire lives, both boys and girls.

That does all kinds of things to the brain that makes them over-interpret things that might be plausibly offensive. That’s why their head goes to this crazy place of, “I’ll never succeed,” or “You hate me,” or “This is biased,” etc. You just have to explain yourself two or three more times than you think you need to. Not because they’re too sensitive, but because the job of a young person is to figure out if they’re being taken seriously and respected. Just don’t make them guess. Just be transparent.

Diane Tavenner: Yeah. One of the things that comes up in the book is this idea that at that developmental stage, they want status and they want respect, and there are good biological reasons for that. When we are running counter to that, we’re creating all sorts of distance between us relationally, which makes so much sense to me. I can just say from my career, I can’t tell you how many of the rigorous teachers that I knew purposefully would not be transparent upfront because they were actually trying to scare kids or create what is essentially a threatening environment, because they thought that’s what they were supposed to do with high standards. The science is pretty clear that the effect they were having was not the effect that I think they ultimately wanted.

David Yeager: Right. I mean, I think there’s this mythology of the demanding leader that is impossible to please, and it’s a little bit ambiguous if you’ve won them over. In that mythology, you’re supposed to leave people you’re leading a little bit in the dark for a while and then only at the end reveal that you cared about them all along, but they’re supposed to be afraid for nine months so that way you get optimal performance. I 100% remember feeling that way as a teacher. If I tell them too quickly that I care about them, then they’re going to take advantage of me. But that’s not what the mentor mindset leaders do.

They’re super hard, and students are often crying in the first few months of their classes in college and K-12 settings. But they’re also super transparent, so that by October or November, students can trust that when they ask a question like, “Mr. Estrada, is this problem right?”—Sergio Estrada is one of the teachers I write about—he’d be like, “I don’t know. Is it right?” Initially, students hate that. But he says, “Look, I would never deprive you of the opportunity to know that you can understand physics. I care about you too much to lower standards. So that’s why I’m asking you the question back. So given that, do you think it’s right?” He’s got to say that for a couple of months. Eventually, students know that and then they start thinking on their own, and they own their own learning. It saves him tons of time. Later in the semester, they become independent thinkers. They go on to the next course in college and can do well. He’s given them that gift of being independent, thoughtful, curious, intellectual leaders, even though it was a little rocky at first because students aren’t used to it. But you’re not going to get there if you wait till May and they hate you all year. That’s idiotic. That’s mythology.

Questioning Techniques: Asking v. Telling 

Diane Tavenner: You’ve led us into the questioning technique. Some of those teachers we’re talking about, their class would also look like the professor not giving them any help or any support. That’s not what you’re talking about. Sergio and others that you profile, don’t they specifically have this strategy around asking, not telling? Tell us the dimensions and characteristics of that approach that are quite different from other folks.

David Yeager: I was really struck by the parenting coach that I followed, who is almost always coaching parents to ask questions, not to tell their kids what to do, and by the similarities between great parenting, great teaching, great tutoring, and good management. The great manager I followed, Steph Okamoto, who was at Microsoft at the time, would do her performance reviews and ask questions like, “All right, how do you think that went?” and so on, get their opinions. Then she would say, “All right, for you to be a top 15% performer on your next performance evaluation, what’s a task you could do that’s above and beyond that would really impress everybody, and that would be something you would want to do and you want to learn?” Then they would generate two or three ideas. Then she’d be like, “Huh? All right, what are you worried about getting in the way of those things?” An example in the book is Steph doing a performance review when she was on the software testing unit at Microsoft. They would write manuals that would help the developers know what Windows is doing, for instance. Someone on her team was like, “Well, instead of just testing it and writing the manual, I could go talk to the engineers and fix all the goofy things with the software now, rather than have 20 pages in the manual about how the goofy thing is a workaround.” She’s like, “Okay, what would be hard about that?” “Well, the engineers don’t want to talk to a tester because I’m low status, and the manager is going to be like, ‘Stop wasting my engineers’ time.'” Then Steph would be like, “All right, would you mind if I contacted the manager and said, ‘Get off her case and let her go talk to your engineers?'” “No, that’s okay with me.”

So they formed this whole plan where her direct report could overperform and do something testers weren’t normally required to do. Steph’s out… She’s not doing it for the direct report, but she’s running interference to give her the freedom to be in the room to talk to the engineers. Six months later, her direct report is overperforming as the top 5-10% performer, gets a raise, promotional velocity, etc. But Steph didn’t do it for her. That’s what I mean by questioning. There’s a version of questioning that’s not good. If your kid comes home drunk and you’re like, “What were you thinking?” that’s not an authentic question. What you really mean is, “You were not thinking, and you’re an idiot, and you’re in trouble. I could not be madder at you.” 

That’s what you mean. There are versions of questions that are just about facts. What I’m really talking about is what I call in the book authentic questioning with uptake, where it’s a legitimate question that the person could have a true answer to that, in principle, the asker doesn’t know the answer to. Second, where the question builds on some thinking the person has done. I found mentors did that a lot and did it really well, whether it was the NBA’s best basketball coach, Sergio Estrada in physics class, Uri Treisman in calculus, or Steph at Microsoft.

Reframing Stress

Diane Tavenner: It’s resonating with me on multiple levels because as I build this new product to help young people figure out what they want to do in the future, this was the cornerstone of our approach. We would ask authentic questions of them and help them discover and explore versus the traditional approaches that kind of tell you, “We have this black box questionnaire or test, and then we tell you, ‘Oh, guess what? You should be a firefighter or a mortician or whatever.'” Young people are like, “What are you talking about? That’s not me.” So very resonant. The next piece is a total reframing of stress. Especially coming out of COVID. Michael and I started the podcast during the middle of COVID and everyone, probably at the time, really swung one direction about, “People are so incredibly stressed.”

We have to completely fundamentally change our expectations and our behaviors in response to that stress. I still think there’s a belief that young people and kids are so stressed. This is where I think the protector mindset comes in a lot. The science, though, tells us something very different. We should think differently about stress and then act differently accordingly. Tell us about that.

David Yeager: This was an important chapter in the book because there’s a world in which managers are out there saying, or teachers, or professors, “I’m a mentor mindset. Therefore, I have mega hard expectations for you, and you need to suck it up and just deal with how stressful it is.” That’s not what you see the best mentor mindset leaders doing. They definitely maintain standards. They definitely imply you should stick with it. But they don’t tell you to suppress your stress or feelings of frustration, etc. Instead, they have ways of reframing the negative emotions that tend to come from pushing yourself to your frontiers and reframing them as, one, a sign you’ve chosen to do something important and meaningful. If it was easy, then anyone would do it kind of thing.

But the fact that it’s hard means that you are doing something impressive. The fact that you’re stressed often means you care about it, that it matters to you, and that’s cool to do something that matters to you. Then, second, that those worries actually can be fuel to help you do better. You see that a lot. If you look at great one-on-one tutors or even a good golf coach or tennis coach, they’re really asking you to go take on a challenge. In athletics, choose harder opponents, and if it’s tutoring, choose the harder problems and try them if you can’t master them. Second, that physiological arousal of heart racing, palms sweating, butterflies in your stomach, that’s your body mobilizing oxygenated blood to your muscles and your brain cells, and that’s helping you to be stronger and your brain to think faster and so on. Most people don’t think that way.

They think the fact that I have butterflies in my stomach and my heart’s racing means my body’s about to shut down, that my body’s betraying my goals, and it’s going to get in the way. We talk a lot about the science of reframing away from what’s called a suppression approach. So classic suppression would be, well, as a parent, “Stop crying. Stop being sad.” You just tell your kid to stop feeling the way they’re feeling. But as a teacher, what you often see is, “You’ve prepared. You shouldn’t feel stressed. You’re fine. You can do this. You should feel confident.” You see this a lot. Kids say it to each other, “Oh, you shouldn’t be stressed out.” It’s like, no, actually, you should be stressed if it matters to you and it’s legitimately hard. Reassuring you that you shouldn’t be stressed is a suppression approach. It turns out if you suppress feelings, they just come back stronger and get in the way. The protector mindset leads you to that suppression approach. It’s: “I feel so bad that you feel distressed that I want you to get rid of it, and I want to get rid of it either by removing the demand or by telling you to push the feelings down, you know, push them away, don’t feel stressed,” etc. I tell the story in the book about a student of mine who emailed and said, “Look, my mom just died. Most important person to me in the world. I can’t possibly do the assignments for the next couple weeks.

I hope this won’t make me fail, but I’m just telling you I can’t do it.” I could tell from the tone that most of my colleagues at UT would either imply that she was lying about it and that she had to prove it or would say, “Just take an incomplete in the class,” either to save her the distress or because the teachers are worried about it being unfair to the other students in the class. That wasn’t my approach. I had been thinking a lot about this stress approach, and instead my approach was, “Look, let’s separate the intellectual difficulty of what you’re doing from the logistical difficulty. The intellectual difficulty is you have to do an awesome final project that’s very impressive, that hopefully you can talk about in your job interviews, can be on your resume, and that you’re proud of. I don’t want to take that away from you. That’s why you took my class: to learn new stuff and do things that are impressive. Frankly, your mom cared for you and rooted for you throughout college because you were doing cool, impressive stuff.

So one way to honor your mom’s memory is to do a great final project in my class. Do I really care that you do the daily busy work that I assigned? No. That’s only there to help you get prepared to do the final project. What I did is I reduced the demands for the logistical stuff, like the busy work, and I was like, just communicate with your group, and whenever you’re ready, come back and then do your final project with them. She took two and a half, three weeks off and just kind of stayed in touch with her group, and then they did a fully kick-ass final project. They created this whole AI-based support to help teachers do empathic discipline rather than very harsh discipline. Three years ago, they did this before GPT was released, and then she talked about it in her interview, got this job for a major financial services group, and now is traveling the world on this rotational program, fast track for managers. She immigrated from Africa, is a very interesting young woman of color who is constantly trying to help improve society and culture.

I caught up with her a year later. I was like, “Did I do the right thing? Should I have just given you an incomplete?” She’s like, “No. Half my professors told me to take an incomplete, but then I couldn’t have graduated on time, and then I wouldn’t be in this financial services mentoring program.” That’s an example where, if you believe young people are capable of impressive stuff with the right support, you start thinking differently: sometimes you maintain the intellectual demand, the demand for the work that’s truly impressive, but the way you support them is to reduce some of the logistical demands. I think a lot of people mistake those two. They think being a hard-ass on deadlines is what it means to be demanding. But I think it’s having people own their thinking and contributions. That’s the demand. Deadlines are a means to get there.

Diane Tavenner: I love this chapter. The whole time I was reading it, I kept thinking back because you alluded to this in the beginning, David, but the first two times we met each other were arguably under very stressful circumstances that I would not trade, though. I mean, we were, in the first case, presenting our work to Bill Gates directly, and in the second case at the White House, presenting. If someone had taken those opportunities away from us, I think we would be very regretful. It was stressful. Those are stressful.

David Yeager: So stressful, but it’s stressful in a way where you have to bring your A-game. I think the challenge is to see it as a positive opportunity to perform at your peak rather than a threatening opportunity to fail publicly. When you take the first view, you’re still sweating, your heart’s racing, and you’re worried about doing poorly. But you also are like, all right, let’s go. It’s like being a good surfer on a huge wave; that’s how you want to feel.

Purpose and Belonging

Diane Tavenner: So, David, with our last few minutes here, we’re going to give you the tall task of talking purpose and belonging, which are very significant. I should say the end of your book pulls all of this into whole models and approaches. Tell us the key concept here of purpose and belonging in your work.

David Yeager: I think that, as you know, 10-15 years ago, those were not concepts people talked about in education reform. It was like curriculum and interests were probably the two biggest things. The idea of a meaningful purpose, that wasn’t around. I think Bill Damon’s work brought purpose to a lot of people’s radars, and I did a lot of the early randomized experiments, but even now, I think it’s not as well known. Belonging, for a long time, was thought of as this soft self-esteem boost. Everyone needs a hug from all the world’s friends. It wasn’t taken seriously.

I think the common thread across the two is that they’re super powerful, especially for young people who are trying to make it through the world, having a sense of status and respect. Purpose, because you want to contribute something of value to the world around you. Having a meaningful purpose, where something beyond myself is depending on me, is super motivating for young people. A lot of education gets that wrong because they just make an argument about making money in the future or using this lesson plan in a job in the future, or it’s a delay of gratification, a long-term self-interest argument. I don’t think that’s ever really going to work to drive deeper learning. But the idea that right now somebody’s depending on you, having mastered something and done a good job, I think that’s really meaningful. In an enforcer mindset, you wouldn’t think of that because you’d be like, well, they’re going to choose the laziest possible way to do things no matter what. The only way we can entice them to do tedious work is through rewards now, or delayed rewards later.

Belonging is similar in that now that it’s starting to get on the radar, more people are talking about it, but it’s still misconstrued. A lot of people think belonging is, “I’m going to give you a ‘You Belong’ sticker to slap on your laptop, and all of a sudden achievement gaps are going to disappear.” As I say in the book, you can’t declare belonging by fiat. It has to be experienced. One of the big things that has to happen is you have to help young people tell themselves a story of how difficulties could be overcome through actions that they could take. Then over time, they actually feel a sense of belonging in a community. I think that purpose and belonging go hand in hand because, going back to our evolutionary history, one way you know you’re valued by a community is when you’ve contributed something that the community perceives as important. I think there’s a lot more in the book and there are stories about how you leverage those two to get deeper, more lasting, meaningful motivations rather than more frivolous things like turning education into a slot machine.

I don’t think that’s going to do it. What’s more important is appealing to a deeper purpose, a sense of connection, a sense of mattering, and so on.

Diane Tavenner: That’s awesome. There is so much more in the book. I can’t recommend it highly enough. I hope everyone will read it and ping us with questions, thoughts, and what comes up for you. Maybe at some point, we can circle back and do even more on the other pieces when we hear from our readers what they think. Michael…

Michael Horn:  I was going to say the same thing. Just huge thanks first, David. Check out the book 10 to 25. I got a lot just from this conversation that has whetted my appetite, and I know many others will as well. Let’s circle back once we have some more fodder because I can tell we’re scratching the surface and you’ve hit these hot-button topics that, as you said, David, we sort of know there’s something there, but the full depth of how it’s understood is not there yet in the education field. I appreciate you writing this and joining us.

David Yeager: Absolutely.

Michael Horn: For all those listening, we’ll be back next time on Class Disrupted. Thank you again.

As Schools Push to Recover from COVID, Turbulent Days for Education Philanthropy /article/glad-im-not-a-fundraiser-right-now-exploring-uncertainty-in-ed-philanthropy/ Mon, 24 Jun 2024 16:30:00 +0000 /?post_type=article&p=728955 Class Disrupted is a bi-weekly education podcast featuring author Michael Horn and Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system amid this pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page.

In the final episode of the season, Michael and Diane welcome Stacey Childress, Senior Education Advisor at McKinsey & Co., back to the show to discuss the world of education philanthropy. Stacey draws from her previous experience at New Schools Venture Fund and the Gates Foundation to analyze troubling trends in the sector. The three discuss what funders and operators can do to grow philanthropic investment in education and better deploy those funds. 

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael.

Michael Horn: Hey, Diane. It’s good to see you.

Diane Tavenner: It’s good to see you as well. I think the unofficial start of summer has happened. I know that because I had a big graduation last week. My son graduated from college, which is quite surreal. It’s also the last episode of the season, which I can hardly believe.

Michael Horn: First, congrats to you and to Rhett on the graduation. It’s very exciting news. I can’t believe it’s the end of the season. We’ve had the chance to interview many interesting people, and we’ve particularly enjoyed having one guest back on the show.

Diane Tavenner: That’s true. I’m excited to reintroduce Stacey Childress. Regular listeners will be familiar with her. We originally teamed up for a two-part series on higher education and had so much fun that we decided to do it again for K-12 education. 

Hopefully, folks are enjoying those episodes. During those conversations, we had some off-the-record dialogue about a big topic in education right now, and we decided it was an important conversation to have. So, welcome back, Stacey. We’re thrilled to have you here. We’ve covered your credentials before, but today you’re really in the expert seat, having been involved in multiple aspects of philanthropy, which is the direction we’re going.


Michael Horn: Hi, Stacey. Thank you for joining us again.

Stacey Childress: I am happy to be here. There are two things I’m reflecting on now that this is my fifth episode in a row.

Diane Tavenner: Yes.

Stacey Childress: One, never say anything to you guys in an offhand way because it might become a podcast episode. Oh, we ought to do philanthropy, and now here we are. I’ve learned my lesson.

The second thing is, I feel like I’ve moved from guest to long-term guest, almost like we’re in roommate mode.

Changes in Education Philanthropy

Michael Horn: We’ll see. Diane and I are persuasive. Either way, thank you for joining us. We’re excited to dive into this topic of education philanthropy. As you both alluded to, it feels like the water around philanthropy and education is really churning right now. It feels different from how it has in the past. Maybe it’s my imagination, maybe it’s not. There was recently an article in Inside Philanthropy talking about the changing nature of education philanthropy, which struck a chord with us. Many of our listeners are running school networks, starting education nonprofits, or interfacing with donors. We wanted to dive into this important sector of the education reform movement to discuss how it is or isn’t changing and its implications for our sector. Diane, what did I miss before we dive in?

Diane Tavenner: I think you captured it well, Michael. Just a minute more on the philanthropy aspect. The article did a good job of capturing the feeling. In the conversations I regularly have with folks in education, whether in nonprofits, school organizations, or anywhere else in the ecosystem that relies on philanthropy for initiatives or operations, there’s a real sense of worry, stress, and fear. There’s a belief that there is less philanthropy available, and it’s unclear what is being funded, whether it will be there, and whether long-term philanthropists will stay in the sector. This is a big conversation happening all around. Stacey, you’re in this a lot. Many people look to you as a whisperer in this space. Is that capturing what you’re experiencing?

Stacey Childress: Yes, it is. There’s a lot of uncertainty. Michael, you asked if these foundations routinely change their strategies every five years or so. Is that what’s going on here? We can talk more about that trend, but this feels different. What I’m hearing from people raising money is not just the uncertainty of where we’ll head next and what priorities givers will coalesce around, but whether they will stay in this field at all or continue funding at the same level. If you were giving $300 million a year, are you going to pause and then go to $100 million instead of $300 million? That shift pulls a significant amount out of the philanthropy market. If you were giving $100 million a year, are you going to reduce that, and what are the new priorities? The feeling is different. I had a concentrated period of fundraising from 2014 to about a year ago, and it didn’t feel like this. We always shaped the priorities of the big givers, knowing they would do a strategy refresh, but we never worried about the money going away. In fact, we were confident we could bring more dollars in. It does feel different now, and I’m glad I’m not a fundraiser at this moment.

Michael Horn: Well, with that context, but also a bit of sobering context, let’s dive into the first question. Diane and I have a bunch of things we want to ask. Can you give an overview of philanthropy in education? What are we talking about in terms of dollars? To the extent you see it shrinking, can you quantify that a little bit so we have a sense of what and who we are talking about?

Overview of Education Philanthropy 

Stacey Childress: I feel a little exposed. You called me an expert, and I have to say two things about that. One, I have an arrangement with McKinsey & Company as a senior advisor. This is not informed by my work with them, nor does it reflect their views. This is Stacey Childress speaking personally.

Michael Horn: But let’s put it in context. New Schools Venture Fund, obviously raising dollars and giving, Gates Foundation, you were Next-Gen Stacey, right?

Stacey Childress: Yes, I was Next-Gen Stacey.

Michael Horn: Even with the book that Rick and I did, you wrote that incredible piece around the role of philanthropy in markets. So, you’ve thought a lot about this.

Stacey Childress: Yeah, I have. I just wanted to make sure that I know a lot. So I’m not trying to be falsely modest and say I’m not an expert.

I have expertise in this area, particularly in a very concentrated part of the space. But nothing I say today has anything to do with what McKinsey would say about this stuff. It’s not related.

It’s been a while since I looked rigorously at the shape and size of this part of the philanthropic capital market for education. I can tell you what I know firsthand and how that may or may not have changed over time. When I joined New Schools in 2014, it was the first time I had to raise money after giving it away. I wanted to understand with a lot of specificity what that philanthropic capital market looked like because I wanted to get more of it for New Schools. I wanted to increase our share of that wallet if it wasn’t going to grow.

At the time, education innovation and reform philanthropists were giving a little over a billion dollars a year. So it was about 1.25 billion dollars of philanthropy to things like charter schools, charter school networks, some ed tech stuff, human capital initiatives like Teach for America and New Leaders for New Schools, and similar projects. There was about a billion plus dollars in philanthropy.

My sense is that it stayed pretty stable the whole time I was at New Schools. Over about an eight or nine-year period, we stayed at about a billion and a quarter as a sector.

I’m talking only K-12 and only the innovation reform wing of funders. Think of Gates, Walton, CZI, Schusterman, Dell, and similar players. There’s a mix of East Coast, West Coast, and some middle-of-the-country folks. That group stayed at a little over a billion, with some comings and goings within, but overall about the same.

My sense is that that’s still true. It might have ticked down just a little bit, but I could be wrong about that since it’s been three or four years since I’ve taken a firm look. Even with the pandemic shifts, that’s still what we’re talking about here.

Now, it sounds like a lot of money, and I don’t want to diminish it. It is a lot of money, especially once you’re in the billions. The thing is, I learned this the hard way, but it’s something you learn as you go.

A big reality check for philanthropists, whether institutional or individual, is scale. Gates was, and I think still is, the biggest K-12 funder of this type. They’ve stayed in the 300-350 million dollars a year range, so a little over a billion every three years, somewhere around 1.2 to 1.5 billion. But public funding for K-12 education has grown from about 600 to 800 billion a year over the last few years. That’s roughly 1.8 to 2.4 trillion dollars every three years.

So you match up Gates’ billion plus dollars every three years against government funding for schools at over a trillion and a half. It’s vanishingly small.

It’s a lot of money, but in the grand scheme of things, not so much. The goal is to create the most impact possible in a sector that has enormous funding and is in vast need of improvement. How do you put those dollars to work in a way that, even though they’re small, they have an outsized effect on improving student outcomes, access to opportunity, and those kinds of things?

How do you get that wedge of innovation capital in? Diane, it looks like you’ve got…

Diane Tavenner: Well, Stacey, I think this is such an important point for this conversation because I want to make sure people know what specifically we’re talking about. I think you’re really zeroing in on that. This conversation is about philanthropy that generally isn’t funding ongoing operations. That isn’t to say that other philanthropy isn’t out there.

There are a lot of individual donors and people in communities who give money to their favorite nonprofit, schools, charity events, and galas. We’re not talking about any of that money here. We’re talking about a relatively small set of substantial foundations giving specific types of money for specific purposes, not for ongoing operations.

So let’s spend a minute on what those grants look like when that money comes in, and perhaps what they don’t look like, so people can be really clear.

Stacey Childress: Yeah, that’s great. So, yes, that segment of donors we’re talking about funds innovation. Whether it’s startups or existing organizations in this ecosystem, they fund innovation: starting something new, creating something new within an existing structure, or radically changing the way something is done.

Innovation capital and growth capital help when you’re on to something, have good results, and want to serve more kids, train more teachers, or expand your core business. This kind of capital can help you grow and do more in more places or with more people. The hope is always that this will lead to sustainability without ongoing funding beyond what you receive per pupil if you’re a school or a program that gets money through taxes for serving students, or through earned revenue.

If you’re more of a service-based nonprofit, you need to figure out who and what you’re going to charge to continue operating without a constant philanthropic subsidy.

Diane Tavenner: Yeah, we always call it growth capital. We would call them bridges versus piers. You’re not just building a pier out into the ocean; you’re building a bridge to something sustainable, hopefully new, better, and scalable.

Stacey Childress: Yeah, exactly. The size and time frame of these grants vary.

Diane Tavenner: Yeah, obviously, it depends on what you’re doing, but it’s rare for one donor to fund your whole need. If you’re an operator, you have to think hard about that because you probably don’t want that. It sounds easier to get one big check, but it’s actually good to have a mix of revenue or investment capital with multiple investors. This dilutes the power and governance of any one investor.

Stacey, you’ve raised a lot of money too. I like having several investors because it allows us to do what we committed to our donors without answering to one set of priorities or perspectives.

Stacey Childress: In this space, you’re usually looking at multiple donors to fund what you and your team want to do, whether it’s innovation or growth. These grants are usually three years.

Diane Tavenner: Sometimes.

Stacey Childress: Yeah, sometimes they stretch to five, but often it’s a year at a time. You do a little bit, get a little more, do a little bit, get a little more, which can be quite dynamic. There are expenses associated with this that aren’t necessarily yearly. You’re usually investing in people.

Diane Tavenner: Yeah.

Stacey Childress: To get good work done, so payroll is always a consideration. It’s a good discipline. Three-year grants were common. I had a very small number of five-year grants, which were amazing but hard to get. Very rare. A lot of one and two-year grants.

Diane Tavenner: A lot of ones and twos.

Stacey Childress: If it’s okay, I can put a little shape to this in terms of dollar numbers from my time at New Schools. We launched another fund while I was there. Between 2015 and 2022, I raised 550 million dollars, about half a billion, in seven or eight calendar years. Two hundred million of it was on five-year grants. For New Schools, the other 350 million had nothing longer than three years.

Stacey Childress: But we only raised that from about 15 donors. I had multiple donors, but still very concentrated.

Diane Tavenner: Yeah.

Stacey Childress: Any one of them stepping off would have been a risk, but we kept renewing them for almost nine years. The risk was always there that one of our multi-million, multi-year donors would decide we weren’t for them anymore, they were reducing their education spend, or they could do it themselves without needing us. It was a constant process of selling what we were up to and our ideas during the three-year terms because we always wanted to renew.

Diane Tavenner: I think it’s useful to reiterate that you raised all that money to give it away thoughtfully to operators. There are two groups: one raising money to deploy it to operators and another group, like me, raising money from both you and directly from big donors. It’s a lot in the weeds, but hopefully, it’s helpful to understand what we’re talking about. Michael, maybe we should return to you because you’re wondering if this is different from the past.

Michael Horn: I think that’s the question. When I was running the Christensen Institute and raising dollars, the Gates Foundation would change strategies every five years. Is the current moment different from other times in the field when we’ve seen similar shifts, or why are people asking these questions right now?

The Impact of the Pandemic 

Stacey Childress: Yeah, I alluded to this earlier. Let me get more specific about this current moment and the difference as I see it.

Michael Horn: As you perceive it, yeah.

Stacey Childress: Yeah, as I perceive it. Somebody ought to do a really good analysis of this, an actual bottom-up analytic project to sort this out.

But here’s where I think we are. The pandemic was an exogenous shock that threw us all for a loop and put us back on our heels. None of us knew what to do during those early months of the pandemic in 2020, trying to figure out how things would sort out. 

You know me. I’m generally an optimist, a sarcastic optimist if that’s a thing, but I really am an optimist. I always think we’re going to figure this out and things will work out.

During that time, I thought, this will be a wake-up call for all of us in philanthropy in two ways.

One, if we reflect back, are you kidding me that this is really where we are in March, April, May of 2020? We couldn’t even get kids learning at home effectively with decent digital content. I was devastated. I was next-gen Stacey at Gates Foundation, and we envisioned kids learning anytime, anywhere, in deep, rigorous, and engaging ways, and that learning should count even if it’s not in the classroom.

I still believe all that, but here we were, unable to do that on any kind of scale. There are lots of reasons for it, but I thought this would be a wake-up call because maybe we’ll have another pandemic, or at least the mindset shift to anytime, anywhere learning is valuable.

The other thing was, as a philanthropic sector, I hoped it would shake us out of some bad habits, or at least some standard operating procedures that don’t serve children or grantees well.

Michael Horn: Can you give a couple of examples?

Stacey Childress: I was part of two different coalitions of philanthropists that met often on Zoom during 2020, trying to sort out what we should be doing. A lot of energy and good intentions, but no principals, just staff people. Many were heartbroken, stymied, and frozen because their ways of doing business were no match for what was needed. They couldn’t provide the size of grants or the flexibility that operators needed to respond quickly.

Operators needed resources immediately, especially those with a vision for how to respond. Their current budgets didn’t allow for it, or they were doing something new and needed the money right away because kids were stuck at home, not learning.

I had off-the-record conversations where people said they couldn’t move fast enough or weren’t set up to respond quickly. I told them they could, but they had to lead and make the case to their principals or decision-makers. We had to throw standard procedures out the window, at least temporarily, to respond to the crisis.

Some institutions equate time with rigor, thinking a long process means rigor. But often, it means 15 people have to look at something, and it takes months when three people knew everything needed in the first month. Grants could have been made in a month instead of six or eight months.

I’ve seen this as both a fundraiser and inside the world’s largest education funder. Things just take too long, and I don’t see that changing. Some figured it out on an emergency basis but have reverted to standard procedures, possibly with new organizational charts and consultants. It still takes a long time.

With these shifts, Michael, people are getting stuck mid-process and can’t get good information about what happens next. The staff inside these institutions are unsure of what will happen next, trying to respond to their decision hierarchies, leading to stalled processes.

Stacey Childress: I know someone working on a multi-million dollar, multi-year grant that should be a renewal. There’s no unknown about the grantee or the work, but it’s stalled due to internal churn. They needed the money last month and thought the first payment would be made then, but now it’s stalled for another six or eight months with no visibility into what’s happening.

I feel like I’m rambling, but there was a moment where we could have shaken off standard operating procedures. It was clear that even with good ideas, we haven’t funded them at sufficient levels, smartly, durably, or for long enough to get where we need to go. Part of that is about how we do business. Could we take this moment to throw out old processes and reinvent them to be more responsive? We’re funding innovation and growth, but this isn’t how innovation and growth investing happens in other sectors of the economy. It’s just not. 

Sorry, I have one more thing to say about the pandemic lessons.

Diane Tavenner: It’s interesting to have this conversation, and it’s surprising to me we haven’t had it before. I’d love to share what I was experiencing at that time. Michael and I started the podcast because, like you, we were optimistic that the pandemic would create an opportunity. We hoped people would see what was wrong not only in philanthropy but in how schools were being operated, offering a moment for change. And here we are, season five.

Reflecting on it as an operator, everything you’re saying is right. People don’t understand how expensive it was to survive during the pandemic as a school system. The amount of money we had to spend on tests, masks, computers, hotspots, on everything, was immense.

I would argue that Summit was one of the best in the country at getting things up and running effectively, just as you described, Stacey. I had to make some tough decisions, extending ourselves and thinking the money would come in. Interestingly, the money did not come in from philanthropy, as it couldn’t cover the entire system. It came from the government, which moved pretty quickly, I would say.

One of the challenges is, and I’m a pretty savvy fundraiser, I didn’t know what to ask philanthropy for at that moment. We couldn’t innovate; we were just trying to survive. We had a lot of money flowing in from the government.

We did have one amazing funder, Arthur Rock, who came in within weeks, giving generously without a team or staff. His money allowed us to set up a mini-fund to help families in crisis, preventing them from being thrown out on the street, and ensuring they had necessities like a working refrigerator or internet access. It was immediate emergency cash for survival.

Stacey Childress: Yes.

Diane Tavenner: Thank goodness for Arthur enabling everyone who didn’t have internet to have a hotspot within days. But that was it. That was all that came through. Arthur has an interesting way of thinking where he doesn’t believe time will give him more information.

Stacey Childress: And he also trusted you to know the best way to deploy those resources. He trusted you and your team, and that’s another challenge. As foundation staffs get bigger, they hire smart people who become experts lauded for their knowledge. They’re less inclined to just give the money to someone like you and let you do what you need to do.

Diane Tavenner: Yeah.

Stacey Childress: Instead, they take nine or twelve months to put you through a process that yields no more information than they had at the beginning. I’m not insulting the people who work in these places. I have many friends and people I respect greatly. But the institutions and the culture create processes that are inefficient.

Diane Tavenner: Same with schools, right?

Stacey Childress: Right. Same with schools.

Michael Horn: I remember this from over ten years ago. Giselle Huff was frustrated that they would hire people like you and not give you the autonomy to move quickly. It’s an organizational issue, not the individuals per se.

The bigger issue I’m hearing is that the pandemic didn’t break these tendencies; it exposed them. It created an existential crisis internally where people questioned their identity and purpose, leading to more pause and churn. This indecision has created a lingering hangover.

Stacey Childress: The hangover is still here. Gates might be an interesting exception, which I’ll come back to. Many institutions faced a crisis in the first months of the pandemic, realizing that what they’d spent years and billions of dollars on hadn’t made the progress needed.

For institutional funders, there was a sense of, “What did we get for it?” The principals, whether trustees or living donors, were asking good questions but not getting great answers from teams trying to figure it out and not wanting to be wrong. There was a fear of going back to donors like Bill Gates, Mark and Priscilla, or the Walton family with another failed initiative.

Giselle went to the president of the Gates Foundation a year after I was there and asked why they hired me but didn’t let me spend my budget freely. I wished she hadn’t done that, but it highlighted the issue. What are we waiting for? Who do we think will come up with a better answer? Where’s the boldness that created the wealth in the first place?

Shifting Strategies 

Michael Horn: Yeah, that’s a really interesting point. Let me ask the question this way: I’m hearing from a lot of nonprofits, and I sit on boards of nonprofits, that it’s as bad as it’s ever been. We’ve seen a bunch go out of business or be acquired for virtually nothing.

Maybe that’s what should have happened, I don’t know. But it seems different in many ways.

Another question I have is about the shifting strategies every five years and the churn you’re describing. Education is a space where change isn’t going to happen across the country in five years. This is a big, complicated 50-state country with lots of challenges that interfere with the operations. It’s messy. There’s a huge installed base.

Are we guilty of impatience, not just sticking with a good theory of action? Or is something else going on?

Stacey Childress: Yeah, yes.

Michael Horn: I didn’t mean to ask a one-word question.

Stacey Childress: No, I know. I was recently talking with someone from one of the large institutional donors. This person joined relatively recently, post-pandemic, and had been an outside observer and fundraiser from this institution. They had an insight that rang true for me: we’ve got a theory of change for what should happen in the sector over many years, but it’s not very rigorous or periodically examined with any rigor.

It’s shaped around the personality of the donor and some senior staff preferences. It sounds fine, but then we’re applying a lot of rigor at the individual grant level, creating 47-row outcome trackers for 18-month grants. We spend months creating these, and every quarterly call with the grantee digs into line items.

But there’s no intermediate view of how the ecosystem around these grants is doing because we’re not clear about what those are. We’ve got four or five areas we’re willing to fund, but even then, we’re not looking at the portfolio. We’re not seeing how individual grants add up to those areas.

So, big idea, not a lot of rigor around developing it, and then intense rigor at the grant level. My time at Gates wasn't quite that loose, but there were features of it, especially the one-at-a-time approach. It often meant lots of people, lots of rows on a spreadsheet, and many conversations, but that's not true rigor.

You spend five years and have three model grantees to show the principal, but you've spent $800 million or more. The pandemic opened up good questions for which there aren't good answers yet.

Gates narrowed its focus to math, committing $1.2 billion over three years. This isn't an additional billion; it's their regular funding but focused mostly on math. This narrowing means if you were funded by Gates before but aren't focused on math now, you're out. This has led to many organizations no longer fitting into Gates' funding categories.

Diane Tavenner: Yeah.

Stacey Childress: The downside is if, after three or five years, they can't achieve what they want in math, then what? We've been through system-wide transformation, charter schools, standards, teacher systems, next-gen schools, and now math. If they keep switching every three to five years, what's next?

Michael Horn: Right.

Stacey Childress: If the next cycle doesn't work, they might consider an exit.

Diane Tavenner: Yeah.

Stacey Childress: I know what I would do, but in that institution, now 25 years in, by the time the math cycle ends, they'll be 26 or 27 years in. Now what?

Michael Horn: That makes sense.

Stacey Childress: People worry the “now what” will be an exit.

Diane Tavenner: Yeah, that's what people are worried about, for sure.

Stacey Childress: And Gates isn’t the only one. I use them as an example because it illustrates the issue cleanly.

How Operators Can Help 

Diane Tavenner: Everything Stacey is saying resonates with me. Michael, what I’m thinking about a lot is our conversations about innovation. If we go back to the top of this conversation, this is philanthropy for innovation.

I won’t go into the long history we’ve had of trying to innovate within a giant, decentralized system because that is a massive challenge. What you’re talking about, Stacey, is how does anyone tackle that? Clearly, no one can tackle that entirely, so we start to narrow our focus and aim to be successful at something specific. 

I’m not going to quibble with focusing on math because, in the work I’m doing now, I see how critically important it is for the future of the workforce and the country. However, that’s probably not going to transform schools in the way the three of us want them to be transformed.

This creates a sense of angst for me because most schools in America are just doing the same old thing. They’re taking federal, state, and local money and running the same schools, with no real prospect of change.

For those of us who believe change should happen, what are the levers? How does this relatively small amount of money create the change we want?

Stacey, as we were talking through this, you mentioned a list of things you want funders to do. I thought of a list of things I want operators to do: those who want to innovate and raise philanthropy to do it. It's worth spending a moment on that because I think there are two sides to this.

There are things that operators, like myself and my peers, need to do to be compelling and retain capital in our space. If you’re not doing compelling, interesting things, your projects aren’t going to get funded.

First, I’ll call it “getting your conditions in order.” This refers to work done by several people, including folks at the Gates Foundation years ago, and more recently, Transcend has partnered with others to define the conditions of an organization ready to innovate. Michael, you and I talk about this all the time. You need structures and mindsets to be able to innovate. Use the available tools to ensure you have the right conditions. If you’re trying to get innovation money without knowing if your conditions are in order, you’re not primed to raise money.

Second, do you actually have innovations that others aren’t working on that could potentially move the needle? You need to understand the field and what others are doing to ensure your innovation is truly unique and impactful. This requires discipline and hard work.

When you do this, you earn trust and face less scrutiny because it becomes apparent that you’ve done the groundwork. Lastly, I have always tried to see this as a collaborative venture rather than a competitive one. My experience is that many operators fall into a competitive mindset, seeing funding as a zero-sum game. This competitiveness is counterproductive because no one can do this alone. Acting more collaboratively could attract and keep more money in the innovation space and sector.

That would be my wish list for operators.

Changes Funders Can Make

Stacey Childress: That's very good and definitely rings true. As an operator running a fund and having to raise money, I share your perspective. You mentioned visionary leadership, and both words are important in fundraising: a vision you can articulate clearly and compellingly about what the world should look like if it worked better for young people. Lead on it. Don't wait for a funder to have a strategy you can fit into. Lead.

Spend time socializing that vision with other operators and donors. Donors will follow a compelling vision and leadership. You and I have both seen it happen and have caused it to happen as leaders.

For the donor side, the first thing I wish they would do is just give away the money.

Diane Tavenner: Yeah.

Stacey Childress: Remember the fundamental purpose of what you’re organized to do and what you’re given significant tax breaks for: to give away the money. You’re not organized to have internal meetings, PowerPoints, memos, politics, reorgs, and conferences. Those things can help your aims but can also distract from them. Give the money away. That’s your whole job, not the coalitions and communities of practice. Those should support moving the money. 

It sounds silly, but it’s frustrating. Your whole job is to give the money away. Increase, not decrease, your giving now. What are you waiting for? If not now, when? There’s not one answer; there are many. Fund them, learn from them. Stop with the 47-row spreadsheet metrics. 

Fund the people doing the work, listen to them, believe them, recognize patterns, and fund lots of things. More gifts, bigger gifts, right now. Go. What are you waiting for? Go. Make decisions faster. 

You’re not going to fund everything. Say yes fast and no faster. As soon as you know it’s a no, tell the operator. You can’t imagine how much time and energy is spent waiting for a yes. 

Diane Tavenner: And say no fast.

Stacey Childress: Say no faster. It’s not the last day of the process that you decide no. As soon as you know it’s no, tell the operator. They spend so much time waiting for your decision, having conversations with their board and other donors, making plans. Time is huge. Tell them no fast. Yes fast, no even faster. If your processes get in the way of that, rip them down.

Diane Tavenner: Yep.

Stacey Childress: Do something different and do it now. One of the reasons this animates me so much, beyond the obvious good of getting the money into the field and letting smart, intelligent, visionary leaders and their people do what they can with it and learn from it, is that for donors who have, say, over a billion dollars in net worth, their fortunes are growing faster than their lifetime philanthropic commitments suggest they will get the money out the door. 

A few years ago, when I was in a fundraising cycle and was counting on a donor to come in at a certain level on a renewal, I got the sad news. I was trying to get tens of millions and got multiple tens of millions, but not as much as I had hoped. It was an enormous grant, something to celebrate, but I was disappointed because I had planned for more. Silicon Valley is like a neighborhood, and the donors all talk to each other. Many of them talk to me, and I knew that this person was at cocktail parties and other gatherings saying they had a billion dollars in their donor-advised fund at a community foundation because they couldn’t find enough good things to fund, including education. And they had just given me multiple tens of millions.

What are you waiting for? When I first joined New Schools and was figuring out the investment footprint before we did a specific strategy, I realized that what we had wasn’t working. It was a quiet secret in the field. The theory had run its course, and New Schools had been struggling to raise money for a couple of years. It was time to rethink it.

Someone who was a contemporary of Vinod Khosla, a Silicon Valley venture capitalist, told me that when he first became a VC, he realized something new was coming from closed network systems. It had to do with packet switching and the internet. He convinced his partners at Kleiner Perkins that they needed to fund everything in these nascent categories because they didn’t know who would win. They backed great teams and more than one in each category. This humble approach, funding lots of things with a vision for how the industry would change, led to massive financial success.

From where I sat at New Schools in 2014, I felt like we were in a similar moment. We had glimpses of what the future could look like for kids, and our strategy was to push everything onto the table for this vision. Rather than trying to find the answer, we should take a broad view of the space and fund every good team and idea.

Stop thinking that you have all the answers inside your foundation. Most of the smartest people don’t work for you. Fund, learn, and fund again. Give the money away.

I wish people would do more with intermediaries. If I were the leader of a foundation with $350 million a year to give away, I would convince my principal to give $300 million to four or five grantees in large chunks, and those would be intermediaries. I would have a staff of no more than 10 people, each managing relationships and helping us learn and adapt. Intermediaries offer leverage, expertise, and nimbleness.

Follow MacKenzie Scott’s example: big gifts, unrestricted, lightweight process, fast decisions, little to no reporting requirements. It’s not perfect, but it gets the money out the door.

Diane Tavenner: It is.

Stacey Childress: Yes, it can be tough to figure out how to get in the pipeline and some transparency issues, but those challenges are far outweighed by getting the money out the door. Let’s do it. Get that money out the door. If not now, when? Be honest with yourself. What are you afraid of from going big and visionary and moving lots of resources quickly to people doing important work?

Michael Horn: Well, Diane, as we wrap up five seasons here with our final episode, I think we finally had our Jerry Maguire moment. It’s no longer “show me the money,” it’s “give away the money.”

Stacey Childress: Give away the money.

Media Recommendations 

Michael Horn: And Stacey, you have nailed it. So with that as a segue, as we wrap up an episode, I’ve learned a lot from both of you. Thank you both. Let’s finish up with some things we are reading, watching, or whatever. Stacey, we’ll call on you first. Hopefully, it’s not Jerry Maguire, but if it is, we understand.

Stacey Childress: It’s not Jerry Maguire. Sadly, I’m still watching and listening to heartbreaking, disappointing Astros baseball, but hope springs eternal. 

A new thing: there’s a relatively old, about 10 years old, documentary on Prime Video called The Wrecking Crew. It’s a deep dive into a loose group of studio musicians in LA in the ’60s and ’70s who backed 60-70% of the big radio hits of that era. They backed artists like the Righteous Brothers, the Mamas and the Papas, Sonny and Cher, and the Beach Boys. The Beach Boys performed live, but The Wrecking Crew played on their studio albums. They were behind so many iconic songs. It’s fascinating.

Diane Tavenner: Well, this is what happens when you have an episode with two of Stacey’s passions: philanthropy and music. It’s so exciting. I agree with everything you’re saying. I hope it happens because I feel like we’re at an early 2010-2011 moment again. I hope people jump on and in. No one else in the world is ahead of us yet in redesigning their education systems. We have an opportunity in America right now, and I’m deeply optimistic.

I’m reading an early advanced copy of 10 to 25, Dr. David Yeager’s new book. I love him. He had such an impact on our work at Summit. He’s an amazing researcher who connects research with actual work in schools. The book talks about a mentoring mindset, a continuation of the growth mindset. It’s incredibly powerful and will be out in August.

Michael Horn: You’re going to have to dig in then. That sounds exciting, Diane. I’m glad you’re reading it. I’ll just wrap up mine. My kids went away for their outdoor nature’s classroom for a few days, so my wife and I went to New York City and saw a couple of shows. We saw Merrily We Roll Along, which I highly recommend, and Enemy of the People. Both were terrific. 

Like you, Stacey, I’ve been watching a lot of sports, but the Celtics are having more success than your Astros. I recently finished Outlive by Peter Attia. It was great, with a few new tips, some things I already knew, and a lot of common sense.

Stacey, thank you for joining us and enlivening the last five episodes. We’ll see where that goes. Diane, as always, thank you for the partnership. For all of you listening, thanks for joining us for five full seasons of Class Disrupted.

Reimagining School: An Expert's Take on Unbundling the Core Education Experience /article/podcast-expert-stacey-childress-talks-about-rethinking-the-way-we-teach-and-evaluate-students-and-unbundling-americas-education-experience/ Tue, 04 Jun 2024 21:30:00 +0000 /?post_type=article&p=727885 Class Disrupted is a bi-weekly education podcast featuring author Michael Horn and Summit Public Schools' Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system amid this pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing on , or.

Michael Horn and Diane Tavenner welcome back Stacey Childress, Senior Education Advisor at McKinsey & Co., for the second episode of a two-part series on the challenges facing K-12 education and promising strategies for addressing them. In this episode, each of them makes the case for one high-impact reform to address the challenges laid out in the previous episode. They discuss reforming how schools evaluate and recommend students, unbundling the core education experience, and doing more to instill character and values through education.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Hey, Michael and Stacey. Wow.

Michael Horn: You got to say hi to both of us. This is fun.

Stacey Childress: Hi, Diane. Hi, Michael.

The Two-Part Series on K12

Diane Tavenner: Good to be back together with you two. This is part two of a two-part episode the three of us are doing together. The premise for this episode started when we did a two-part episode previously around higher ed, and some of our devoted listeners and folks said that they enjoyed it so much, and they encouraged us to do something similar for K12, which we are doing. So this is our second episode, and it’s so much fun to be back together with the two of you.

Michael Horn: Hopefully, our listeners are not regretting that request after listening to the first part, but we’re going to be briefer this time. It’s our resolution.

Stacey Childress: Yeah, we even wore ourselves out on episode one of this series. So, yes.

Diane Tavenner: Just to remind folks, if you haven't heard it, part one identified the core elements of the K12 system and then the problems with them right now. That's all to lay the foundation so we could propose solutions. Since we recorded the first problem episode, we've had some good conversations, the three of us, and really pressed each other about how we wanted to approach solutions. We ran through a bunch of different options. But the one we got most excited about, and where we ended up landing, is not trying to go through a laundry list of all nine elements. Because it's expansive; if you listened to the first one, you had to hang in there for quite a long time with us. We decided that we would each pick one of the nine to work on solutions for. And it turned out we all picked different ones.

So I think the approach we’re going to take today is to make our case for why we would try to solve the element that we’re picking, how we might solve it, and what solutions might be in the world already that are attempting to solve it. And in that, is there a way to unbundle it from the others to make it more possible? The other two of us will react to that and see if we have anything to add. Does that sound right?

Michael Horn: Let’s go forward with that as a plan. Diane, you get to go first, so you model what this looks like for us.

Diane’s Proposal: Reforming Schools鈥 Evaluator-Recommender Role 

Diane Tavenner: All right, well, I’m happy to go first. I suspect some folks might be taking some bets right now on which of the nine we chose. I am going to pick what was item number six in our first episode, the evaluator recommender. Let me just start by saying I think there is a huge opportunity. You both know I’ve spent the last several years trying to figure out what I want to do post-Summit. As part of that exploration, I’ve been searching for what I think is the greatest lever we have for change in the K12 system. I keep returning, sort of sadly and reluctantly, to assessment at the big level. I am attracted to this category because I think it’s a huge opportunity.

I also think it's one of the easier things to unbundle from the rest of the K12 element list. I know that probably sounds counterintuitive to a lot of people because how in the world could you unbundle evaluation and recommendation? But I think with a mindset shift, it becomes pretty doable. Let me unpack three ways that I think we could do that and then share the mindset shift that would have to happen. First, within the evaluator-recommender element, schools write these recommendations for colleges. There's a huge expectation from higher ed that high school teachers and K12 will put in substantial effort to make recommendations of students. As Stacey pointed out in our last conversation, that's for a relatively small number of students, but it takes up a huge amount of energy and time from people. I think the way to decouple this in K12 is to just stop having higher ed ask for recommendations as we know them, which are these letters. The most offensive part is the questions you have to answer as a recommender, like, "In what percentage of your lifetime experience with students does this student fall? Is it in the top one, top five?" I see you, Michael, leaning in because…

Michael Horn: This is the worst question ever.

Diane Tavenner: Worst question. Anyone who knows about the way our brains process will know no one’s capable of doing this in any unbiased way. It’s got to be the worst data ever. I don’t know why people keep asking for it. So, anyhow, I think do away with that. My invitation to higher ed would be to rethink how you’re doing admissions because, by the way, you should just rethink that to begin with. There’s better ways of doing it. And stop putting this extraordinary amount of work on K12 that is super biased and probably not helpful.

You’re probably not even really factoring it into your decision. What I would offer in exchange is, if you have to do something, do reference checks once you’ve already decided. Mirror the professional world: once you’ve already decided that you want to accept this student, if you want to do a reference check, great. Make it a simple, straightforward call-up reference check. I’m sure we all do reference checks regularly for former employees, and it can be very efficient. It would take far less time, it would be far less biased, and I think that would be a strong way to go and a change that could be made quite quickly and efficiently. I think it would be greatly appreciated by K12 on multiple levels and take them out of that role. The next thing is grades. As you all know, I have long believed that teachers should not be asked to both teach and coach and develop and grade their students for external reasons.

Diane Tavenner: Let me offer how you would provide students grades or feedback if not by their teacher. Step one: technology is actually pretty good at a lot of this, and with AI, it will get significantly better. It’s already getting significantly better at this. Put as much on technology as we possibly can. For a decade-plus, we’ve been doing this at Summit, and there’s people doing it all across the country. This is not out of reach. This is totally happening and possible and getting better every single day. Do as much there as possible.

I would argue the only type of grading that teachers should be doing is if it is a combined part of their professional development where they’re growing and developing their skills of teaching. There’s a whole methodology here, been doing it for 20-plus years around calibrating your scoring and then doing that in a group scoring. The more we have high-quality curriculum, which I expect might come up in some of your proposals later, the more the world is going. You have common assignments that this can be done around, which is a win-win for everyone. You have other teachers who are providing the actual scoring of your students. It makes the whole system better and a learning system. I think those are very possible, doable changes that could be made fairly easily and decoupled from most of the other elements.

Diane Tavenner: The final piece is around the high school diploma and the transcript. Here, a lot of people are working on a vision where the student is the keeper and the owner of their own transcript. I think this makes so much sense. More and more every day, students are learning from multiple institutions and multiple places. This is such an antiquated notion that you would go to one institution and have this transcript there. If you look at kids’ high school transcripts now, they’re already including community college and other types of institutions on those transcripts. The mindset shift is that the student is the owner and keeper of their transcript. Again, technology is our friend here.

It can be used to make sure this is validated, true, honest, and that they have the world of learning opportunities available to them that get integrated into the transcript. They control where it goes, who they share it with, and who they give it to. It’s very similar to a portfolio model and very complementary to a portfolio. It’s just the right way to think about young people and even older people having agency and self-direction around their own learning and how they’re driving it, and then what they’re sharing with the world. My last piece on all of these things is it focuses us more on evaluating the quality of the work that people have done versus someone else’s evaluation of who knows what. That’s my proposal. What do you all think?

Discussion of Diane’s Proposal

Michael Horn: Stacey? I'll jump in first, and then you can tee off there. We'll flip the order a little bit. No surprise, Diane. I love peeling this off from the rest of the enterprise. We've talked about this before. I would think about it conceptually almost in reverse order, in the sense that particularly grading and things of that nature should come before the reference checks. When you started with reference checks, I thought, that's a lot harder for colleges to do for 18-year-olds than we might think. But if we flip the order and start with the system where the student is the keeper of their record, they're having their performances and accomplishments validated by a range of individuals: teachers from other districts, professionals themselves, maybe actual projects for companies and organizations.

There’s real importance to what they’re doing, not pretend, but real. There’s an incentive for those professionals to give feedback on it. Using technology to help with inter-rater reliability, making it translatable, and so forth. The application then comes into a college, and they can trust it. They can say, “I’d love a double click on this.” You have a team around you of folks that have worked with you. So, I know who to call. When I imagine it almost in that way, then I start seeing how this hangs together even more.

I would offer just one last observation on this. You all know I’ve long been fascinated with Western Governors University in the higher ed world. They have a whole separate faculty who is trained just in the art and science of assessment. When you haven’t mastered something yet in their competency-based model, you don’t blame the teacher because the teacher who assessed you does not know you. To your point, Diane, it just seals that thing. They’re not evaluating something about you as the individual or a bias or whatever else. They’re just looking at the work. We can have multiple faculty members who are trained in assessment looking at the work to make sure it really represents what a great performance does or doesn’t look like. Stacey?

Stacey Childress: Yeah. I like flipping the concept of evaluation and recommendation on its head as well. I resonate with moving to a world where a student is the keeper of their portfolio of learning experiences and the evaluations of those. I wonder about which actor in the ecosystem is the keeper or provider of this different construct. Is it like at Western Governors University, where it’s still in-house, but we’re staffed up differently in terms of expertise, roles, etcetera? And in the K12 system, maybe think about the system more granularly or modularly. How does this look in the early, elementary to middle school years, and then how does it start to shift in middle school? Maybe it’s fully from an outside partner in high school, where we need to see the supply of partners who have the tools in school districts that have this kind of expertise. It doesn’t have to be built inside the system. That probably increases the validation, credibility, and legitimacy of the credential as it then goes on to the next steps in education and preparation. Diane, I’m not sure how you were thinking about that, but it’s an interesting idea to think about. How does the ecosystem shift as kids get into their teen years on their way to graduation from high school in a way that creates an opportunity to introduce new players, new expertise, and maybe increases the validity and credibility of the signal to the next step on a kid’s learning journey. But just wondering how you were thinking about that.

Michael Horn: Yeah, I was going to say quickly, quick clarification, then I want to hear Diane’s answer. You raised a good point. Western Governors would be better, in my mind, if it was an external entity playing that role. I think the reason why at the higher ed level we can’t get to competency-based education and replace paying for seat time is because no one trusts that the institution is going to fairly evaluate itself for learning. I think they’re right not to trust that when dollars are at stake. The more unbundled this can be, the better it is. Diane, you can give the more thoughtful answer, though.

Diane Tavenner: Well, no, that’s super thoughtful and pulling strings from both of you. One of the things I love about this proposal is I think it helps us start to unbundle the role of the teacher, which is something we have all been talking about for a decade-plus at this point. There are people who are amazing at assessment, and they love assessment, and they think about assessment. You could unbundle those roles within an institution. That would be one way. Like you, I like it even better across institutions. When we talk about a common high-quality curriculum, it doesn’t make sense anymore for an individual teacher to be writing and developing their own individual curriculum. We should be using high-quality curriculum that is across institutions.

There’s a huge opportunity there for people from different institutions to be evaluating on the same projects, the same work, etcetera, across institutions. I do think, and I’m personally involved with a number of them, some I can speak about, some I can’t, efforts are underway to build nonprofits and for-profits that have the ability to do these evaluations. The ones that I think are most exciting are on-demand for students and families. No matter where I’m learning, I’m able to go to a place where I can validate the skills I have, the knowledge I have, and the work that I can do. That way, I am not handcuffed to my zip code and the one institution that may or may not be gatekeeping me on multiple levels.

What this does to the psychology for families and students about what’s possible, it undoes so many of those negative effects we were talking about yesterday in these other groups where the system is not actually doing what we wanted it to do. We’re not going to touch on that particular element today, but I think we are because this is a powerful solution to fulfilling that number nine, that dream, that promise. If you work hard and drive your own learning, there are ways that you can show that and truly benefit from it.

Stacey Childress: Yeah. I love that.

Michael Horn: Should we dive into the second one?

Stacey Childress: I think it’s probably an interesting segue into my choice, which was number one, just that core education experience. It was at the top of my list. If I had to pick from our 17 or 82 on our list, however many there were. Twelve, nine. So, just to remind folks, this is like, when we think of school, we think of these things, right? It’s the core educational experience. Historically, it started with the three Rs: reading, writing, arithmetic, and lots of other subjects have been added over time. It includes the strength and breadth of the academic program and the social learning. It’s different than social-emotional, but like, how to be part of a community, what’s it like to be in a group, in a class, in a team, your people. It also includes those social aspects of managing yourself.

Stacey’s Proposal: Unbundling the Core Education Experience

Stacey Childress: On top of that, extracurriculars, sports, interest-based activities: all of those experiences we consider part of the education of our kids. We said one challenge is that what we teach and how we teach it is often not aligned to the current science of learning: what we know about how learning happens, what makes for a good, integrated set of learning experiences, and also toward what end. Our second challenge is a lack of vision and purpose. We have these large cafeteria menus at high school and a broad waterfront of concepts, skills, and topics that we ask elementary schools to cover. But the “to what end” has gotten lost over time as we’ve added more and more. That was one of our main critiques.

Following our model here, I thought first about whether this core academic function could be unbundled. Diane, you started to talk about how unbundling the evaluation and recommendation piece might open up more opportunities to start unbundling the actual core educational experience.

If you were able to demonstrate your learning outside of the mandated tests at the school or state level, maybe you could have more options for how to get that learning, how to experience it, and prove it to an outside provider. Another thing that would have to shift is policy, which was number five on our list. Policy would have to be in play to create some of the shifts we see. Along with evaluation, funding policies would need to shift. There are efforts in states about this, which can be quite controversial and politicized. But for unbundling the core function to work at any scale in a community or region, along with the evaluation function moving to something external, the dollars would have to come to families. Not just follow students to their chosen place, but actually be in the hands of families to spend on educational services.

These types of programs, such as traditional voucher programs and education savings accounts (ESAs), usually fund a bundled school experience. They are not driving the unbundling of the core educational experience in any way. I am an informed, interested observer, but because these policies are not driving that unbundling now, it makes me wonder what would have to happen. It also makes me a bit skeptical that these policy solutions will lead to an unbundling of the core experience.

Let me say a little about why I think that is. There’s a bit of a chicken-and-egg situation. There aren’t sufficient choices for families to take advantage of in core educational opportunities. That includes the core academic experience and character-building experiences, the social learning aspect. Even if I got my money directly from the state, I don’t have enough options to spend it on in sufficient quantity to choose among them. I am likely to choose a bundled experience that is better than what I had but may not allow me to unbundle.

Unbundling shifts a lot of non-financial costs to families. If I don’t have that bundled experience to go to, I am responsible for putting things together. I might not have the time or interest in doing that, even if I do have the resources. You can imagine other providers growing up that could play that orchestration or concierge role among some online experiences and some local, regional, and state providers. That’s super interesting. The biggest barrier is it flies in the face of our concept of school as the place we go, where our kids go, and where we get everything we need or most of what we need. But there’s something compelling about the idea.

As more choice options emerge in states where there is a financial and policy component, the long-term aspiration becomes clearer: unbundle evaluation, unbundle the money, and put incentives in communities for options to arise that are based on the science of learning and clear about the vision they’re educating toward. Maybe it comes in chunks: maybe I’m not getting reading here, math there, and character here, but I’m getting those bundles from a provider, and I also have options for sports leagues, which already exist. A lot of sports leagues, children’s theater, and those kinds of interests and extracurriculars show much more promise.

What does that hybrid look like, where we’ve got some bundles validated with the science of learning and an external evaluator? I am more optimistic and less skeptical about that. So, that’s my unbundling piece in the bundled environment. I think we’re seeing some interesting things. Diane and I are on the board of an organization we helped start called Transcend Education. We worried about communities not being engaged in the vision of schooling. Transcend has this amazing process that takes whole communities through creating or unearthing the values, wishes, dreams, and intentions of a community about what an educational experience should aim for.

They have built expertise around processes for being on a journey of reinventing your schools and your system of schools in ways that align with that vision, so schools and districts aren’t on their own trying to do that piece. It’s still a bundled experience. Take the work they’re doing in Texas with lots of districts, for example, Aldine Public Schools, which has 60,000 students and 80 schools, where 90% of the students are economically disadvantaged. There’s this beautiful community-wide process with the help of Transcend as an expert partner.

I’d love to see more Transcends, more capacity for Transcend, and more Transcend-like organizations that can work with systems and schools in their communities. We still need more opportunities for school creation. Diane, you know this better than any of us. When you can have that conversation with a community and create a new school that lives into that vision, is based on the learning science, and isn’t trying to do everything but has agreement on the core things they will do across core academics, character building, and interest-based activities, you’ve got a lot more likelihood of achieving coherence.

I am distressed by the reduction in new school creation around the country, among both philanthropists and policymakers. In the last 20 years, and even in the eight years I was at NewSchools, we helped enough new schools come into existence to serve as many kids as the San Francisco Public Schools and the Boston Public Schools. These interesting models meet community needs, create great results for kids, and have more ability to do it because they’re not burdened with the layering that has gone on over the last 100 years or 40 years or 30 years. I’ve been talking for a long time, so I’ll pause. But we need a vibrant mix of opportunities, so more unbundled services can arise, so districts can undertake this with expert support, and so we still have new schools opening up that meet these aspirations and provide examples of what’s possible while serving their communities.

Discussing Stacey’s Proposal

Diane Tavenner: Wow. There’s so much in there. Let me try to pull out a couple of things. I resonated with all of it. One thing I feel is this tension for families. When we talk about family choice and parent choice, there really is only choice at the bundled school level for the most part. That’s as far as we’ve truly gotten.

It’s like you can either pick a whole school for your child, or you can be a homeschooler family. In that case, you’re responsible for everything. Over here, you still have to curate a lot because the school doesn’t generally work in the summer, so you have to curate the summer. Oh, by the way, the holidays don’t match your workdays. The school option feels a little more steady, but even that is very limited choice in my mind. I love that you’re proposing a more doable choice if it’s on a continuum, something more in the middle with this concierge model, these new entities. I think this is an interesting space for new entities to come into where they have a different mindset.

They want people to be able to assemble what works for them and make that easy and doable, without putting the full burden on a parent. Most parents I know have spreadsheets to try to manage summer experiences alone. By the end of summer, I was exhausted. Just put me back in school, even though it’s 8:00 to 3:30, because at least that’s consistent, except every other Friday and the holidays, whatever. You know my rant about this. I love that idea paired with ESAs. These are very controversial right now because they’re happening quickly. I think we’re up to maybe eight.

Michael Horn: 14 or 15 states, I think.

Diane Tavenner: Okay, that have these in motion. There’s probably another ten that are working on them.

Stacey Childress: Texas will likely happen this year.

Michael Horn: Yeah, exactly. There’s a bunch that failed last year, but after the primaries, it will likely pass.

Diane Tavenner: There are people from multiple sides of the political spectrum who don’t like ESAs and are working hard against them. The two primary arguments are, one, accountability (how do we ensure kids are getting quality education, which we all care deeply about) and, two, adult reasons. They don’t want money going away from the system, which is sometimes the largest regional employer. There’s more to it than that. I’m not being nuanced, but you know what I’m saying. They’re not thinking about what’s good for families and kids. These systems are far from perfect. Policy is very difficult to write. I don’t want to throw it out because we have a couple of egregious examples of someone using their ESA money to buy a big screen TV and claiming they were showing their kids learning content on it. Not awesome. That’s not the kind of thing we want. We need to learn how we can help people spend this wisely. We need significantly more supply of good science-aligned options and help for them to assemble those options to really take advantage of it.

I hope we can keep moving forward and make this better versus trying to rip this system out. I think we had this intuition, when we said we were only going to talk about three topics, that we’d end up touching on many more. What I love about what you said is that this vision contributes to the mixing of people, socioeconomic mixing and political diversity, which we’re concerned is not happening right now.

A lot of people get afraid when people want to talk about school choice. They’re worried it’s going to cause more polarization. I think this approach has people doing more mixing because you are picking and choosing and engaging with other people. It goes to that big societal intention and hope of our system if we can stick with it and figure it out. What do you think, Michael?

Michael Horn: Yeah, I agree with what you just said. I’ll unintentionally come back to this when I tackle my lever. On the mixing point, when you have dollars that can unbundle the school experience in the way described, you lower the stakes on picking the thing. My guard comes down. I’m less worried about the mix of kids around me and the parents. It becomes a more optimal choice to pick something different now in these different experiences, and that contributes to what you just said, the different mixing.

I wrote a piece on how we shouldn’t expect a great unbundling right away. In all markets, customers initially prefer highly proprietary, interdependent bundled offerings because they don’t yet know their preferences and the customization they want. We don’t have much experience as a society, for the most part, outside of homeschoolers and increasingly hybrid homeschoolers, in picking and choosing and thinking outside of a school frame of reference.

It’s not surprising, then, that if you look at the state of Florida with its education savings accounts, the majority of those dollars go to full school tuitions. What’s interesting is, if you look at Florida over time, fewer dollars are going to tuition. I had a conversation recently with someone in Utah, and they were seeing the same trend. That’s starting to change. The big thing is that now we need the supply side of the market to catch up. We need more good school operators in there.

We need more concierge-type services and more one-offs in the ways we can imagine. What’s exciting is I don’t see a way to incentivize what Diane was talking about in her first point unless we go in this direction. Otherwise, you’re asking a school to somehow pay out money to an external validator. They’re not going to want to lose those dollars. If it’s the kids and the parents saying, “I want to validate that Michael learned how to do X and show evidence of it,” and it’s dollars that I get to control in a wallet, it’s greatly preferable to vouchers or tax credit scholarships, which I don’t think accomplish any of what we’re talking about.

Stacey Childress: So you’re saying ESAs as a preference?

Michael Horn: Strong preference. I think the other two are not. They do several things wrong. They don’t force me, as the individual, to think about value trade-offs in terms of saving the money for different offerings. When I think about Diane’s vision of separate places to validate what I’ve mastered or learned or accomplished, you can imagine, in the professional world, there’s the CFA, the CPA. There are longstanding credentialing bodies that we pay to show mastery.

You can imagine a flourishing of supply-side options that start to do the same thing. Colleges, employers, and apprenticeship programs start to say that’s a valuable signal. That’s how we start to get around some of the accountability concerns in the longer run, through this flourishing. We have talked about the challenges with philanthropy in this country, and we may find a time to come back to this topic. This calls for really patient capital to seed this marketplace, acknowledge that it’s not going to all come together at once, and be comfortable with a messy transition as we get there. Diane gave one example of messy, where there’s going to be some bad spending, as though that never happens in districts today. There will be a messy transition as we try to figure out how to do this in a way that doesn’t overstress parents and comes together. It’s not going to be an overnight process. It’s very grassroots, what you just described.

Stacey Childress: Yeah, it’s interesting. We’ll kind of wrap up on this one based on your reflections, both of you. I do want to say I think I might be a little more skeptical than I hear the two of you being about our shared ambition for socioeconomic diversity and racial diversity in the choices that emerge. I often say, if I had more confidence in my fellow man, I’d be a libertarian. If I had more confidence in my government, I’d be a liberal. If I had more confidence in my church, I’d be a conservative. So I actually don’t know where I fit on all of these.

I’m not sure. I think where I get a nagging sense that the critics are likely right about this is that I don’t know if, left to our own devices with ESAs as currently conceived in the policy frameworks, we’re likely to get less isolation rather than more. If I had to lean one way or another, I’d say we’re not likely to get more equity. I’m not certain about that. It could happen, but I’m not certain in the current climate and conception. But I do think it’s interesting to consider ESA policy provisions that don’t squelch their vibrancy and goodness but include some thinking about the great American experiment. It could be an interesting addition to the thinking.

Michael Horn: It’s a great point, Stacey, and I don’t think Diane or I want to sound pollyannish on this. I’m putting words in your mouth, Diane, but I guess what I would say, and increasingly have felt, is the current way we’re doing it isn’t accomplishing it. So I’m willing to take a gamble.

Stacey Childress: Yeah, totally. No, I’m not certain. You guys know me. I’m not defending the status quo as better.

Diane Tavenner: No.

Michael Horn: I think it’s an important caveat, though, that you introduced.

Stacey Childress: Yeah.

Michael Horn: Yeah.

Diane Tavenner: I think this is a nice segue into, Michael, the element you’ve picked to unpack and provide hope and solutions for. But I just want to mark, I feel like the three of us should take an action item out of this conversation so far. We have this privilege of engaging with a lot of, whether they be your students at the university level or young people, at least younger than us, who are very entrepreneurial and ambitious. There is such significant opportunity right now to conceive of new nonprofits or for-profits to create the supply that is so needed here. So I think we should all take, not that we don’t already, but even extra care in nurturing and encouraging that type of entrepreneurship going forward. I just gave you an action item, Michael.

Michael’s Proposal: Teaching Character and Values

Michael Horn: The best meeting is one where you assign someone else the work. Okay, so let’s jump in. She’s good at it. The one that I picked was the character values bucket. It was our second bucket yesterday, and it was, to use Diane’s words, more macro than the social bullet that fell under the core education bucket that, Stacey, you just tackled. To remind people, there were three big pillars we talked about yesterday. One was the basic norms and values of living with other people in society together, preparing people for adulthood.

So something we often call habits of success; I’ve adopted Diane’s language on this. Character works too; certainly the now-sunsetting Character Lab used that phrase to encompass a lot of these characteristics. And then thirdly, being a participating member of a democratic society. The observation I made is that the public school system in many ways got its start around this particular purpose of inculcating, and I’ll use that word intentionally, democratic values in the populace. The first question: can it be unbundled? I’ll lead with what, in a lot of our worlds, would be the controversial statement: of course it can, because parents are the first teachers. There’s that observation, but that’s not where I want to sit with my thoughts, because I know a lot of families, and to your equity concerns, Stacey, that’s not the entry point.

Where I want to go is a different starting point. Yes, that’s part of this possibility and part of the fabric. But what I want to say is that, in our conversation yesterday, the flip side we observed is that while there’s significant polarization and there are arguments against certain character education, there’s actually a lot of commonality in the populace around what we agree the centerpieces of these things are. I can’t remember the exact number I said, but there’s a lot of agreement. It’s similar with education savings accounts: there’s a lot of agreement at the population level, they’re popular. It’s just the politicians who don’t necessarily agree, which is interesting.

My observation is that there are two ways to approach creating a common set of democratic values, civic values, and values of how we conduct ourselves in a society with people we may or may not agree with. One is a top-down approach, almost like the Common Core approach, which aims to get alignment. The challenge I’ve observed is you get a lot of energy around what’s in and what’s out, and you get a lot of anger on either side that often erodes consensus. The controversial point I want to push forward is that if we took an unbundling approach, very much like what you said, Stacey, in our previous conversation about how each school community comes together and has this conversation around its purpose, and we trust that most Americans have these central values they want their kids to learn, we can get 80% of the results with 20% of the effort. This might be the most productive way to move us forward on these things we really care about in a grassroots way, rather than spending 80% of the energy trying to get the 20% to fall in line.

I get it, it doesn’t solve everything, but we’re not solving everything at the moment either. An 80-20 rule that takes some of the tension out of the culture wars would be a really important way to go. I think education savings accounts are an interesting way to approach this. I can start to opt into school communities, and I’m going to trust that families are going to make choices where they’re making sure that, for the most part, 80% of the population is saying, “I want my kids to understand the promise of the American dream, acknowledge the dark parts of our history, and strive for a more perfect union.” These values are integrated into these experiences.

I think this approach will open us up to a lot of innovation in terms of form factors and how it integrates. I really like your observation, Stacey, that we’ll rebundle the content with the character as we unbundle other things. One question I’d love you both to reflect on, in addition to the stuff you react to, is that starting with Diane’s point, we’re going to do a lot for increasing agency in this country. We’re going to do an incredible amount, and that’s really important to thriving and having people feel better about themselves. I think the two questions we should worry about and think about are coherence among experiences, which goes to the concierge, but also content and things of that nature. 

The second question, which has been on my mind lately as we’ve watched things unfold across college campuses, is how we embed a sense of humility in kids. How do we make sure they know they’re still learning and don’t know everything? The one nagging worry I have is, when I see so many great interest-based school communities thriving, kids are picking things they’re excited about. But when is the thing that says to them, “You don’t know X, and that’s okay”? Are we modeling things that introduce some uncertainty, where they get the feedback that they can do it, but also the humility to say, “I don’t know everything”? I don’t know if that’s well articulated, but that’s the one thing on my mind at the moment. I’ll kick it to you all for reactions.

Discussing Michael’s Proposal

Stacey Childress: Go ahead, Diane.

Diane Tavenner: Okay. Still processing those questions. As you were talking, Michael, and listening to this whole conversation, here’s what’s coming up for me. First of all, I can imagine what you’re proposing, because, like Stacey said, Transcend does this work. I did this work with Summit Learning for a number of years. I had the privilege of working with communities in just the type of experience you’re talking about. It was fascinating and amazing.

Diane Tavenner: Communities really did come together and identify what they thought the purpose of education was. There was huge agreement, and it was a powerful experience. I could imagine this, and I’ve seen it with Transcend and others. What was coming up for me is that we’re at a point in time where the public has lost trust in most institutions in our country. Trust in institutions is at the lowest level we’ve seen in a long time. I hear this all the time: “I don’t trust, I don’t trust, I don’t trust. You don’t have my trust. You’ve broken my trust.” Trust, trust, trust, trust, trust. In my experience, the only way to build trust is to do meaningful, authentic work together. People often say, “We have to communicate better to build trust.” I don’t believe that at all. Communication is important, but it is not the pathway to building trust.

It’s truly working together and building relationships over meaningful work. This is such a powerful idea that every school community can do. Every school community in the country is doing some sort of community engagement, whether through their accreditation, strategic planning process, or federally or locally mandated committees of parents that do work. Most of the time, that is not meaningful, authentic work that builds trust. It is box-checking, perfunctory, rubber-stamping. What if we took those existing opportunities and flipped them into true dialogues and consensus-building around what the purpose of education is? What do we actually share together, and how are we going to build that? I think that’s a very doable thing within the existing system that would go a significant way towards the vision you’re talking about and building the trust we need. Let me pause there with my reaction and turn to Stacey. I will gather my thoughts around your good provocative reflection questions.

Stacey Childress: Yeah, and Michael, I want to pick up on your powerful insight about the challenges with top-down approaches at any level, but especially at the national level. They are destined for disappointment. Even though I joked about different political philosophies, I trust people with their own choices, especially parents making decisions for their kids and families. Since I joked about it, I want to make sure that’s clear. What I love about what you said, Michael, is because we trust that, and because we know top-down approaches are probably not going to be all that good anyway, and we’re allergic to them as Americans, where real trust is built is on the ground, doing meaningful work together. If we give up trying to get national consensus, we’re going to get it at the ground level. Where people are together every day, showing up at school or other educational options, in the grocery store, in their churches, and at community activities, they agree on 80% of important things.

The locus of shifting to a vision of learning and education that works better for kids, one that sets them up for long-term community living, self-sustainability, following their dreams, and being strong and productive members of our democratic society, starts where they live. Today, tomorrow, and 20 years from now, where we actually experience all the dynamism of being part of a pluralistic society and a functioning democracy is in our neighborhoods. I love what you said, Michael. If we ever do have the conversation about philanthropy, I think this is where we miss big time. We’re looking for scale and things that can work everywhere, but scale is healthy communities doing strong work together. That leads to clarity about shared values and a vision for how to help the next generation build towards those values. As Michael said, “Yes, I’m capable of everything, but right now, I don’t know everything.” What are the habits of mind, skills, and habits of success that lead to that possibility at the micro level for every young person, at the building level for every school, and at the community level for groups of families in schools? It builds up from there without feeling like we have to have national fights and mandates. I think we’ll be much more successful moving from the smaller level to a larger agreement if we’re talking to each other in our communities and neighborhoods.

Diane Tavenner: Awesome. Maybe I’ll say a quick word on your provocation around humility in kids. I’ll leave the coherence aside and just say two words: Swiss cheese. In the existing system, there’s no coherence given the way it is. On humility, here’s what came to me: the habits of success and the building blocks pyramid we often reference. One of the top building blocks is curiosity. Underneath humility is curiosity. We can cultivate that, because it feels impossible to lack humility if you are truly curious. What I see across our country, and it’s not just young people, is a lot of people who act like they know everything and are not curious about other people’s perspectives, lived experiences, or what knowledge they may or may not have. As a K-12 educator, I believe curiosity is something you can cultivate.

There’s debate about whether you can teach it, but there’s a whole suite of skills around it that cultivate that approach and mindset. I would put that under both of your buckets: core education, and values and character education. Working with communities across the country, curiosity often comes up as a value they care deeply about in developing young people.

Michael Horn: Well, maybe as we transition out of this to our final segment of the show, I’ll just say you gave me a lot more faith. Thank you. That was a very helpful answer. The other thing that occurs to me, hearing both of your reflections about the declining trust and faith in institutions, is that there’s humility in recognizing we don’t know the individual circumstances of every single community and family. As my co-author on “Choosing College,” Bob Moesta, likes to say when he does jobs-to-be-done research, you can’t imagine someone’s job to be done from a kitchen table. You have to go out and shoot the movie of them living to figure out what their circumstances are. There’s no way to create blanket statements or policy that covers all those unique circumstances. I appreciate y’all digging in on this.

Media Recommendations

Michael Horn: As we wrap up, I hope everyone’s enjoyed it as well. We get to return to the segment we know a lot of people enjoy; some have even created tracking lists around it. You don’t know this, Stacey, but it’s our recommendations for books or things that we’re watching, reading, or listening to. We’ll give Stacey a moment. Diane, why don’t you go first, then Stacey, and I’ll wrap us up.

Diane Tavenner: I’m happy to go first. Some folks might not know that I actually lived in LA for about ten years a long time ago and lived in close proximity to the Academy Awards show every year. I used to be an avid follower but have sort of fallen off. This year my husband and I watched all ten Best Picture nominees for the 2024 awards from last year. I have been pleasantly surprised. What a spectacular lineup. There are the big banner movies like “Oppenheimer” and “Barbie,” but there are so many gems in that list. We had such an enjoyable time watching all of those films.

If you want a movie list, pick those ten and go through it. It’s hard to pick a favorite. I love “The Holdovers,” which provides commentary on schooling and education. I love “American Fiction,” and I really loved “Past Lives.” It’s such a beautiful, nuanced, incredible film. I don’t think it would have been made in America; it’s not a film we would make here. What a gift of a global community to share such a beautiful film.

Michael Horn: Very cool. Stacey?

Stacey Childress: Yes. I have not seen “Past Lives,” and I’m always a sucker for a movie about a school. So I also loved “The Holdovers.” I recently finished a book called “Hello, Beautiful.” It’s about four sisters in Chicago. I’m the oldest of four sisters, and the title comes from what their dad would say to them when he saw them: “Hello, beautiful.” It follows them from their late teens and early twenties into their early fifties. It’s wonderfully written and beautiful, but it’s also really hard. They are very close, but as they go on their life’s journeys, things happen, and sometimes people don’t live up to high standards. There are breaks in relationships, and then suddenly you’re in your early fifties looking back, wondering where all the time went and missing your family. It was not what I thought it was going to be, and I really loved it. So, “Hello, Beautiful.”

Last time you guys invited me on, I was so excited about the Astros. Then the season started, and the Yankees showed up in town and literally punched them in the face, swept them in four games, and they had a hard time recovering. They are off to their worst start since 1969, when I was four years old. I'm hanging in there with my guys, but it is really hard. It's really hard.

Michael Horn: Well, you’ve had a run of success that most places would be envious of. We’re spoiled. I鈥檒l wrap us. I love all these. I thought, Diane, you had routinely watched all the Best Pictures, so this was a learning for me. I finally kicked back into overdrive and started reading a bunch of books. I鈥檒l pick out “The Three-Body Problem.” It sent me and a few others said I had to read it. Now it鈥檚 on Netflix as well. But I read the book first, and it definitely made me think. It made me ponder a bunch of scientific concepts, as good science fiction should. It also freaked me out a little bit. It hit all the points.

Diane Tavenner: Are you going for numbers two and three? Because that is a trilogy, Michael, my son's favorite all-time trilogy.

Michael Horn: Is that right? We'll talk offline about how I'm thinking about it. We'll leave it there. Thank you for joining us on yet another epic episode. We'll see you all next time on Class Disrupted. Bye.

How Kids Learn: A Gap Between Schools' Teaching Models & Latest Learning Science (Mon, 20 May 2024) /article/podcast-expert-stacey-childress-talks-the-science-of-learning-importance-of-teaching-character-the-education-systems-9-key-roles/

Class Disrupted is a bi-weekly education podcast featuring author Michael Horn and Summit Public Schools' Diane Tavenner in conversation with educators, school leaders, students and other members of school communities as they investigate the challenges facing the education system amid the pandemic, and where we should go from here. Find every episode by bookmarking our Class Disrupted page or subscribing.

Michael and Diane welcome back Stacey Childress, Senior Education Advisor at McKinsey & Co., for the first of a two-part series on the challenges facing K-12 education and strategies to address them. In this episode, they outline the nine roles/players of the public K-12 education system in the U.S. and the problems each is facing in 2024. They highlight the disconnect between current teaching models and the latest learning sciences, unravel the operational challenges schools face, stress the importance of intentionally teaching character and values, and more.

Listen to the episode below. A full transcript follows.

Diane Tavenner: Well, hey, Michael.

Michael Horn: Hey, Diane. How are you?

Diane Tavenner: I’m well. It feels like it’s been a minute since we’ve been together here, but I am excited about how we’re coming back together. We are so pleased to be welcoming back Stacey Childress to the podcast. What fun! Great to be here. We are getting the band back together again. For those of you who’ve been following along this season, the three of us spent two pretty extended episodes talking through the elements of higher education, the problems there, and potential solutions. We did that in response to a podcast by Mark Andreessen and Ben Horowitz.

We were all pleasantly surprised at how much great feedback we got from our listeners. They loved those episodes, enjoyed them, and wanted us to do a parallel experience for K-12. We couldn’t say no to that. So here we are again, and I’m looking forward to this conversation. The last one was quite rollicking, and I suspect this one might be fun as well.

Michael Horn: I’m glad, Stacey, that you chose to, against your better judgment, I’m sure, rejoin us for this conversation.

Stacey Childress: Listen, I’m thrilled to be here. I had such a great time with you guys last time. I heard some feedback from people I know and some people I didn’t know. Through LinkedIn, people sent me messages. That’s been happening in the last week, which is interesting. I’d love to do it again. I also just left that conversation feeling certainly challenged but also energized from the quality and dynamism of the discussion. So I look forward to doing it again.

Michael Horn: Well, we are glad you are back. Go ahead, Diane. 

Introducing the Two-Part Series and the Nine Roles of Education 

Diane Tavenner: Michael, I should just say, I guess I’m assuming that everyone knows Stacey, but let me do a quick introduction for those of you who may have missed those episodes and don’t know Stacey. Stacey is a good friend of ours and a good friend to education. She has a long, amazing history of being a teacher, a very popular professor at Harvard, and working at the Bill and Melinda Gates Foundation, NewSchools Venture Fund, and AirDef. I could go on and on about her credentials, but most importantly, she deeply cares about what happens for our young people in America and has always been at the center of what we can do to serve them better. We are super grateful for her rejoining us.

Michael Horn: Yes, indeed. With that, let's frame the episode today and get into the meat of it. For those who remember the higher ed episodes, we did two responding to the Mark and Ben podcast about the challenges facing higher ed. We reacted to those challenges they identified in the first episode and their solutions in the second episode. For this one, because we are doing it from scratch ourselves, Diane has been willing and generous enough with her time to come up with the core functions that the K-12 "system," and I'll put that in air quotes, is sort of tasked with providing in this country. Diane will go through her list of, I think, nine areas at the moment. Stacey and I might supplement a little, but then we're going to dive into each one. Diane, you'll tell us why you put that on the list and the problems or shortcomings right now. We will withhold solutions and thoughts about how we can make it better until the next episode. With that as prelude, Diane, dive in. Tell us, what are your nine areas? Just give us the overview, and then we'll go from there.

Diane Tavenner: Great. Thanks to both of you for your comments, feedback, and help in organizing, because, as you know, the original list was very long, and we’ve done some grouping. There are nine. The first six are broadly related to the student experience and their actual education and learning. The next two are more about the function and role of schools in the community and the local environment. The final one is more about the role that K-12 schools play in America. I think it’s fair to say that we’re focused on public schools in this conversation. Obviously, there will be some overlap with private schools, but we’re here talking about public schools.

Just quickly, those first six include what we're calling the core education, the role of teaching character or values to young people, the role of the school in terms of custodial care (Michael, we've talked about this several times on the podcast), and the security of those young people you're charged with caring for. Number four, we're labeling it a social services agency: school as a social services agency. Five is policymaker. I think this one's interesting to dig into in terms of the policies that schools and school systems make. Six is what we would call evaluator or recommender. We could start with six. There's a big argument about what comes first, chicken or egg.

Nonetheless, those are our first six. In terms of the local community role, the first is that schools and school districts are, in many ways, local government agencies. That’s a very important role they’re playing. They are also a community hub. Those are seven and eight for us. Finally, we’re calling it social reformer in this national role. But I’ll be curious as we get into it. I think we might come up with a different name as we talk about it.

So those are the nine that we’ve landed on for today.

Michael Horn: It’s a good list of nine. I’m not sure I would add much to it. Stacey, how do you think about that list before we dive into each one?

Stacey Childress: Yeah, I think it’s a good list. I can’t think of things that aren’t contained in those categories. I’m excited to dive in.

Michael Horn: Let’s do it. Diane, why don’t you take us through that first one, which is core education? Talk to us about what’s in this grouping, what’s maybe not in this grouping if that’s relevant. Then let’s start to go deep into the problems before Stacey and I react.

Core Education

Diane Tavenner: Great. I think when people think of schools in the most traditional sense, they think of the three R’s: reading, writing, and arithmetic. This starts there and then grows a little bit. Obviously, over time, it has grown, but it is what most people think of as the most core function of a public school: to teach kids academic skills and knowledge, including reading, writing, and arithmetic. Of course, we’ve expanded to history, science, second languages, and I couldn’t even begin to list all of the elective and interest courses that have come into schools. But there’s still that core set of knowledge that is generally tested, assessed, and common across schools.

Then there’s also how that is done. Schools are places where lots of people come to learn together. This is not individual tutoring. So, how are you part of a community, a group, a classroom? What do those skills look like? A big part of schools has become extracurricular activities and interests鈥攁ll of the activity that happens in schools for young people. Regarding core education, which is a little more about how we do it, we have a very significant and robust special education component to our system. This is driven by federal legislation providing supports, resources, and accommodations for young people who qualify for having a learning disability and therefore an individual learning plan. That is a significant part of what happens in the core program now, in terms of resources, people, focus, etc. So that’s what’s in this bucket.

I started listing problems, and when I was at the micro level, I was getting into hundreds of them. So, I rolled it up to one big problem from my perspective. Thank you both for laughing at me. I would argue that the core education model in America, in the vast majority of schools, is just not aligned with the current science of learning. I would say on two fronts: what we teach and what we prefer to teach, and very much how we teach it and how we expect people to learn. As I went through my laundry list of all the things that were wrong, every time I thought about what was wrong, it was because we’re not following the science. You can take this all the way down to the youngest kids. As the country is waking up to, we have not been using the science of how kids learn to read. We haven’t been doing that in most of our schools. It’s everything from that all the way up to something we are all very passionate about: how you actually personalize learning as young people get older, enable them to self-direct their learning, drive their learning, build those skills around it, and everything in between. I’ll stop there, but that’s my macro problem.

Michael Horn: Stacey?

Stacey Childress: Yeah, I definitely agree, Diane, with that as a way of thinking about an umbrella category for lots of things that we might list in more detail. Alongside that are the choices that folks are making in the system, and in schools within the system, about the academic program, the social aspect of schooling, and all the other things you mentioned. There's not always agreement at the community level or, if you think not quite that broadly, at the family level. What's our overarching idea as a community, or bundle of ideas, about what school is for? How do we ensure that what we're doing every day for twelve years for young people, from kiddos all the way through late teens, is driving towards some common vision of what it means to leave our system ready to do whatever's next?

Sometimes there’s either ambiguity around that or, where there’s more specificity, tensions and disagreements about the end goal. This can filter back through, especially at the high school level, but it can go all the way back through what frame within which we are making choices as a community and a group of professional educators about academic programs, how we’re approaching the social learning aspect of school, how much emphasis and what’s the mix of interest in extracurricular activities, and how these tie back with a longer-term view of purposes, skills, and mindsets that kids might leave their experience with. I think that ambiguity or lack of coalescence around purposes makes it hard to balance all those things, Diane, on your list, all of which are absolutely functions of school within its core education mission.

Michael Horn: Yeah, it’s interesting to hear you say that, Stacey, because my head went one way when Diane was giving the list. I was noting that as you look through the extracurricular or non-core classes in American schooling over the 1900s, it was just an ever-expanding list of classes. The proverbial grocery store analogies were so prominent in “A Nation at Risk,” of course, in 1983. At some point, it became, well, actually the definition of school is how much you are learning, which shifts much more to how we teach and learn, as Diane referenced. I would argue that schools continue to expand in scope along the other eight dimensions you listed, Diane, which we’ll get into later on.

Another point within core education is that special education has continued to expand in terms of resources and identifying students who need special education. Diane, you spoke passionately and persuasively last season about how our incentives in special education are not around innovation, efficiency, and delivering, but around more resources and a lot of box-checking.

I reflect on that expansion theme. Stacey, when you jumped in, I loved where you went with the purpose conversation. What’s the purpose of this education? As you both know from my most recent book, my big argument is that communities need to have that conversation almost tabula rasa. What are we trying to go for here? They don’t. Instead, they just accept the four math, four social studies, three or four science, whatever it is, and just accept these structures that have been handed down without getting behind the intent.

So many of the food fights, even within the camps trying to find their way through what the science teaches us about how and what we learn, are because we are guilty of not having an “and” conversation. We’re too often having an “or” conversation, talking past each other in some of these rooms, and missing the changes we could make if we started with Stacey’s conversation around what we are driving toward and why. Those are my three reflections from this list. At the end of the day, it means we’re teaching a bunch of things that don’t have a lot of coherence. We haven’t given a lot of thought to why we’ve privileged this branch of math over another one, and we’re not following all the lessons from the science of learning. We’re not incorporating them or at least trying them out with different populations to learn what works and why.

Diane Tavenner: Yep. We’re off to a rough start, friends, because that’s the thing we’re supposed to be good at. Oh, all right.

Michael Horn: Well, then tell us your second one. Maybe we’ll surprise you.

Teaching Values and Character

Diane Tavenner: Okay, here we go. This one we've labeled as the teaching of values and character. I almost hesitate to say those words, but I do think some of this conversation is designed to provoke a little bit. Those are provocative words in our country, as we know. It's confusing to me why that is, because young people are in schools for a good amount of time, as you said, for twelve or thirteen years and for significant parts of their days. It seems logical to me that a school should help them figure out basic norms of being a person and being in a community beyond just the learning side. How are you preparing to be an adult and a participating member of our democracy? When public education was conceptualized, these were huge aims of what we were trying to do.

We could go back in history and talk about some of the ill intentions, such as forcing certain groups of people to adapt to other norms. But at a macro level, just the idea of being a citizen of our community, our country, and our nation, and how you actually do that and become an adult, it seems logical that the school would play a role in partnering with families to help that come about. There are very significant challenges here. I’ve expanded to two this time, but they’re still broad. The first one, for people who’ve been listening, will not be a surprise: I think it’s the college-for-all push. In recent history, we’ve gotten away from preparing people for careers, employment, and life outside of school. We’re so focused on preparing them for the next educational institution that we’ve lost focus on that front.

Michael Horn: We’re all going to generalize.

Diane Tavenner: Systematically, right? So, I think that’s problem number one. The second one is the obvious one in our current society: whose values and whose role is it to teach these things? These are not small, little bickerings; these are big societal questions, and schools are caught in the middle of them. School systems, using the fight, flight, or freeze analogy, do one of the three. Some are duking it out, some are running away as far as possible, only teaching the three R’s, and some are frozen, not knowing what to do. There you have it, category two.

Michael Horn: Stacey, you get to go first again.

Diane Tavenner: Great.

Stacey Childress: I love that fight, flight, or freeze analogy in this context. You’re right, Diane. Going back to something we talked about in the higher ed episodes, the original podcast we responded to called this “moral instruction.” We weren’t crazy about that phrase. The podcasters had a particular point of view about it that we didn’t entirely share. I’ll go back to part of our discussion there. I grew up in a very religious and politically conservative part of the country and moved back here. I went to high school about 13 miles from where I’m sitting today. These issues are still fraught with challenge.

Part of what I think about this is, I get why it's hard. It's hard because it's very important, and it's hard because of the multiplicity of points of view about which values and whose values. Schools are in the context of our larger political and cultural moment, which is very hard. We know it because we're trying to work through it and bridge it in our own lives with people in our families, friends, and colleagues. Of course, it's hard in schools. The flight or freeze option is not happening because, as I said about college, values are being transmitted, messaged, inculcated, shared, and massaged even if it's not intentional. As you said, Diane, kiddos are in school from a few minutes after they wake up until right before, right as, or right after their parents get home from work. It's impossible for your eight most active waking hours of the day to be values-neutral or values-free.

If you are fleeing or freezing, what you're opting into is almost anything goes until somebody is mad about it. Individual educators and administrators are making almost individual choices about which values they're bringing to bear and which norms they'll prioritize or not in their classrooms or cohorts of students. That's a recipe for more tension and more upset because there's not an overarching perspective. There's not an overarching, even loose agreement about why we might be committed to ensuring that a set of values and some character attributes are prioritized in our experience, while allowing for plenty of different perspectives and points of view across families, religious traditions, countries of origin, and other factors. Fighting over hot-button cultural issues, or freezing or fleeing because it's hard and you don't want to upset anybody, is missing the boat at both the micro and macro education levels.

Acting as if it’s not the role of schools and educators to provide some underpinning of values, character, and moral reasoning is misguided. You need to filter it through age appropriateness, but we need to be more intentional about it, not less. Lean into it with intentionality and good intentions rather than trying not to offend anybody, which usually offends more people than being intentional about what you’re doing.

Michael Horn: It’s interesting to hear you say that, Stacey, because you mentioned age appropriateness. The last time we were recording, you said moral instruction was one of Ben’s lists. The thought I had at that time, which has been borne out based on recent events, is that college is too late to build in a lot of these things we want to see students do鈥攈aving civil conversations across disagreements and recognizing disagreement as a strength rather than a threat. Obviously, there’s age appropriateness regarding not introducing content that is inappropriate for, say, a six- or seven-year-old. But I think building these character skills, these habits, what I think of as fundamental democratic values, is incredibly important. And to your word, intentionality鈥攙ery intentionally. This was the purpose of the public school system. This is why we got public dollars.

Stacey Childress: That’s right.

Michael Horn: To do this enterprise above anything else (preparing for careers or anything). With all the caveats that Diane alluded to, where it was misapplied and certain groups were discriminated against, the purpose was to knit us into something larger. The debate now is often, should we or shouldn't we, not acknowledging that we are. And then it's this weird pose, like the right being, "Character matters," and the left, for a period of time, was like, "I don't know about that." Now, it's the opposite: actually, it's important, and here are the values we think. 

And the right saying, "Wait a second." It's a weird conversation against a backdrop where, I'm going to get the number wrong, but 80% of the population largely has a common set of answers for what these values are. That's what is so frustrating. It goes to your first point when we were talking about the core program. If individual school communities came together and said, "What's our purpose? Where's the agreement that we can all get behind?" My wife and I were having a conversation recently, and she said, "Isn't that great?" or something, I can't remember it exactly. I said, "I don't know if they should be doing this." She said, "Good point. We ask educators to do a ton of stuff for society that probably overstretches them."

I don’t know if it was in reference to the bad therapy book by Abigail Schreier or what. The point, which I learned deeply from you, Diane, is that a lot of these things can be done in the context of academics rather than a special carve-out lesson that’s going to offend some group. My fifth-grade graduation speech comes to mind. I remember talking about learning the value of fair play, respecting your classmates, in just the lessons themselves. David had three apples, and I took two. That sort of stuff communicates a lot of this. We pull these things apart in strange ways that provoke fights. As I’ve learned from Diane, you actually learn it better when it’s all knit together rather than atomized. One other quick point, Diane, before you react: you also mentioned the notion of college for all distorting a lot of this, which I completely agree with. It looks like Stacey’s going to jump in after this. What’s interesting is that I think preparing people for careers, life, etc., outside of school is spot on. That’s also a controversial statement.

Many would say it can’t be about those material interests or shouldn’t be about whatever else it should be about. I’m not sure what they think college’s purpose is. They would say it’s about something larger, and college represents it. In the backdrop we are in right now, that seems absolutely crazy to me. 

Stacey Childress: Yeah. Diane, Michael, I'm glad you flagged that because, Diane, I was glad you named this value in the system that many of us had been working on for a couple of decades: the college-for-all value and the expectations we were trying to build in for students to see themselves as capable and worthy of being on a path to college. The ed reformers from 1995 to 2015 had college for all as a driving purpose. I always try to be cautious about this and say it wasn't in a vacuum.

It was in the context of very real national data that showed up in medium and small ways at the state, local district, and school levels, where you had significant gaps in outcomes. If you traced them back, you could see why those outcomes were so different because we developed a great way of sorting kids pretty early, before they were preteens.

Michael Horn: Yeah. Deeply disturbing ways, right?

Stacey Childress: Deeply disturbing ways. You’re either on the path to college, which only a small percentage of you are headed towards, and the rest of you, well, we’ll do other things for you. Much of policy in general and different sorts of social issues and reform efforts end up being these pendulum swings. To counteract that undesirable state we were in 30 years ago, we ended up narrowing our focus. We’ve got to get everybody to college or at least ensure everybody could go to college. It’s hard to do all the things on our top six things that we’re going to talk through. We’re only on the second one. It’s hard to do all of them, so we focused on a few things. Let’s do reading and math to ensure our kids are ready to take important tests that will make or break this college-for-all path.

When it comes to character or whatever other words we use, it's in service of good grades and doing well on tests: the persistence and grit needed to get to and persist in college. I don't mean to suggest those things are bad, but because we narrowly focused and hyper-engineered an accountability system around it, we ended up in a place where a broader notion of what it means to be a successful human, a young adult who has what they need to choose a path and navigate it effectively, got chipped away. So the three of us and a lot of other great folks we've been on this journey with have been pushing in a different direction or an adapted direction. It does have values embedded in it. That's why I was glad you put it here. Those values affect young people, families, and educators. I talked too much on the last podcast, so I won't do it again.

Custodial Care

Diane Tavenner: No, it’s a robust conversation, and I think we are too ambitious when we begin, but I will encourage us to pick up the pace here on these next ones. Those are two big ones, and probably the rest are as well, but maybe we might not be as passionate about them. Let me go to number three. I’ll start with the problem here. No passion here, conflict with the first two elements in many ways. This third one is the role that the school system plays in providing custodial care. If we’re going to be provocative like Ben and Mark, we’d say babysitting. With that comes the obligations around protecting the security and safety of young people. 

That’s two levels at least now: their physical safety and emotional, actually three, as well as their data and privacy. This is as big in the virtual world as it is in the physical world in many ways. The biggest problem here is that people who work in schools, for the most part, don’t want to do this job. They don’t conceptualize it as their job. They don’t like it, and they don’t do it terribly well, probably because they don’t like it and don’t want to do it. Most school people think of themselves as academic teachers, learners, not babysitters or security guards.

I think that’s one of the biggest problems. The conflict is that families want and expect this. It’s also not done well because the people doing it don’t want to do it. I’ll stop there.

Stacey Childress: Yeah. You want me to go? You want to stay in our order?

Diane Tavenner: Michael?

Stacey Childress: I would say a couple of things about this. I don't have children in our public schools. I see all these videos now. I'm not on social media often, but when I am, I see these videos. If I went by them, I would assume our schools, and especially our high schools, are in chaos, with physical safety concerns. Thinking about the physical safety of kids from each other, and sometimes from teachers, and teachers from students. I don't know how widespread that actually is. I have educators in my family. They teach younger ones, and I do not hear these stories about their schools.

But I see these videos, so there is a sense in the popular consciousness that at least our high schools are out of control. Part of the contributing factor, maybe the biggest driver, is discipline policies. I know we'll talk about policy later, but the approach schools have been taking to ensure good community order in the building has changed over the last decade to think more about restorative practices and ways of building community through tough moments rather than just a punishment philosophy. There's this tension playing out, and who knows where it's headed. It's not only physical safety from the outside in, but the physical safety of kids from each other. What it makes me think about is school shootings. You know that some young people in my family were high school students in a school shooting in our hometown back in 2018. There's so much to talk about there, which we're not going to, but the idea that kids are a danger to each other.

In my niece’s situation, the shooter was a student, an 11th grader that people had known since third or fourth grade. It wasn’t an outside threat. That shifted the culture of the community and the school, with kids as dangers to each other. The stakes and incentives that creates around safety result in an enormous amount of community time, attention, emotion, and real dollars. The dollars have to come from somewhere, so they come from something else, probably those things we were already talking about, academics, values, etc. The interplay between physical safety and what we have to do to signal to the community that we’re providing safety and what it turns our view of young people into, and therefore, how that affects the culture of the school, is a uniquely American problem right now, and a real one, certainly for the concrete reason of physical safety but also this cultural notion of how we think about our schools and young people. We used to have fire drills when we were kids, and now active shooter drills start as early as they can.

So there’s a real issue here. I’ve already spent too much time on it, but it’s a real challenge that our professional educators are facing day in and day out in their communities.

Michael Horn: I’ll try to be brief, but just pulling from that, I’m having a d茅j脿 vu moment because it occurs to me the three of us were at an elevator in a hotel about a year ago having this very conversation, and it spurred Diane and me to have a podcast on the issue you just talked about, Stacey.

Stacey Childress: Yes, folks should go back and listen to that. It was very good.

Michael Horn: So, with that acknowledgment, the couple of things I would say are, one, the tension in this one seems ironic at this moment in our society’s history, between the childcare piece, not having adequate hours or time and availability for the working families of today, and on the other end, chronic absenteeism being the highest it’s ever been that I can remember. Those are two things in direct tension with each other. It connects to a couple of things here, which is, it connects to the safety and discipline piece of this. It connects to the formation of character in the second one. It connects to the relevance of the curriculum in the first one, and whether people have passion for this and see a place for it in their lives. That all connects to mental health, which then connects to the shootings.

So these three actually connect in interesting ways. The last piece is this is yet another place where we fight a lot on the edges with each other. One of the fights is the restorative justice, don’t discipline versus the zero tolerance policy. A lot of people pushing for restorative justice get lumped in with the restorative view, but that’s not quite what they’re saying. Like Dr. Becky or someone like that, they believe in consequences for actions and hard lines and limits. They just don’t believe in arbitrary ones that have nothing to do with what you just did. Again, there’s this third way through these poles that we keep missing. Maybe I’ll just leave it there.

Diane Tavenner: Yeah. It’s hard not to go to solutions, and it’s hard to do all of these in short periods.

Michael Horn: Sorry, I jumped.

Diane Tavenner: Right.

Michael Horn: Let’s get to the next one. Because it connects also to these.

Social Services Provider

Diane Tavenner: It does. It’s deeply connected because, quite frankly, a big element of schools’ purpose, or at least what they’re spending their time and resources on, is essentially as a social services agency. When we go through the responsibilities of most schools and districts, there’s transportation: many school districts run full transportation fleets. There are meals: they are serving not just lunch anymore, but breakfast and oftentimes snacks. They’re providing full feeding of large numbers of people and some basic health elements.

So, they’re testing your eyesight, checking for lice, and dealing with all of the COVID-related issues. Schools literally turned into clinics. I’m not even going to talk about how I felt when California started encouraging every high school to have the ability to administer Narcan if there’s a drug overdose. What more, please? Schools have always played this role, but it’s more complex now. They have to connect families and children to other agencies that support them, especially during crises. Let’s not forget the role of schools as mandated reporters. It is incumbent upon schools and everyone in them to report if they suspect child abuse or neglect. Some schools now employ social workers, counselors, and school resource officers. So, they’re running huge systems that go well beyond just the classroom.

The most obvious challenge here is that these are operationally intensive endeavors. They require a whole set of skills and knowledge that are not necessarily aligned with everything we just talked about. Most people in schools don’t want to do these extra jobs. They feel extra, on the side, added on. When you treat jobs that way, without operational efficiency and excellence, they don’t get done well, which ends up being this whole spiral.

So, those are the big problems.

Stacey Childress: Yeah. I have nothing to add on this one. I agree completely with your explanation and identification of problems.

Michael Horn: Yeah, I’m in the same boat. I think this is maybe the best evidence of the expanding nature of what we have thrown on schools. Every social ill, it seems, we ask schools to solve. This is where we have thrown another one. I’m not sure they can completely get out of thinking about these things if they’re trying to accomplish the first three, which we can get into maybe in the second episode.

So, Diane, why don’t you march on?

Policymaker

Diane Tavenner: Great. A lot of tension there. Number five shifts us to what we’re calling policymaker. I think later, I’m going to offer a local government agency. Some people might say, what’s the difference between the two? Aren’t those the same? Let me make the case for why I have separated them here. When people talk about government, they spend a lot of time thinking about the federal government, less time thinking about their state government, and even less time thinking about county government. We’re talking about people in school buildings and on school boards who are literally making policy decisions regularly that have the biggest impact on the lives of children and families. Everything from grading policies to discipline and behavior policies to health and safety policies. All of those decisions during COVID were made at local school and school district levels, generally with guidance from the federal and state governments.

One of the challenges we had was that they didn’t actually tell us what to do. They gave us guidance, and then we had to decide what to do, which basically meant they told us what to do but gave us no cover for doing it. Local people have a lot of power to create policies that impact families. For example, when schools and districts decide to have professional development during the workday, parents have to pick their kids up at noon or whatever the schedule is. To your point about not being family-friendly in terms of care and things like that.

The problem here is that under any circumstance, good policy is hard to write. I would challenge anyone who has never written a policy to try to do it and see how hard it is. We have about 130,000 schools and almost 14,000 districts. We do not have people who are well-resourced experts capable of writing the best policies under hard circumstances. Instead, you get whatever people think sounds good, and the implications are extreme.

Stacey Childress: Yeah, totally agree with that. The policymaker, the local school district, plus any school-based policies are the biggest policy influence on the day-to-day life of families. It dictates what time people get up in the morning because whatever time school starts, you have to count backwards from that. Wake-up time is dictated by the school schedule and then on from there. We just make it very concrete and embedded in our lives. One of the things that was so hard about COVID, or a thing about COVID that was difficult for families, was just how central school policy was in their family clock and calendar. 

Diane Tavenner: Right.

Stacey Childress: When you go with what you said, Diane, I totally agree with just how hard it is to make good policy at any level. It’s hard, and we ask folks (well, it’s their job, it’s their responsibility as board members and educators) to make policies that touch every family with a school-age child in their community without a lot of support and knowledge building. It’s very complex, and we have it here. It could be elevated depending on how you want to structure a list.

It does flow through almost everything: grading, course schedule, graduation requirements, all the things.

Michael Horn: Yeah. I don’t know that I have much to add. It spills into transportation or transportation spills into it, and all these things just show how interdependent these are. What I’ll observe is that pulling them out and naming them, Diane, in this way is useful because we see all of the complexity and all of the possible areas for breakdown. As you said, people aren’t trained to do a lot of these roles, and yet they are core functions that they have been asked to play or defaulted into playing in many cases. With that, let’s go into your sixth, which I think is sort of an exclamation point for a bunch of these.

Evaluator

Diane Tavenner: Well, and it sort of rounds out the student experience grouping. I could have led with this one because then everything sort of falls from it. The role of the school district in K-12 is to evaluate young people (their skills, their knowledge, their character, etc.) and to recommend them for what comes next in their life. This is a profound role that the school and the people in it are playing in terms of the outcomes and lives of young people and their families. This is true in terms of determining the grades of kids, which we know makes a big difference. They confer the credential on them. They make recommendations to colleges and employers. The quality of their school signals to those other folks the type of education that the young person has received and the experience they’ve had.

Okay, there’s a problem with every one of those things. They assign grades, but those are discounted now because of grade inflation. They assign the high school credential, but that isn’t valued in our society anymore, so it’s pretty meaningless. They write recommendations for colleges, but those are undervalued, partly because it’s the same people having to write them over and over again with no time to do it and not a lot of resources. They all start to sound the same. In fact, a lot of people kind of copy and paste, and colleges know that. So those are undervalued. There’s this huge, giant role that they’re playing, but no one values them playing it. What I would argue is the most important role that K-12 is playing, and this is sad to me, and this is primarily high schools, is the reputation they have. Colleges and universities have these perceptions about high schools, mostly aligned to the socioeconomic status of the student population, of how good those schools are. They factor that into their admissions decisions. There’s this giant, important role that all this time and energy goes to that I would argue is not actually being valued or used in meaningful ways. Big problem.

Michael Horn: Stacey, would you like to jump in?

Stacey Childress: Yeah, I totally agree with that. I think when we go to solutions in the next episode, we can get a little more detailed about how some of these components of this function play out and how we could do it differently. It’s interesting, Diane, this last one that you mentioned about school reputation being the signaler, especially to those applying to selective colleges. Then you tie that to the higher ed conversation we had in the last episodes. A very small percentage of kids go to a selective college. Even in the college-for-all concept, it is a very small percentage of higher ed institutions that fall in that bucket. So then what about everybody else? What’s happening here with this evaluator-recommender function? It’s a weak signal.

Back to some of the other things we talked about: it’s not very intentionally conceived and organized, outside of compliance. Transcripts have to get created and all that kind of stuff. So what are the use cases for a credential, and to what end? And how does that backward map to things we might do in the core education component and then the social component?

Michael Horn: So, yeah, that’s interesting, the compliance observation. When I was looking at this, I was struck by two things. One, Diane, a question: would you put the counseling function, the guidance counseling function, here? Would you put it in courses? Would you put it in social service agency? All three? Because that’s something we know schools are tasked with doing. I mean, we know the ratios are something like 400 students to one guidance counselor. But it seems to fall into a bunch of these.

And so this is the one where I thought to mention it, because you have this signaler, or helping shape, right, where students will go after, in this one. And then I guess the other one that occurred to me was this last bullet that you had as well. I heard Raj Chetty speak recently, and I hadn’t focused on this before, but he put the slide up of schools that disproportionately get their students into selective colleges. And I had just assumed. I live in Lexington, Massachusetts. I had just assumed Lexington High School, or closer to where you live, Diane, Palo Alto High School. I just assumed that they would be on par, frankly, with the top private schools, and they’re not.

And I was struck by that statistic. It’s like, basically, a Title I school and Lexington High School sort of count for about the same. Andover? Whoa. Okay. Now, that counts for a lot. And so I thought that was just interesting against this backdrop, then, that you mention it. And it seems to me obviously incredibly problematic because it’s completely decoupled, as we know, from the actual work that students are, in fact, doing, and the rate of, as Ryan Craig would call it, the distance traveled.

Right. We would call it growth, but of individual students, and what that might signal about where would or would not be a good fit for them.

Diane Tavenner: On the positive front, I think this category is ripe for solutions, and there’s a big opportunity there. So I’m excited to talk about it when we get into the next episode. 

Local Government Agency 

Diane Tavenner: So that sort of rounds out the experience of the young people. Now I want to shift to two that are more about the local community and the role that schools play there. And so this first one is what we’re calling local government agency. And I just want to tick through the roles that schools and districts play. So, number one, they generally have elected school boards. So we’ve got a full election that’s going on, and a seated board that holds public meetings and is beholden to all of those public meeting laws and rules and regulations and all that goes on there. I will just quickly say that many superintendents, and this is the chief executive of a school district, say that they spend literally half their time on this.

They will argue that they spend half their time managing their board and those meetings. So take that. The next thing that schools and school districts do: most of them can levy taxes, and they can issue bonds. I mean, these are government agencies taxing the people, maybe the most important role of government in the US, or the thing we take most seriously, that schools can do. They also are required to collect an extraordinary amount of data and report it at the local, state, and federal level. This goes on and on all year long, and it keeps getting bigger and bigger every year.

They are, when we think of this, entrusted with significant dollars, state and federal dollars. I was talking to a state superintendent the other day; she, as the chief learning officer, the state superintendent of instruction, controls half the state’s budget. And that is not abnormal. Most states are spending about half their budget on education. These are significant dollars that these boards and these people are entrusted to spend well, thoughtfully, etcetera. And then finally, they control huge amounts of public land, and it depends on the state how that goes. But in some cases, they are even the people who perform the tasks of zoning and entitling land.

Diane Tavenner: This is the role that the city or the state is often playing for everyone else. But, you know, schools can get exemptions and do that themselves in a lot of cases and places. So these are massive, massive governmental roles that schools and districts are playing. And as I thought about this one, I think about my experience in schools, and how people who do things like this, things that involve a lot of money and a lot of land, are, I would argue, and I’m not going to give a value judgment here, more valued by our society than people educating or providing care for children. Like, when we think about who we think is more professional, who we pay more, who we get: it’s the people on the side of the land and the money. So if you revere that a little bit more, where will your time and attention go in a system? But to that point, in my experience, there’s very little connection between the six things we just talked about.

And this part of the house. There are very few people who work on it in K-12. And I contrast that with our conversation about higher ed, where one of the critiques was that we’re starting to see, like, one-to-one, an administrator for every student. Not so in K-12 at all. So you have far fewer people, with different areas of expertise, kind of disconnected from the mission and the purpose, doing all of these functions. That’s a big problem in my mind.

Stacey Childress: Yeah. I don’t have data in front of me, but I want to push a little bit on that last point you made, Diane. I think this is where a broad brush might smooth out a lot of variability. What you described, far fewer people charged with the managing, governing, asset, revenue-generating, and liability functions alongside far more educators, with those fewer positions paid a lot more, is probably right in medium to small-size school systems around the country. It might break a little bit when you get to the largest school districts in the country. If you look at the 100 or 200 largest school districts in significant metro areas around the country, or in those large counties in Florida and Maryland, there are a lot of administrators. You start to get ratios that are closer.

So if you look at the headcount allocation in large systems like that, classroom-facing FTEs as compared with non-classroom FTEs, you get closer to that one-to-one, or sometimes even one-plus-to-one. But your point is well taken. Depending on system size, it might look different; in most places, what you said, I think, is exactly right. The other contrast I’ve made, and I agree with the way you framed it: educators’ value in terms of what we are willing to pay versus the people who manage this stuff in the school district, that’s one comp. Another comp would be, in some of these places, the larger mid-size and the large ones, we’re talking billions of dollars of assets in terms of real estate, physical plant, cash, debt, all of those things. You’re looking at 300 grand for somebody to be the head of one of these systems. That fits in the public sector. But start to think about the private sector: somebody who’s got billions of dollars of assets under management that they are accountable for, and then you add on what should be the extra accountability and transparency of it being my tax dollars and yours and all of ours. They’re actually kind of underpaid, may well be underpaid, in terms of the kind of judgment, leadership ability, ability to bring people along on some of these public levies that we need to do, and the kind of expertise at the general management level to even know what right questions to ask of the financial people who are managing all these assets. I can see it both ways. Underpaying educators relative to administrators? Yeah. Maybe underpaying some of these administrators relative to comparable jobs in the private sector, managing this level of resources and complexity? I don’t know. I could make that case, too.

Michael Horn: It’s interesting, Stacey. I was just thinking about AI as it comes in and perhaps changes some of these dynamics; we want more human-facing roles, and some others can change. I had the same reaction as you did. I think of places like New York City or Newark, where it’s like half the dollar doesn’t even reach the school. It gets stuck in central admin, and what the heck is going on there? The second thing I had, more as a problem, because I think this is a good one to identify, Diane, is how in many of these places the elections are off-cycle. Voter turnout is not very high, and yet you realize what a disproportionate impact.

Stacey Childress: Yes.

Michael Horn: These places play in our society, and they’re kind of decoupled from the democracy. Sometimes we hear an argument: oh, I just wish it were out of politics. Well, guess what? When it’s public dollars from taxpayers, it’s part of politics. We can hate it, but it is. We’ve done a lot to sort of take it out of the politics, and I’m not sure that that’s been a good thing, given, to your point, the gravity and enormity of some of these decisions.

Diane Tavenner: Yeah. Just to close this one, I’ve spent a lot of time in school board meetings over my career, and I think it’s just so clear, the tension and a charge that I think is an impossible charge, where you have this school board that is, in the same meeting, deciding if an individual student is going to be expelled from a school, and considering whether or not they should sell or buy a gigantic piece of land, and whether or not they’re going to exempt themselves from zoning, and then how to spend bazillions of dollars. There’s a problem with that. That’s what your regular school board looks like.

Stacey Childress: Absolutely. And you were kind of tying those two things together, something Michael was saying and what you just said, Diane. Oftentimes, school board election turnout is in the single digits. It can pop up above that in some smaller communities where there’s a lot of interest, but it’s still a pretty low percentage of people in a given catchment area that are actually making these decisions about who is going to do all of these very critical functions indeed.

Community Hubs

Diane Tavenner: All right, number eight, staying with this community theme: schools are a hub of communities. They are a centerpiece of many, many communities. When you get to smaller communities and rural communities, they literally are the heart of the community in many cases. We have seen this over time: when anyone tries to close a school, even in a large city, the response from the community is generally overwhelming in terms of trying to protect that school from closure. So community hub is a huge role, partly because oftentimes schools are a very significant employer, a regional employer in some cases, and a union employer. They also are a huge part of something that everyone cares about, which is traffic. The comings and goings and the traffic are always a big issue around schools.

As we’ve talked about, a lot of things happen in schools and their buildings and their campuses. They are the polling places in most cases; it’s where we go to vote. They host a whole bunch of events for communities and become the place for that. So this community hub is a significant role they play. The problem I would point out here, in addition to what we’ve already talked about, which is just mission creep and capability and all of those things, is that oftentimes we talk about how, in schools, adult interests get put above those of students. I think you start to see it here, where a lot of this is much more about the people in the community and the adults who are working there than it is about the kids. Those interests will preempt those of young people on a variety of topics.

Stacey Childress: Yeah. Nothing to add there, Diane.

Michael Horn: Yeah. The only thing I would say is there’s a parallel to higher ed, right? Small colleges in danger of closing in many areas, many of these in rural areas. The argument you hear, we got to save them, is employment, not some deeper community value necessarily, which I think speaks to the dynamic. Not to say that employment isn’t a deep community value. It is in service of what, right? So I think that’s often a question.

Pathway to the American Dream

Diane Tavenner: All right, well, let me bring us home then with number nine. Now we’re going to zoom way out on schools and go back to the beginning, Michael, to maybe the original purpose of them, or some of the original purposes at the most inspirational level. Public schools are the way that Americans achieve the American dream. The idea is that every single American can go to school, a good public school, and have the opportunity to achieve whatever they want to achieve. There aren’t doors closed to them. Everything is possible. The American dream is possible because of our public education system. I think over the years, we’ve sort of layered onto that.

People have built on that and added onto that: you know, this is the place where we actually bring socioeconomic classes together in public schools, and this is where we mix as people and as a community. Stacey, you cited the reformers of the last 20-ish years, or we’re moving out of that era. We’re not sure what’s coming next, but kind of the Clinton, Bush, Obama eras. Many people I know have often referred to public education as the civil rights issue of our time. So that’s how significant and big the aspiration and expectation of public education are. I would open the problem conversation here with the idea that I think we have a growing amount of evidence that the system that is public education today is actually producing results that are counter to those aspirations I just named. It might actually be doing harm rather than good.

Certainly we can go into depth there, but I will just leave it there for the two of you.

Stacey Childress: Yeah. Yeah. I think this is a great one to spend a little time on next time: what, if anything, we might do differently going forward here. That civil rights issue of our time was very grand. It’s kind of a messianic evangelical plea, I think, with all good intentions. You’re trying to mobilize a broad coalition for improvement, change, transformation, because many of us believed, lots of us believed, and I think still believe to some degree, that part of the promise of America is that if you work hard, play by the rules, get a good education, anything’s possible for you. There’s something deeply American about that notion. Even though we’ve got shifting ideas of what the American dream might be, I think the power of that as a concept is still quite salient. Even though it might be in transition to some updated definition, it’s still a very powerful mobilizer. Part of my stump speeches for years was a quote by Barbara Jordan, who said, “What Americans want from their country is just an America that lives up to its promise.”

Diane Tavenner: Yeah.

Stacey Childress: Which is small and enormous. Then I would say part of that promise is a free, high-quality public education near you, in your neighborhood. That was kind of the animating instinct behind my entrepreneurship for education. The ed reform crowd from, as you said, ’95 to about 2015, we all talked about it maybe in slightly different ways, but that was the chief animating function. Again, it’s kind of, as Michael said, back to the beginning of why we ended up with public schools that then became compulsory high schools; it was kind of embedded in this notion. I think there’s some critique of this both on the left and the right politically these days. On the right, the grandiose progressive project of improving everyone all the time is kind of suspect, and on the left: what is the American dream, anyway? Who gets to decide? Are these institutions so kind of rotten at their core from the beginning, in their design, that of course they’re producing these inequities? It’s what they were designed to do in the first place.

I think there’s contested ground. But, you know, as we said on some of these other things, I think there’s, I won’t call it the great middle, but I just think most Americans would still agree. Let me say it even differently. I think most parents and caregivers who have children in schools from pre-K to 12th grade have some things they agree on about what our public schools are for. If kids are going to be in school for 12, 13, 14 years, depending on whether they start at three or four years old or kindergarten, there are some things about our country, about our society, that we want kids to understand, feel great about, be challenged by, maybe, in some of the tougher moments in our history, and want to work to make those things not true in the future. That there’s some role for our schools to still be that kind of aspirational meeting point, a great leveler among different socioeconomic statuses, where in this country you can still be anything you want to be if you show up, work hard, work with others, and figure out where you want to go, and our schools should help you get there. I think there is an element of social reformer. I still can’t think of a better word for it. There is one.

I just can’t think of it. Like, reformer sounds, again, so 1920s progressive, and we’re going to technocratically fix everything through our institutions, which I’m not a huge believer in, on balance. But I still find something very inspiring about the underlying concept here. If almost every young person, well, whether it’s private or public, everybody except the percentage of kids that are homeschooled, goes to school starting certainly no later than five or six years old, and they stay there until they’re 17 or 18, the things that are going on in those years during the daylight hours mean something for who we are as a country and who we could be. So anyway, I’m starting to preach again. But it’s still, you know, I’m still very sappy about it.

Diane Tavenner: Yeah.

Michael Horn: Yeah. No reason to run from that, right? I think the only two observations I would have here are, one, when I saw this on the list, Diane, I thought of the zip code one that you mentioned, that everyone should have a great option for them in their zip code. I guess I thought of something different, which is our broader trends in society around segregation. We know the history with racial segregation, of course, but the bigger segregation we live with right now is not race. It’s one of ideology and political party, and we, in fact, don’t live in districts where we mix with people who generally think differently from us. So we don’t have these conversations, or aren’t forced to compromise and live with each other at the Little League fields and in the schools, and sort of live up to what Stacey was just sketching. The second thing that I’ve been wondering about a lot is, you both echoed the rhetoric that we used to have of the civil rights issue of our time. I’ve been thinking a lot about what’s the causality: is it actually the opportunity that drives education to be in service of it, or is it the education that creates the opportunity? I’m sure it’s a bit of both. But going back to your original observation, and I’ll end my thought here, Diane: if we’re not running an institution set up, you know, fundamentally around learning, if we don’t have a great what you learn or how you learn it, maybe it isn’t actually driving the causality and the success in the American dream we’ve historically had. So I guess that’s a difficult set of questions. Is it in service of, and that’s where we need to be asking our questions, or can it be different and actually drive this in a more positive direction going forward? That I think we all would hope, because we all spend a lot of time on it.

Diane Tavenner: Well, that’s a good place to wrap today. Thank you both for wading through my list with me. And if folks have hung in with us this long for an extended episode, we appreciate you and hope you will come back for number two, where we’re actually going to talk about solutions, ones that are already beginning and ones that we see might be possible, and opportunities. So thank you.

Michael Horn: We’ll leave it with that. Right. Thanks for joining us in Class Disrupted. We’ll see you next time.
