Nate Fairchild had just instructed his eighth grade students to write a summary of a disturbing passage from Elie Wiesel’s Holocaust memoir Night when he dropped a surprise: He’d customized a chatbot to help them by masquerading as the Nobel Prize-winning writer and answering their questions. “Is that gonna get weird?” Fairchild asked, then answered his own question. “I don’t know, maybe! If it does get weird, let me know.”
If the students in his literature class found the prospect of chatting with a long-dead Holocaust survivor’s synthetic doppelgänger strange, they didn’t say so. They were accustomed to this. They’d been experimenting with artificial intelligence for months in Fairchild’s classroom in rural Evans, Colorado, using a product called MagicSchool, which is built on large language models from big companies such as OpenAI Inc. and Alphabet Inc.’s Google. They’d mostly turned to it for feedback on their writing or to summarize complex texts, but sometimes more offbeat exercises came up, like this one.
With the novelty having worn off, most students skipped the chatbot. But about a third took up Fairchild’s offer. “Hello, I am Elie Wiesel,” the chatbot began. The students’ questions tended toward the pragmatic: “how should I start my summary about Night by Elie Wiesel,” one boy wrote. The chatbot suggested a line: “‘Night’ by Elie Wiesel is a powerful memoir that recounts the author’s harrowing experiences during the Holocaust.”
The student (whose parents asked that he not be named) raised his hand. “Mister, what if I copy exactly what it says?” he asked. He would get zero points and a chance to redo the assignment, Fairchild said; better to paraphrase the chatbot if he agreed with its output. Nearby classmates who’d forgone the AI help were already well into writing their summaries. The boy stared at his screen. He toggled to Google and searched for some words. “‘Night’ by Elie Wiesel is an important state in time that recites the author’s experiences during the Holocaust,” he typed.
In 2022, when OpenAI launched ChatGPT and teenagers started gleefully outsourcing their homework to AI, the adults began to panic. Within five weeks, New York City Public Schools had restricted ChatGPT on school networks and devices, citing its potential negative effects on learning and worries about the safety and accuracy of its content. Others followed New York’s lead.
But as US school districts open for their third academic year since ChatGPT’s introduction, things have changed. Most of the biggest districts, including New York’s and the six next largest, are allowing and even encouraging some AI use—particularly by teachers, though increasingly by students as well. The Consortium for School Networking, a professional association, found this year that just 1% of surveyed districts were banning AI, while 30% said they were embracing it. (The rest said they weren’t sure, their plans weren’t defined, or they were using it only for certain things.) A little more than half said they were training educators to use AI for instruction.
For researchers, educators, parents and students themselves, there are plenty more unresolved questions: How might AI-generated factual errors warp students’ understanding of the world? How might AI-generated hand-holding stifle their own self-expression and problem-solving? How might AI-generated learning resources (including chatbot avatars) marginalize the human scholarship they’re based on? How might AI-generated grading perpetuate racial and gender biases? How might AI companies cash in on their access to student and teacher information? What might we all lose in giving up social learning processes for computational ones?
At a time when unprecedented political and financial constraints make it tough to enact proven systemic reforms, district officials are also betting that AI can take some pressure off administrators and teachers facing high burnout and attrition by helping them with tasks such as emailing parents and generating lesson plans. And they hope it can address declining US test scores and student engagement by customizing teaching material for each student’s needs and interests. Evidence is mixed on whether AI can help accomplish either goal, but in the meantime the financial math has been persuasive: Bringing AI into schools might cost a typical district from nothing to several million dollars a year, depending on the district’s size and the products used, compared with a much higher price tag for structural changes such as hiring more teachers.
Officials tend to describe their AI conversion as a natural result of research and soul-searching. They also acknowledge the influence of the companies selling the technology. OpenAI, Google and Microsoft Corp. have each developed education-centered versions of their main AI products—ChatGPT, Gemini and Copilot, respectively. (An OpenAI spokeswoman described the version of ChatGPT for educational institutions as being made for colleges and universities but noted the company has signaled an interest in schools too; Google and Microsoft already gear their offerings toward schools.) And those companies and a handful of well-funded startups have crafted a muscular campaign to embed their AI products in educational settings, including a push to get teachers like Fairchild to use them.
While the companies describe motivations that sound like those of district officials, the financial upside for Silicon Valley is undeniable. Grand View Research estimated last year that the global market for educational AI technologies would rise from $6 billion to $32 billion by the end of the decade. And the business potential goes well beyond that. When executives fantasize aloud about the AI superusers of the future, they’re implicitly referring to the generation still in school. Talking up AI’s educational possibilities also supports companies’ argument to policymakers that any potential societal harm from their products is offset by potential gains.
The companies say their support is meant to make sure AI enters schools in a way that benefits teachers and students. Officials at the affiliated groups, meanwhile, stress their independence, pointing out that they also have noncorporate supporters and at times oppose tech companies’ AI priorities, question their motives and critique their products. But the groups, like the companies, still tend to highlight the importance of preparing students for an AI-dominated future and the argument that risks can be mitigated by responsible policies and habits. And language to that effect has found its way into the AI plans of districts that lean on the groups for guidance.
The outreach from Silicon Valley seems to follow a playbook developed during a decades-long attempt to turn public education into a market for private products. That effort has seen some high-profile failures: for instance, massive open online courses, which were supposed to expand access through web-based classes, and personalized learning regimes, which were supposed to use software to perfectly match students with material at their level. It has also included the enormously successful diffusion of screen-based education platforms such as Google Classroom. One lesson learned along the way is that school technologies rarely catch on without buy-in from teachers.
Getting the products into classrooms and training educators to use them is an obvious first step. But Inioluwa Deborah Raji, a member of TeachAI’s advisory committee and an AI researcher at the University of California at Berkeley, told me she worries about a lack of “critical skepticism” of AI in schools—including from the consortium she advises—given the dearth of information about whether and how it works. “It’s like putting a car on the road without really knowing how far it can go or how it brakes,” she said. “It becomes weird to see it so widely adopted without that due diligence.”
If anyone grasps the awkwardness of the dance among the private, public and nonprofit sectors in education, it’s Elisha Roberts. In 2024, Roberts had recently been hired at the nonprofit Colorado Education Initiative (CEI) when she was asked to help manage a program run by the organization in partnership with AIEdu, MagicSchool and others. The program would bring AI education into schools using $3 million disbursed through a state grant program funded largely by federal pandemic relief.
A Denver native and former principal, Roberts was nevertheless in some ways a strange choice for the gig. She’d majored in politics at Occidental College in Los Angeles, where she’d been profoundly taken with the seminal Brazilian educator and philosopher Paulo Freire’s notion that the purpose of education is liberation: Students develop critical consciousness, recognize oppressive social conditions and work to change them.
Freire’s ideas were especially resonant for Roberts because of her identity as a Black, queer woman. She’d grown up wanting to join the US Supreme Court, but in college she decided to be a teacher. She studied in Botswana, taught in Japan and earned a master’s degree in education at Boston College, then returned to Denver and became the principal of a charter school that she tried to infuse with Freirian principles. Roberts had only recently left that position when the chance to join CEI, as an assistant director focused on partnerships, came up.
She tended to be cautious about AI products. “We don’t know enough about the long-term impacts to actually be introducing it to kids,” she told me. But she’d seen teacher burnout and student disengagement firsthand and was open to the argument that AI could help. “This isn’t about me and my feelings,” she said. “This is about the tools and how they can support teachers.”
Long before his Elie Wiesel experiment in Evans, Nate Fairchild had gotten curious about AI; when he learned about the fellowship program, he applied and got in. He established a goal for himself: to see if MagicSchool could help both students who struggled with learning and those who were doing well and needed a challenge. He found the training from MagicSchool limited, focused largely on reviewing product features. “If you want to hear all the ways in which AI is this amazing, revolutionary tool that’s going to make everybody’s life better, the companies that are providing the trainings will tell you that all day,” Fairchild said.
So he did some additional research and devised his own 90-minute introduction to AI for his students, problems and all, before opening up their MagicSchool access. He was soon seeing promising results. Those who struggled with reading comprehension could get MagicSchool to explain texts. Those working above grade level could engage with the chatbot about complexities that peers might miss. Students at all levels liked getting writing feedback. One of them, Aesha Garcia-Guerra, told me, “AI gives me a chance to see the mistake before I turn it in, so I don’t miss a point.”
But challenges emerged before long. During my visits to Fairchild’s classroom, each time he introduced an assignment for which kids could use AI, he reminded them to critically assess the outputs. I saw Garcia-Guerra do this on occasion—one time she caught the chatbot citing the wrong chapter from a reading she’d done. Otherwise I rarely observed the students checking its outputs. A couple of times, I saw errors or biases go apparently unnoticed, and in class debates some cited comments from “the AI” as evidence supporting their claims.
The chatbot’s impersonations of historical figures seemed especially fraught. Once, when asked to pretend to be John Brown—an abolitionist who murdered five supporters of slavery and characterized his actions as righteous—the chatbot insisted, as Brown, that violence is never the answer. (This might have been because of safety guardrails keeping the chatbot from generating violent rhetoric.)
Incidents like these bugged Fairchild, but he viewed them as learning opportunities; he’d hoped all along that provocations like the Elie Wiesel chatbot would compel students to critique AI themselves. When kids turned in assignments that seemed too reliant on MagicSchool, he made sure to flag the issue, giving them partial credit and a chance to redo the work. He was also planning to start using inaccuracies, biases and problematic impersonations to initiate more pointed conversations about the shortcomings in real time.
At the same time, Fairchild helped with some professional development sessions for colleagues on how to use MagicSchool and made himself available to company representatives, though he remembers being individually asked for advice only once. He said his independent research and his 15-plus years of teaching experience helped him critically evaluate MagicSchool’s products without getting caught up in its commercial interests, though he wondered if less experienced colleagues would have a harder time.
Adeel Khan, the chief executive officer of MagicSchool, told me that when his startup joined the program to bring AI education to Colorado schools, it had barely started attracting users; he said he’d been “excited to spread that all through Colorado.” The company has since grown to be one of the most successful US startups selling AI to schools.
Its classroom chatbot, like others, relies heavily on the effectiveness of the large language models that underlie it. When I mentioned to Khan that MagicSchool’s ability to mitigate problems with accuracy and bias might be limited as a result, he said, “You’re right. There’s not that much we can do to mitigate it.” He added that the company puts prominent disclaimers inside its products noting that they might produce inaccurate or biased language and that users should double-check for this. “Every time a student signs in, it’s like, hey, this is how you use it responsibly,” he said. “We think it’s our responsibility to educate people.”
Tony Wan, head of platform at MagicSchool investor Reach Capital, explained to me that AI education companies benefit from teachers and students flagging inappropriate content and otherwise helping guide product development. To that end, he said, “we often encourage our founders to just get this in the hands of teachers and users as quickly as possible—not necessarily as a refined product. And I don’t mean that in a bad or irresponsible way.” Wan later clarified that this “should not come at the expense of quality or pose risks.”
During the 2022-23 school year, around the time New York City Public Schools blocked ChatGPT access in schools, someone with the district contacted Microsoft looking for advice. “They said, ‘We need you to come here immediately,’” Deirdre Quarnstrom, Microsoft’s vice president for education, recalled in an interview. Company representatives traveled to New York and gave what Quarnstrom described as a 101-style introduction to LLMs. That spring, then-Chancellor David Banks wrote in an op-ed that the district would “encourage and support our educators and students” in exploring AI. The district now lets teachers use Copilot and Gemini, with ChatGPT available by request (though students still can’t use the products).
The next three largest US school districts ramped up their AI investment as well. Los Angeles Unified enabled Gemini for all employees, with a plan to open access for students in 6th through 12th grade in 2025-26. Miami-Dade County Public Schools trained educators on Gemini and began rolling it out to high schoolers in 2024-25. Chicago Public Schools started testing Google’s and Microsoft’s products with teachers and was considering opening student access too.
Despite all the investment, by the 2024-25 school year, teachers themselves weren’t embracing the technology to the same extent as education officials. A Gallup and Walton Family Foundation poll found that while 60% of teachers had used AI during the school year, they weren’t turning to it much. More than half of the respondents said they’d spent three hours or less learning about, researching or exploring AI tools, compared with about a quarter who’d spent at least 10 hours. Three-fifths said they never ask students to use AI. Teachers were also far more likely to believe weekly student use of AI would decrease, rather than increase, writing skills, creativity, critical thinking and communication, among other abilities.
Besides working with outside groups on training, tech companies created their own training for educators, which could count toward the professional development that states and districts tend to require. A lot of these offerings were similar to what Fairchild received in Colorado, with an emphasis on product tips. They weren’t always on point. Google’s training gave the example of a writing teacher sending an AI-generated email to students—urging them to practice writing, of all things. “Summer’s a blast, but don’t let your storytelling skills take a vacation!” it read. Training from OpenAI and Common Sense Media had ChatGPT create a multimedia presentation on the Mexican Revolution. “This image highlights significant figures and moments,” declared a text caption for a resulting picture, in which no one shown resembled any well-known revolutionaries. An OpenAI spokeswoman said that training would be “refined based on feedback.” Robbie Torney, the senior director of AI programs at Common Sense Media, said the organization agreed that the presentation was problematic and that it and other “outdated” material wouldn’t be included in future training. “That example perfectly illustrates why using AI-generated images is so tricky—they’re sophisticated fiction, not factual representations,” he said.
These kinds of fictions were already showing up in classrooms by then. The Houston Independent School District required educators in some schools flagged as underperforming to use district-provided teaching materials generated partly with AI. One worksheet asked students to compare AI-generated imitations of Harlem Renaissance art in which the faces of Black-appearing characters were distorted, prompting a backlash among some community members. (A spokesman for the district said that the AI creations weren’t so different from real Harlem Renaissance art with abstract faces and that teachers could flag problematic AI-generated material.)
As the 2024-25 school year came to an end, districts were putting AI to a broadening range of uses: “coaching” teachers, detecting guns, chatting with students about their mental health. Then proponents of AI in schools got a boost from the highest level of the US government. In April, President Donald Trump issued an executive order calling for bringing AI education to kindergarten through 12th grade—including using it to design instructional resources and tutor students—with the help of partners in the private sector. “We must provide our Nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology,” the order read.
If one vision for education is to mold children into capable users and developers of AI products in partnership with private enterprise, a version of it can already be found in an office-like building in downtown Austin. Last winter at the high school campus of Austin’s Alpha School, a well-produced promotional video featuring co-founder MacKenzie Price played on a screen mounted on the foyer wall. Another screen showed an X feed cycling through posts from public figures with some thematic link to the school, including Jeff Bezos, the YouTuber MrBeast and Grace Price, the co-founder’s niece and a recent Alpha graduate specializing in Robert F. Kennedy Jr.-aligned health-care activism.
Alpha School is a private K-12 institution that first opened in Texas, under a business called Legacy of Education Inc., and has annual tuition starting at $40,000 on most of its campuses. It’s built on the idea that, by personalizing education with AI products, schools can squeeze six hours’ worth of learning at a typical campus into about two hours, freeing up students to spend more time on life skills and individual pursuits; to accomplish this, the school has relied partly on educational products from a company called 2HR Learning Inc., also co-founded by Price. “It’s allowing their learning experience to be so much more efficient and effective,” Price told me. AI is so central to Alpha, she said, that even its kindergarteners are exposed to it, receiving AI-generated tips on improving their speech patterns: how many words they speak per minute, their use of filler words.
“I feel like there’s just so much potential in the kids, and this type of role just really allows you to unlock that potential and be a mentor,” said Carson McCann, the guide with the HR background. In that capacity, he added, “I don’t do academics really at all.” The student nearest him was studying calculus. “I’ll be honest, I haven’t touched calc in seven years,” McCann said. (He ended up leaving the school at the end of the year; he couldn’t be reached for comment about his departure, but his LinkedIn page shows he founded a consulting firm.)
If the high schoolers needed help, they could use an on-screen AI chatbot or get AI-generated writing tips (though Price later told me the school stopped offering the chatbot because of cheating concerns). The kids also had remote access to human tutors.
Students were free to use AI products from outside providers as well. Price talked about a student habit of using ChatGPT and other chatbots to convert long texts into factoids on digital flashcards, then memorizing those instead of doing the reading. As long as a student did well enough on the school’s on-screen knowledge assessments, Price said, she applauded the shortcut. Showing through quizzes that they’d mastered concepts earned the students XP—experience points, like in video games—which they could convert into dollars for investment into personal projects called “masterpieces.”
Price expressed pride in the students’ masterpieces: a business selling talking stuffed animals that would give AI-generated mental health advice to teens, for example, and one offering AI-generated flirting tips. Masterpieces without potential commercial applications were rarer, though Price told me about a girl who’d used AI to compose a musical.
Work on the masterpieces took place in the afternoon, beginning with students attending to their BrainLift, a document containing their project notes. Each BrainLift included a list of the contrarian beliefs that made the student’s masterpiece special, along with evidence to support those beliefs. The students then fed their BrainLifts to a 2HR Learning platform whose built-in AI chatbot could accommodate the contrarian beliefs they’d described.
When I sat down with the head of the high school, Chris Locke, he told me the school had a name for the contrarian beliefs it encourages in students: “spiky points of view.” For example, “one of Alpha’s biggest spiky points of view is that you don’t need a teacher,” he said. Chloe Belvin, the guide who’d previously worked as a corporate lawyer, chimed in: “It’s funny, because in a traditional school you get in trouble if you’re using AI, and here you get in trouble if you’re not.” She added, “The starting point of every conversation I have with a kid is, ‘Is there an AI that can do this, so that you’re not spending your time on it?’”
The financial arrangements among these entities are unclear, but the filings suggest that Alpha has been serving as a sort of in-house distribution channel for a corporation developing AI products for schools. Trilogy also submitted the initial trademark applications for 2HR Learning and several education products before assigning those rights to 2HR Learning itself. And positions at both Alpha and 2HR Learning were recently posted on Trilogy’s corporate LinkedIn page.
When I asked Price in late July about the relationships among the companies, she didn’t address the specifics but said, “It’s high time that we do something different in education, and I believe that allowing capital and industry to go into education is hopefully something that’s going to work.” Through the school, Liemandt declined to be interviewed. An email to Andrew Price went unanswered, as did messages sent to an email address and a contact form on Trilogy’s website.
In August, a spokeswoman for Alpha, Anna Davlantes, told me it was “inaccurate” to characterize the school as an in-house distribution channel for a corporation. While she didn’t respond to a request for comment on Trilogy’s ownership of the school and 2HR Learning, she said that Trilogy had stopped building software for both companies and had “no plans to stay in the educational software space.” Starting with the 2025-26 school year, she said, the school is “phasing out” Trilogy products and working with “a new company.”
Davlantes didn’t respond to a request for more information about that company, but recent press coverage and public filings may offer some clues. A publication called Colossus that profiled Liemandt in August said that he had lately been building ed tech products at a “stealth lab” staffed by about 300 people and was preparing to publicly launch a flagship product called Timeback. While the article didn’t name the lab, a Texas filing in early August recorded the formation of a company called TimeBack LLC, with Andrew Price named as a manager. A website for a product called TimeBack that fits Colossus’s description, meanwhile, calls it the system behind Alpha’s schools. And Legacy of Education has a trademark pending for the name. The article describes the product as recording a raw video stream of students, monitoring the “habits that make learning less effective, like rushing through problems, spinning in your chair, socializing,” then generating feedback for kids on how much time they’re wasting and how to do better.
Alpha’s privacy policy accounts for this sort of tracking and more, claiming far more access to student information than is typical for companies selling AI to schools, including MagicSchool. Alpha can, for example, use webcams to record students, including to observe their eye contact (partly to detect engagement and environmental distractions). And it can monitor keyboard and mouse activity (to see if students are idle) and take screenshots and video of what students are seeing on-screen (in part to catch cheating). In the future, the policy notes, the school could collect data from sleep trackers or headbands worn during meditation.
Student information can be used not only to keep products functioning but also for other purposes, including to analyze users’ interest in Alpha or 2HR Learning technology “or content offered by others”; its operation involves sharing personal data with “business partners” for unspecified reasons. Davlantes said that student data is “fiercely protected” and that, in practice, it isn’t shared outside Alpha’s “educational system” and is used only for “providing student feedback and improving educational systems or outcomes.”
Across America, the private sector’s role in bringing AI into schools is only deepening. In June, Trump announced that more than 60 companies and organizations—including Microsoft, Google, OpenAI, MagicSchool and Alpha—had pledged to make resources such as AI products and training available to schools. In July, not long after the Supreme Court ruled that Trump could keep dismantling the federal Department of Education, Education Secretary Linda McMahon (whose main association with AI remains the time she went viral for pronouncing it “A1,” like the steak sauce) issued guidance detailing how districts could spend federal funds on AI.
The biggest AI companies are also making back-to-school plans, ramping up their outreach to students and their families themselves. Google added studying-oriented features to its search platform’s AI Mode. OpenAI, in addition to announcing a deal to embed its models in the popular Canvas learning management system, introduced a study mode.
The companies’ outreach is extending to the largest US teachers unions too. In July, Microsoft, along with OpenAI and Anthropic PBC, announced a $23 million partnership with the American Federation of Teachers (AFT) to create the National Academy of AI Instruction, which intends to train 400,000 teachers—about a tenth of the US total—over five years. Microsoft’s investment in that partnership is part of Microsoft Elevate, a new global initiative focused on AI training, research and advocacy, which aims to donate $4 billion over five years to schools and nonprofits. That initiative also encompasses a partnership with the National Education Association (NEA), which will include technical support and a $325,000 grant.
The president of the AFT, Randi Weingarten, said in an interview that she’s come to believe that AI will be as transformative as the printing press and that teachers should learn to use it. With limited government support for any large-scale training, she felt she had little choice but to turn to Silicon Valley. “Professional development done by teachers for teachers is actually the best thing to do,” she said, “but where are you going to find that money?” Daaiyah Bilal-Threats, senior director for national education policy at the NEA, characterized her union’s relationships with big tech companies around AI—it has worked with Google too—in part as a chance for teachers to influence product development. “It could be dangerous for them to be developing this technology without educator input,” she said.
MacKenzie and Andrew Price, meanwhile, are trying to expand into charter schools outside the Alpha brand. In applications to open schools across the US, they’ve described plans to rely on Trilogy products, positioning Alpha as evidence of past success. Five states, including North Carolina, have rejected the applications, but Arizona approved a virtual school called Unbound Academy. Meanwhile, Alpha itself is opening about a dozen new private-school campuses across the US this fall, including one in New York City that Ackman is helping to promote.
This comes at a time when federal and state laws, including in Alpha’s home state of Texas, increasingly allow the use of public funds for private schooling. “Education’s a trillion-dollar industry—K-12 in the US,” Liemandt said at the Baja conference. “We have to go build 10,000 schools. Back to capital, we need a ton. Donations don’t get this done. We need, to build this, billions and billions of dollars.”
Some education researchers see dystopian overtones in all these developments. Alex Molnar, a director of the National Education Policy Center at the University of Colorado at Boulder, imagines one possible scenario in which everyone relies so heavily on AI that students can’t explain the thinking behind their assignments, teachers can’t explain the thinking behind their student evaluations, and administrators can’t explain the thinking behind their strategic decisions. All the while, local funds and data flow to faraway private corporations. “We essentially will have then transformed public education,” he warned, “from a civic institution into a portal for funneling money to private interests.”
But none of that is inevitable. A grassroots movement is growing among those determined to resist the proliferation of AI in schools. The Civics of Technology Project, founded in 2022 by educators and researchers, wants to instead have administrators, teachers, students and parents prioritize studying “the collateral, disproportionate, and unexpected effects of technology”—including AI. One option is to imagine, and work to bring about, an alternative future in which AI doesn’t dominate. “There are ways that teachers, caregivers and students, too, can say, ‘Well, what if I don’t want to have to use this technology?’” said Charles Logan, a research fellow at Northwestern University and a board member for the Civics of Technology Project.
In Colorado, Fairchild was entering the new school year feeling cautiously optimistic about AI. Students who began last year below grade-level expectations seemed to have improved their written and oral communication more than similar past students. A standardized test measuring students’ knowledge acquisition toward the end of the last school year also showed better results than in years past.
Yet Fairchild wasn’t sure how much of any of this could be directly credited to MagicSchool. Rather, he suspected his students’ use of the platform had forced him to change his teaching. To be sure they were relying on their own thinking, he’d pushed them to explain in assignments and discussions how their own backgrounds and experiences informed their perspectives. This is a recognized method for engaging students, he said, and the availability of AI had caused him to use it more. That, he suspected, was an important reason his students had done well. He’d done the time-consuming, impossible-to-scale work of becoming a better teacher.
I realized that, for all our conversations about how the students used MagicSchool, Fairchild and I hadn’t discussed whether he was using AI to generate lesson plans and so on, which the companies typically center in their training materials. When I asked him about this, he admitted he wasn’t. He doubted it would actually save him time, and he also had a deeper reason. “I have an artistic resistance to it,” he said. “For me that’s where the art of teaching sits—processing my students’ needs and building a lesson and then building a rubric and evaluation for it. For me that’s where the emotional and spiritual dialogue between the teachers and students is, so at this time, I’m unwilling to hand that off.”
CEI’s Roberts told me that rationale made sense to her. In fact, she said, she wasn’t using AI much herself. Having learned all she had about AI and its potential role in education, she’d arrived at a sharp critique of the technology. At one point, she texted me, “The negative impacts of tech always impact low-income Black and Brown communities first and more.”
Her candor was jarring, coming from someone so involved in one of the highest-profile statewide AI education programs in the US. She’d recently been promoted to become CEI’s director of district implementation and partnership, with the statewide AI program set to expand in the 2025-26 academic year. Plans include offering AI training to students and school counselors in addition to teachers. But while all this seemed to conflict somewhat with Roberts’ personal views, she said she’s constrained by the demands of American education culture.
It’s a culture in which, with ever-diminishing resources available for proven structural improvements, some educators find that AI assistance makes their life a bit easier and their students a bit more engaged. It’s also a culture in which schools are viewed less as a route to liberation than as a training camp for a future workforce. Assuming AI companies continue to dominate, the students Roberts cares about could graduate into a more precarious future if people like her don’t help them play along.
“If I could wave my magic wand and AI doesn’t exist, I’d be like, ‘Great,’” she said. In the absence of that, she said, she had a plan. This year she hoped to transform the Colorado program as much as she could. Even as she facilitated the advance of AI products into schools, she planned to raise awareness about the environmental impact of those products, the ideological influence of the corporations behind them and the possible negative impacts on learning. A term already existed to describe the kind of work she’d be doing. The job at hand, she said, was harm reduction.