
We can’t wait for another Mississippi Miracle

Original title: We can’t wait for another Mississippi Miracle
Source: eSchoolNews | Published: 2026-04-17
Original link: read the original article



Key points:

  • Strong AI will only come from strong data grounded in learning science

  • A new need-to-know for the AI classroom

  • How educators are shaping the future of edtech

  • For more on AI’s role in successful learning, visit eSN’s Digital Learning hub

Recent findings on the negative impacts of AI on learning might be sparking national debate, but they are unsurprising to learning scientists. In fact, these results highlight a long-standing U.S. trend of using what “feels right” or “sounds good” instead of following well-established education research.

A recent Massachusetts Institute of Technology study shows that students’ over-reliance on technology in general, and AI in particular, bypasses essential learning processes during crucial stages of childhood and adolescent cognitive development. Examples of technology interfering with the development of critical thinking skills are abundant. They include replacing handwriting with keyboarding, reducing the importance of students’ automatic recall of foundational knowledge, and giving students answers before they engage in productive struggle.

AI should undoubtedly play a role in supporting learning, but it must be in a way that enhances, not interferes with, core learning science principles. Responsible and effective AI use requires strong data and oversight. When AI is grounded in a complete, accurate view of each student, educators can quickly make instruction more contextually relevant and deliver practice within each student’s zone of proximal development. Used this way, AI becomes a tool that deepens thinking, supports personalization, and accelerates meaningful academic growth.

However, to see the results of our failure to use technology in a way that follows, not circumvents, evidence-based guidance, one need only look at our students’ declining test scores. Not only are scores lower in absolute terms, but also relative to our Asian and European counterparts, who have wisely been managing technology usage, particularly for younger students.

What can happen when we embrace learning science

A notable exception to our collective national dissonance with the learning sciences is the widespread adoption of science of reading requirements in more than 40 states since 2019. While this recent success provides a beacon of hope, the full history is far more complex and leaves much uncertainty about how schools respond to AI.

The “Mississippi Miracle” started the science of reading movement when Mississippi essentially went from worst in the nation to top 10 in NAEP fourth-grade reading scores in just six years. What’s less well-known is that the Mississippi adoption came 20 years after the National Reading Panel report left little debate about the best way to teach students to read.

Even then, Mississippi’s spectacular performance wasn’t enough. Rather, it was the groundswell of outrage from parents based on their firsthand experiences during the pandemic and spurred on by the 2022 podcast, Sold a Story, that led to the near nationwide mandate for evidence-based reading practices.

It’s unclear what spark could ignite a national mandate around AI and learning science. It might be family pushback against the $30 billion market for devices in schools. Or professional health advisories about AI and adolescent well-being.

The workers behind the ‘miracle’

To be clear, the Mississippi Miracle was no miracle. It came about through courageous leaders willing to put aside wishful thinking about technology and instead embrace the science, and the associated hard work, of making systemic changes to properly teach kids how to read.

Glimmers of this courageousness are shining from organizations that lift up the most essential elements of effective learning, address ethical considerations around AI use, and highlight the complexity of human thought, which integrates emotion, context, nuance, and embodied experience. For instance, the Collaborative for Academic, Social, and Emotional Learning recently dedicated several conference sessions to the connections between social-emotional learning and AI.

At the state and local levels, Mississippi legislators and education leaders did the boots-on-the-ground work. They changed literacy policies, implemented comprehensive strategies, adopted new standards, hired additional literacy coaches, and spent years honing communications and convincing families and educators to give the science-based approach time to demonstrate impact.

The real question now is not what works in education. The science of learning has already answered that. The question is whether we have the collective will to ensure AI in schools is guided by that same evidence, and fueled by the kind of complete, high-quality student data that allows it to truly support learning.

Strong AI will only come from strong data, grounded in learning science and used with intention. Without it, we risk repeating the very mistakes we are trying to solve.



Nancy Weinstein is the founder and CEO of MindPrint Learning and is the Chief Innovation Officer at Otus.



Want to share a great resource? Let us know at submissions@eschoolmedia.com.


This report was automatically retrieved, translated, and published by an AI assistant.

Dissecting higher ed’s complex, yet promising, relationship with AI

Original title: Dissecting higher ed’s complex–yet promising–relationship with AI
Source: eCampusNews | Published: 2026-04-17
Original link: read the original article


Key points:

  • Universities are demonstrating widespread engagement with AI tools and technologies

  • Colleges are adopting AI faster than they can govern it

  • Creating educational value in a world of AI

  • For more news on higher ed and AI, visit eCN’s AI in Education hub

The California State University (CSU) has released findings from its first-ever systemwide survey on artificial intelligence (AI), marking the largest and most comprehensive survey to date on generative AI in higher education. The survey draws on more than 94,000 responses from faculty, staff, and students, offering key insights into higher education’s relationship with AI—one that is both promising and complex.

The new report, “Ahead of the Curve: What the Nation’s Largest Public University System is Learning about AI,” comes at a time when colleges and universities across the country are determining how to prepare students for an AI-shaped workforce while preserving academic integrity, critical thinking and public trust. The CSU AI survey’s findings suggest the question is no longer whether AI belongs in higher education, but how institutions should lead its use thoughtfully, consistently and at scale.

“We launched the largest AI initiative in higher education last year to ensure that this extraordinary technology equitably expands opportunity for CSU students, bolsters faculty and staff excellence, strengthens the California workforce, and is implemented in a manner that reflects the CSU’s core values,” said Chancellor Mildred García. “Data must inform and guide our decision-making moving forward, and this survey—given its size—sets not just a CSU benchmark, but a national one. And it marks an exciting moment for the CSU, one that demonstrates our commitment to student success by boldly and thoughtfully leading through innovation.”

“The survey results reflect what we are seeing across our universities—widespread engagement with AI tools and technologies,” said Ed Clark, chief information officer for the CSU. “As artificial intelligence becomes increasingly embedded into every academic field and every industry, it is important for us to partner with our faculty, students, employers, industry sector leaders, and state and local government officials to better prepare our students and our community for this AI-infused environment.”

Developed by researchers at San Diego State University, the CSU AI Survey was conducted in fall 2025. The CSU’s more than 470,000 students and more than 61,000 faculty and staff were invited to participate in the survey, which asked questions in five core sections: awareness and understanding of AI; experience and usage of AI; perceptions and attitudes toward AI; skills education and training of AI; and future expectations of AI. Of the more than 94,000 respondents, just over 80,000 were students—85% undergraduates—more than 6,000 were faculty and more than 7,300 were staff.

“This survey captures a moment of transition in higher education, where both students and faculty are actively assessing how AI fits into teaching and learning,” said David Goldberg, SDSU AI Faculty Fellow, associate professor of management information systems and a lead researcher on the survey. “The data gives us a powerful foundation to better support faculty by tailoring training to real needs, bringing more consistency to AI use in the classroom, and ensuring that its use strengthens learning outcomes. It also offers a roadmap for institutions nationwide to better understand AI’s role and to implement it thoughtfully, consistently, and responsibly.”

Key findings

The survey results reveal that AI awareness across the CSU’s 22 campuses* is high and that most students, faculty and staff are engaging meaningfully with it. What is also clear from the results is that adoption of AI is not without concern. While engagement is high, respondents are taking a cautious approach to AI use, not entirely trusting AI’s accuracy and expressing the importance of verifying AI outputs. There is also a near universal demand for transparency, ethical use, and responsible regulation of AI.

The following are some of the survey’s key findings:

  • AI use is widespread.
      • More than half of students, six in 10 faculty, and two-thirds of staff regularly use AI-powered tools.
      • Ninety-five percent of respondents used at least one of the 21 AI tools listed in the survey.

  • Demand for training is real, and the students who need it most want it most.
      • More than eight in 10 staff respondents and roughly seven in 10 faculty want formal AI training.
      • About half of student respondents express the same interest—but first-generation students lead at 53% compared with 45% of non-first-generation students.

  • Ethical lines are being drawn.
      • About 80% of student respondents are not comfortable submitting AI-generated work as their own.
      • The majority of faculty, staff, and student respondents say it is necessary to verify the accuracy of AI-generated content.

  • Faculty are addressing AI in the classroom and guiding students on how to use it.
      • More than half of faculty respondents use AI to develop course materials, and 69% provide students with guidance on how to use AI effectively.
      • Two-thirds include an explicit AI statement in their syllabi.

  • There is widespread belief that AI is the future—paired with job security fears.
      • About 82% of staff respondents, 78% of faculty, and 69% of students believe AI will become an essential part of most professions.
      • 82% of students, 78% of faculty, and 74% of staff express concern about AI’s impact on job security.

Click here to view the full report.

In February 2025, the CSU launched a systemwide AI strategy across its 22 universities* to ensure that all CSU students, faculty, and staff have access to free AI tools and resources and to prepare students for a rapidly evolving AI-driven workforce. Since launching CSU AI Commons, over 4,300 faculty have completed voluntary professional development in ethical and effective AI use. Training emphasizes equity, critical thinking, and academic integrity.

*Transition to 22 universities in progress (Cal Poly San Luis Obispo and Cal Maritime integrating); official fall 2026.

This press release originally appeared on the CSU Newsroom’s site.



The California State University is the nation’s largest four-year public university system, providing transformational opportunities for upward mobility to more than 470,000 students from all socioeconomic backgrounds. More than half of CSU students are from traditionally underrepresented backgrounds, and more than one-quarter of undergraduates are first-generation college students. Because the CSU’s 22 universities* provide a high-quality education at an incredible value, they are rated among the best in the nation for promoting social mobility in national college rankings from U.S. News & World Report, the Wall Street Journal, and Washington Monthly. The CSU powers California and the nation, sending more than 123,000 career-ready graduates into the workforce each year. In fact, one in every 20 Americans holding a college degree earned it at the CSU. Connect with and learn more about the CSU in the CSU newsroom.


