By Guest Contributor Dylan Arena, VP, Learning Science, McGraw Hill School
Personalized learning has long floated atop the fizzy pool of education buzzwords. Education researchers have been trying to personalize learning with “teaching machines” since Sidney Pressey introduced his “Automatic Teacher” in 1924. A century later, there are some ways in which it’s working remarkably well and others in which it’s totally overblown. The question of whether personalized learning is just a passing fad dates back years, too. In 2017, education strategist Michael B. Horn wrote about how the term had hit its peak of inflated expectations on the Gartner Hype Cycle and was due for time in that cycle’s “trough of disillusionment” before emerging to meet more reasonable expectations. Now, in 2022, I think we’re entering that phase of reason, where our understanding of personalized learning is more realistic, nuanced, and achievable.
In the five years since Horn’s article, overblown claims about robot tutors in the sky have been debunked, and the COVID-19 pandemic’s unprecedented jolt to our educational system—with students (and teachers) trying to work remotely while battling anxiety, boredom, fatigue, hunger, loneliness, and a thousand daily distractions—has demonstrated that to personalize learning we must see the person in every learner (and teacher). We have been reminded, in other words, that learning happens in messy contexts, that learners have different needs, and that we are all social creatures who learn best in stable, supportive relationships with our teachers, peers, and loved ones.
In 2017, Horn recommended that we think about personalizing learning as a set of actions that can adjust many variables, including the “time, place, path, and pace” of learning as well as more fundamental variables like how or even what to learn. Obviously, teachers cannot be expected to shoulder the burden of attending to all of these variables for each of their students. To act on a realistic expectation of personalization’s potential, school leaders, policymakers, and edtech providers will need an updated understanding of the kinds of student data and the types of instructional tools that teachers will need to personalize learning.
What We Need to Know About Learners
Horn and I both serve on the advisory board of Digital Promise’s Learner Variability Project (LVP), which has synthesized findings from learning science, data science, and cognitive science to fill gaps in education stakeholders’ collective understanding of what we must know about our learners to actively personalize learning. Specifically, LVP aims to help educators, researchers, and product developers better understand how students vary not only in academic areas (e.g., understanding of phonics) but also in their underlying cognitive factors (e.g., auditory processing and working memory), social and emotional factors (e.g., motivation and self-regulation), and student background (e.g., home literacy environment and primary language). This wider lens helps us design and/or foster learning environments to accommodate each student’s particular needs, strengths, preferences, and passions by acknowledging the influences of all elements of their experience, including how much sleep they get, how well they eat, what stressors they encounter outside of school, and how they feel about their relationships with friends.
Over the last fifty years, variability within and across classrooms has increased in essentially every measurable dimension: academic proficiencies, home languages, learning differences, physical challenges, and socioeconomic status, to name a few. Faced with this increasing diversity, educators and product developers may be tempted to “teach to the middle” by aiming to optimally support an “average” student. One of LVP’s central goals is to highlight the futility of this approach; it turns out that when you consider all of the dimensions along which students can vary, there simply isn’t an average student. LVP also makes clear, however, that we can accommodate our ever-more diverse learners with thoughtful instructional design choices about what, how, and why we personalize for each learner. These thoughtful choices will depend, of course, upon knowing much more about each learner than the basic academic data points our systems typically collect.
How We Can (Responsibly) Come to Know and Respond to Learners
Knowing these things about students in a way that allows us to personalize at scale requires school leaders and edtech developers to (responsibly) broaden the range of learning-relevant data that we use to make well-warranted inferences about where students are and how we can best support them on any given day. This doesn’t have to be complicated: A great product called Heartbeat by Shmoop asks students questions like how they’re feeling, whether they’re sleeping and eating well, or whether they’ve gotten into a fight with their significant other. Such questions (which are all optional) allow students to choose to share aspects of their lives that might be causing them to struggle, or to thrive, and help teachers to connect with them as people.
Although collecting data of this sort doesn’t have to be complicated, it does have to be done responsibly. Relationships drive human learning, and trust drives relationships, so any personalized-learning system that uses sensitive data must be designed to earn the trust of all stakeholders involved. Data privacy and security are table stakes: Losing or leaking sensitive data should be as unacceptable for educational institutions and edtech providers as it is for financial or medical organizations. But we must also design for intentional, conscientious, and narrowly scoped uses of sensitive data. One design principle might be to collect any specific piece of sensitive data at scale only once we have a clear, well-researched use for it—and to use that piece of data only for that purpose rather than storing it “just in case.” Another might be always to present data to stakeholders alongside sufficient contextual guidance for those stakeholders to take meaningful action. Without such guidance, sensitive data might be used to further marginalize a student (“well, with his working-memory trouble, Dylan clearly doesn’t have what it takes to read at grade level”) rather than offering insight into how best to support that student. Perhaps the most basic design principle for responsible use of sensitive data is to incorporate diverse perspectives into the design process, to maximize the probability that someone’s distinct perspective will allow them to catch what others with different perspectives might miss.
What Teachers Need from the Rest of Us
To responsibly personalize learning at scale, we will need to design digital technologies that enhance the very human relationships between teachers and students. Doing so will require changes in data collection, instructional design, technology development, and even policy, but it won’t require changes in teachers’ missions. Teachers have always sought to know their students, because they have always understood that personalizing learning is a matter of helping students take ownership of their learning while navigating their complex lives. Teachers have always known, in short, that their work is about relationships. Now it’s time for the rest of us to catch up—to build tools, systems, and environments that better equip teachers to meet learners where they are and help them get where they want to go.
For a deeper dive into the evolving conversation around the future of personalizing learning and the innovative solutions required to empower educators, join us for the McGraw Hill Innovation Conference this October. Register to attend in person or virtually in the Metaverse here: https://mhinnovation.com.

Dylan Arena is a learning scientist with a background in cognitive science, philosophy, and statistics. He has done extensive research and development in next-generation assessment and giving meaning to learning-relevant data. Dylan developed software at Oracle, returned to Stanford for graduate school, and co-founded edtech startup Kidaptive, which was acquired by McGraw Hill in March 2021 to continue its mission of leveraging data to support learners and their teachers, parents, and other stakeholders. Dylan is or has been a youth mentor, tutor, substitute teacher, rugby/soccer/baseball coach, and advisor in the startup, nonprofit, and private-equity sectors.