
Exploring algorithmic literacy in an age of digital learning.
Role
UX Design · Content Strategy · UX Research
Team
Laura Carr + Marine Au Yeung + Amelia Barlow + Paige Ormiston + Jasmine Whiting
Context
Client: Artefact
Focus: Education · Designing for Youth · Responsible Design
Duration: 2020 · 13 weeks
Recognition
FastCo 2021 Innovation by Design Award: Winner of the Learning Category
Core77 Design Awards 2021: Winner of the Interaction Award
Overview
Opportunity
The Most Likely Machine was born out of an interest in exploring Responsible Design in education. Responsible Design is a philosophy focused on using human-centered design to achieve positive long-term outcomes. So how might we equip today's young students to be more informed digital citizens? We began by unpacking a topic that affects all of us: algorithms.
Outcome
We created an interactive digital learning experience designed for independent use as well as educator-guided in-classroom or remote learning, inclusive of all modes of learning. The Most Likely Machine prioritizes giving students the agency to embark on their own journey of learning about algorithms.
Approach
Research + Understand
Inspired by resources from the MIT Media Lab, Pew Research Center, the Algorithm Literacy Project, and Common Sense Media, the Most Likely Machine prototype built on robust research and curricula for teaching pre-teens about algorithmic bias. We used this research to ground our understanding of algorithms and to investigate a novel, creative, meaningful, and fun way to advance this teaching.
Ideation + Concept Generation
As a team, we ran through many ideas and concepts before settling on the Most Likely Machine. I was instrumental in pushing our concepts to consider educational principles like creating context, providing a sandbox, and incorporating reflection. My main contribution was establishing the theme of yearbook superlatives and historical figures, a model that resonated well with the principles of bias and winners versus losers.
Validation + Refine
As the sole researcher on the team, I ran recruitment and two rounds of usability studies with students ages 10-14 to validate our concepts and learning objectives. In the initial round, I found that students who were unfamiliar with algorithms prior to the study did not grasp the learning objective, which led to revisions in our design and greater success in the second round of validation.
Design + Build
After refining our wireframes and interaction models, the team moved on to designing the algorithm, building its logic, and creating a visual language that allowed us to capture attention, make the abstract tangible, and breathe fresh life into historical figures. As a team, we crafted the end-to-end experience, taking a mobile-first approach and using Figma to work collaboratively before handing it off to another internal team to build the live site.
Output
Creating engaging and relevant context
With our community of focus being students ages 10-14, we created the context of Millennium Middle School, knowing the yearbook superlative contest would be a familiar annual tradition. The twist: the classmates at Millennium Middle are all recognizable historical figures, which capitalizes on the existing knowledge we have about people like Albert Einstein and Rosa Parks, in other words, bias. The classmates are all vying for one of three awards: Most Likely to Go to a Top University, Most Likely to Go Viral, and Biggest Troublemaker.
Building your algorithm piece by piece
Students are then put into the driver’s seat of creating their very own algorithm to determine who should win the yearbook awards. Before entering the main activity, students are asked to choose winners the old-school way: by voting. Then they forge on to create the algorithm and see which process is more fair; little do they know, in doing so they are embedding their own bias about which characteristics the winner of each award should have. After categorizing the traits, they move into an optimization activity to fine-tune their algorithm before running it and seeing who the algorithm picks as winners.
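To make the mechanics concrete, here is a minimal sketch of how a weighted-trait scoring algorithm like the one students build could pick a winner. The trait names, weights, and scores below are hypothetical illustrations, not the actual logic behind The Most Likely Machine.

```typescript
// Illustrative sketch only: the traits, weights, and scores are invented
// for demonstration and are not the production logic of the experience.

type Classmate = {
  name: string;
  traits: Record<string, number>; // trait scores, e.g. 0 to 5
};

// The "algorithm" a student builds is essentially a set of weights:
// how much each trait should count toward an award.
type StudentAlgorithm = Record<string, number>;

// Score a classmate by summing each trait multiplied by the weight
// the student assigned to it.
function score(classmate: Classmate, algorithm: StudentAlgorithm): number {
  return Object.entries(algorithm).reduce(
    (total, [trait, weight]) => total + weight * (classmate.traits[trait] ?? 0),
    0
  );
}

// The winner is simply whoever the weighted sum ranks highest: a deterministic
// series of steps, shaped entirely by the choices the student made.
function pickWinner(classmates: Classmate[], algorithm: StudentAlgorithm): Classmate {
  return classmates.reduce((best, candidate) =>
    score(candidate, algorithm) > score(best, algorithm) ? candidate : best
  );
}

// Hypothetical example: weighting "curiosity" heavily may crown a different
// winner than the student's initial gut-feel vote.
const classmates: Classmate[] = [
  { name: "Albert Einstein", traits: { curiosity: 5, rebelliousness: 3, charisma: 2 } },
  { name: "Rosa Parks", traits: { curiosity: 3, rebelliousness: 5, charisma: 4 } },
];

const myAlgorithm: StudentAlgorithm = { curiosity: 3, rebelliousness: 1, charisma: 2 };
console.log(pickWinner(classmates, myAlgorithm).name);
```

Because the winner falls out of whichever traits were weighted most heavily, two students can hand the same classmates to their algorithms and get different winners, which is exactly the kind of mismatch the activity surfaces next.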

Showing the mismatch between expectation and reality
Once results come out, students can compare the winners their algorithm produces against their initial vote. More often than not, the results are not the same. Why? Because algorithms are influenced by the people who create them, and are simply a series of steps and calculations. This creates a moment of learning and investigation as students dive deeper into why the algorithm picked its winners, illuminating how each step they went through influenced the algorithm’s decision making.
Connecting the dots and consequences
We created a reflection section for students to unpack their experience on the Most Likely Machine, but also to learn the “so what.” We included examples of algorithms gone wrong that relate to each of the awards, demonstrating real-life consequences that can be unjust, racist, and harmful. For Most Likely to Go to a Top University, we talked about unfair grading; for Most Likely to Go Viral, we highlighted YouTube’s often dangerous algorithm; and for Biggest Troublemaker, we discussed racist policing and arrests.
Reflections & Impact
Designing for trust and inclusion through educational world-building.
I learned the value of participatory design, the importance of context building as a way of inviting investment, and how algorithms can be both good and bad actors.
The Most Likely Machine was successfully piloted in classrooms, and our work has led to partnerships with the non-profit organizations Kids Code Jeunesse and the Technology Access Foundation (TAF).