MIT: Playtesting

(For anonymity’s sake, some of the MIT-specific names and spaces have been changed or removed.)

There are several educational technology labs within MIT that collaborate regularly. Each month, one of these labs hosts a playtest where researchers present their projects to people outside the department (and outside MIT) and invite these user testers to try the projects and assess their usability. While this is not a design space in the strictest sense – the projects in question are premade – the playtest gives user testers a unique opportunity to see works in progress and offer feedback based on their own experiences and expertise. It is also a chance for us to share our knowledge and tools with the greater community.

As someone who has presented projects several times in these playtests, I have noted that we rely heavily on user feedback to improve our games, simulations, and experiences. In other words, everyone is an expert based on their own lived experience. Whether the user tester in question is a student, teacher, professor, scientist, or concerned citizen, their input matters greatly to the future of each project. As an educational technology creator, I subscribe to iterative design – the theory that design takes a circular path and can be improved through continuous feedback, change, and reiteration. Change is constant and ongoing, emergent from an accountable, accessible, and collaborative process.

However, I realize that these playtests are limited in terms of design justice. The fact that we present half-complete projects rather than potential ideas is limiting in itself; we create what we believe is useful and show it to a community that may or may not need it. While we do work with schools and organizations from the beginning of the design process, we can only do so much within the playtests to create community-led and community-controlled outcomes. Users are required to work within our preexisting structures to offer criticism and feedback.

Another major issue is the imbalance of power between the researchers/designers and the user testers. Although we regularly encourage users to speak up and make their voices heard, this is not always easy. Far too many times, users became intimidated by the technology and immediately shut down. I still recall the dozens of users who told me “I’m not a gamer” or “I don’t know computers” and refused to offer feedback because they believed they were not “smart enough” to do so. That users felt uncomfortable was entirely our fault. It is the designers’ responsibility not only to make users heard, but to make them feel empowered to speak up.

The playtests provide an opportunity for teachers to tell us what may “already be working for them” and for other educators. However, the limited demographic representation of user testers is yet another issue. Many of the user testers are MIT students and staff who heard about the event through email or word of mouth; others are locals from outside the bubble who receive newsletters from the edtech labs or are friends and former colleagues of the designers. Many of the educators represented, then, come from schools in the greater Boston area that regularly work with MIT. What of people who lack the means to come to MIT, or the connections to hear about our playtests? What of the children and students who would be most directly affected by our projects, who rarely if ever appear at the playtests? Large swaths of the Boston community are strongly underrepresented among the user testers, which compromises our ability to deliver true design justice.

While playtests are only one step within the greater circle of iterative design, it is vital that we connect more deeply with the community we serve throughout the entire process and reach far beyond the MIT bubble to do so.