This talk will examine the latest findings in the field of Multimodal Learning Analytics (MMLA), and their potential to support learning and teaching. I will focus on the challenges of implementing multimodal systems in real-world settings, and how we can translate lab findings into actionable data. More specifically, I will discuss what kinds of multimodal metrics we should pay attention to, how we can capture them in ecological settings, and how we can share this data with learners and teachers. I will propose some first steps toward addressing these challenges by describing ongoing projects at my lab (lit.gse.harvard.edu).
Bertrand Schneider is an associate professor at the Harvard Graduate School of Education. His interests include the development of educational interfaces (e.g., augmented reality, tangible interfaces) for collaborative learning in formal and informal learning environments (e.g., maker spaces). Additionally, he researches the use of multimodal data, such as gaze, body movement, speech, and arousal, to capture and visualize students’ learning processes.