Using Multi-Modal Network Models to Visualize and Understand How Players Learn a Mechanic in a Problem-Solving Game

Zack Carpenter, Yeyu Wang, David DeLiema, Panayiota Kendeou, David Williamson Shaffer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

This poster extends work on multi-modal learning analytics by exploring how blending think-aloud, eye-gaze, and log data in network models informs how players learn a game mechanic. Preliminary models show differing patterns between players who have learned the mechanic and those who have not yet learned it.
Original language: English (US)
Title of host publication: Society for Learning Analytics Research
Pages: 99-101
Number of pages: 3
State: Published - 2023
Event: The 13th International Learning Analytics and Knowledge Conference - Arlington, Texas
Duration: Mar 13, 2023 - Mar 17, 2023

Conference

Conference: The 13th International Learning Analytics and Knowledge Conference
Period: 3/13/23 - 3/17/23
