Over the past year or so, I’ve been working with data from the websites BoardGameGeek and VideoGameGeek. As both of these sites are crowd-sourced, there’s a huge amount of player-generated data about games, and I think there’s a lot we can learn from it, with implications for educational games.
For example, earlier this year, Liz Owens Boltz, Brian Arnold, and I presented at a conference about our efforts to read through reviews of highly-rated and poorly-rated video games and determine what it was about those games that contributed to (or detracted from) player enjoyment. Enjoyment isn’t necessarily my main focus when researching educational games, but it is a big part of the conversation about games and learning, and the BGG/VGG data lend themselves pretty well to looking for patterns in enjoyment.
Coding these reviews takes a lot of work, and there’s a fair amount of ambiguity in what players are saying, so I’ve been exploring other ways of analyzing game reviews. As luck would have it, my online research methods class this semester is just now getting into text analysis, and my homework for this past weekend was to make a “comparative word cloud” based on this code from Gaston Sanchez. I tweaked the code to look at VideoGameGeek reviews instead of tweets, and fed it 350 reviews about highly-rated games and 350 reviews about poorly-rated games. The image below (click for a larger version) is the result!
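Sanchez’s code is in R, but the idea behind a comparison cloud is easy to sketch in any language: each word is sized by how far its frequency in one corpus deviates from its mean frequency across the corpora, and it lands on the side where it’s over-represented. Here’s a minimal Python sketch of that scoring step, assuming a naive tokenizer and a toy stopword list of my own (the function names are mine, not Sanchez’s):

```python
from collections import Counter
import re

# A toy stopword list for illustration; real analyses use a fuller one.
STOPWORDS = {"the", "a", "and", "is", "it", "to", "of", "this"}

def term_rates(docs):
    """Term frequency in a corpus, as a fraction of all (non-stopword) tokens."""
    counts = Counter()
    for doc in docs:
        counts.update(w for w in re.findall(r"[a-z']+", doc.lower())
                      if w not in STOPWORDS)
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def comparison_scores(good_docs, bad_docs):
    """Score each word by how far its rate in the 'good' corpus deviates
    from its mean rate across both corpora (the sizing rule a comparison
    cloud uses). Positive -> skews toward highly-rated reviews."""
    good, bad = term_rates(good_docs), term_rates(bad_docs)
    scores = {}
    for w in set(good) | set(bad):
        g, b = good.get(w, 0.0), bad.get(w, 0.0)
        scores[w] = g - (g + b) / 2
    return scores

# Tiny demo on made-up reviews:
good = ["great story and memorable characters", "the plot kept me hooked"]
bad = ["not fun at all", "repetitive and not fun"]
scores = comparison_scores(good, bad)
# Words with positive scores would plot on the highly-rated side of the cloud.
```

A plotting library would then draw each word at a size proportional to its absolute score; the scoring is the part that determines the patterns discussed below.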
A few things jump out at me right off the bat:
- The word “fun” appears far more often in reviews of poorly-rated games than in reviews of highly-rated games. This is counterintuitive, to say the least… but I’m guessing it has something to do with a lot of people calling a lot of games “not fun.”
- I owe this one to Kathryn, who was helping me look it over: there are a lot of narrative-related words in the reviews for “good games.” “Story” is way out to the right, and words like “plot” and “characters” are also making their way in that direction. Liz, Brian, and I also found that references to “game fiction” were more frequent in reviews of good games, and that these references were more likely to cast the fiction in a positive light. That is, it’s not necessarily the case that a bad story makes a bad game… but a good story certainly seems to make a good one!
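The “not fun” hunch about that first pattern is checkable without hand-coding every review: since a unigram word cloud drops the “not,” negated praise inflates a word’s count. Here’s a crude sketch that splits mentions of “fun” into negated and plain, assuming a fixed character window and a small negation list of my own (this is a sanity check, not a real sentiment method):

```python
import re

# Negation cues to look for; a rough, hand-picked list for illustration.
NEGATIONS = r"\b(not|no|never|isn't|wasn't|hardly)\b"

def fun_mentions(reviews):
    """Count mentions of 'fun', splitting out those preceded by a
    negation cue within the previous 20 characters."""
    negated = plain = 0
    for text in reviews:
        lower = text.lower()
        for m in re.finditer(r"\bfun\b", lower):
            window = lower[max(0, m.start() - 20):m.start()]
            if re.search(NEGATIONS, window):
                negated += 1
            else:
                plain += 1
    return negated, plain

# Tiny demo on made-up reviews:
counts = fun_mentions([
    "This game is not fun",
    "So much fun!",
    "Never fun, always a chore",
])
```

If negated mentions dominate in the poorly-rated corpus, that would support the explanation above; a fixed window will still miss constructions like “fun it is not,” so this only narrows the hypothesis.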
These are some exciting results, and I’ve been working with my advisor, Matt Koehler, on getting access to a larger (much larger) corpus of these game reviews. I’m sure that I’ll be playing with other automated text analysis techniques throughout the semester, but I’m eager to start by making some more word clouds. Do these patterns change as I pump more reviews into them? How do board game versus video game reviews differ? What else can we learn about what players say?