New publication: The Role of Thinking in Education: Why Dewey Still Raises the Bar on Educators

I recently received in the mail a copy of John Dewey’s Democracy and Education: A Centennial Handbook, which contains a chapter that MSU professor Dr. Jack Smith was generous enough to let me contribute to. This has been a fun process for a number of reasons. First, while this isn’t my first book chapter, it is the first book I’ve contributed to that I have a physical copy of. As much as I rely on PDFs to do my reading and writing, it’s a lot of fun to own a book that has your name in it. Second, writing about Dewey, and especially about Democracy and Education, has been a fun way to step away from the educational technology writing that I usually do and to think more about education broadly, its civic role, and how it fits into a democratic context.

As an example of what I mean by that, here’s a blurb from the chapter that I’m particularly proud of:

In How We Think (LW 8), Dewey argues that as thinking matures, thinkers shift from a dependence on authority to a willingness to question, reconsider, and ultimately accept or reject the propositions accepted as true in the wider society. In Democracy and Education, Dewey focuses on the importance of autonomy—and its relation to thinking—in civic participation … for Dewey, thinking is not only the central organizing principle of education; it is also the central capacity that students must exercise in asserting their rights and carrying out their responsibilities as full citizens in a democratic society.

I doubt I’ll have many more opportunities to write about the philosophy of education during my career (not least because I’d need/like to be much better read in that area), but I’m very grateful I got the chance to do so here.

New publication: A taxonomy approach to studying how gamers review games

Although most of my research has been focused on Twitter lately, I still have a foot in games and education, and some of my work there with Matt Koehler, Brian Arnold, Liz Owens Boltz, and George P. Burdell has just been published online-first in Simulation & Gaming.

One of the main talking points for using games in education is that they’re—allegedly—more engaging and enjoyable than other ways of presenting information; however, it’s also commonly held that educational games aren’t as good as entertainment games. To help resolve this dilemma, we wanted to examine how players review entertainment games in order to see what features of the games they found most salient when judging their quality. Previous work has developed taxonomies of game features that can be tied to learning outcomes, so we put the theory to the test by seeing if the dimensions of one particular taxonomy were suitable for capturing the issues mentioned in player reviews from the gaming website VideoGameGeek.

We comment on our findings and on the implications for using game reviews as a data source in our article, which can be downloaded here!

New publication: Using TPACK to Analyze Technological Understanding in Teachers’ Digital Teaching Portfolios

Over the past four years, I’ve participated in research projects on a few different topics, but most of them can be grouped into the broad category of “digital educational research.” As I like to put it, this involves exploring how digital technologies afford not only new spaces for teaching and learning but also new ways of researching those spaces.

One of my first forays into this kind of research—during my first year—was exploring what we could learn about teachers’ technology knowledge (as understood through the TPACK framework) through what they include in their digital teaching portfolios. I’ve worked with Matt Koehler, Josh Rosenberg, and Sarah Keenan in the years since to follow through with our original efforts, and our findings were recently published in the January issue of the Journal of Technology and Teacher Education. We talk about what we found in terms of the technology knowledge that we were looking for and also (indirectly) comment on the implications of our digital research approach.

The article can be downloaded here!

Recent appearance on InnovativeEd podcast

Josh Rosenberg and I recently appeared on an episode of the InnovativeEd podcast to discuss some of our recent research on teachers’ use of Twitter. You can find the recording and a transcript at that link; while the headline is slightly misleading (#vted does lead in participants-per-teacher—assuming that our measure is accurate—but not in raw numbers), it is a fun way to get an overview of one of our recent publications.

New publication: Twitter Hashtags as “Just in Time” Teacher Professional Development

Over the past year, I’ve been blogging off and on about #educattentats, a Twitter hashtag created by a French social studies teacher in the wake of the November 2015 Bataclan attacks with the intention of helping teachers identify and share resources for addressing the subject in their classrooms.

I’m pleased to announce that a journal article examining this hashtag in greater detail has just been published in a special issue of TechTrends. My advisor, Matt Koehler, and I examined how this Twitter hashtag served as “just-in-time” teacher professional development, lending further insight into teachers’ use of Twitter as a professional development tool.

A read-only version of the article can be found here, and if you have a subscription to TechTrends, you can download the PDF here.

Mapping Twitter-based social learning spaces

Throughout this semester—and especially over the last few weeks—I’ve been using the digital humanities seminar I’m enrolled in to explore how building a map might help me better represent some of the Twitter-based social learning spaces that I study. In this post, I’d like to weave together some themes that have emerged from this class and my previous work; in doing so, I don’t expect to generate any profound or conclusive answers, but I do hope to lay a foundation for my thinking as I continue to fiddle with this medium and this genre of data.

So, here are the themes that I’m bringing with me into this post:

First, it’s possible to map Twitter data in interesting and compelling ways. This isn’t really new; I’ve been experimenting with mapping data from the #educattentats hashtag for over a year now. So, why am I continuing to come back to it? To be honest, part of this is because I’m learning fancier and flashier methods for mapping these tweets: Of the two maps below, the one on the left is clearly a couple of steps above the one on the right, even if there’s still a lot of work to be done.

The second theme I have in mind is that the digital humanities teach us the importance of aligning methods and theory. I wrote about this at the beginning of the semester, and it’s been a theme throughout the whole seminar. In other words, “fancier and flashier” doesn’t by itself justify any particular set of methods—if I’m going to demonstrate that I really learned something from this seminar, I’m going to need to come up with a compelling theoretical reason for mapping this Twitter data.

This leads nicely into my third theme: It’s easy to fall short of that ideal when mapping Twitter data. Earlier this semester, I suggested that Kenya-Tweet (a DH project I was reviewing) didn’t make a clear theoretical contribution when it mapped tweets, and just a few days ago, I found myself trying to use a map to represent something that would have been better represented in some other form.

In short, I know that as I move forward with projects like these that there is a trap waiting for me… and that I’m prone to fall into it despite my awareness of it. I think the most helpful thing that I could do with the rest of this post is to review the phenomenon I’m studying, summarize why it’s interesting from a theoretical perspective, and then explore how building a map could help me represent that.

The Phenomenon

#educattentats is a hashtag that plays on the French words éducation (education) and attentat (terrorist attack). It was created by the Twitter user @padagogie on the morning of Saturday, November 14, 2015 in response to a series of terrorist attacks that had taken place in Paris the night before. As seen in the tweet below, @padagogie introduced the hashtag with a question to his fellow teachers: How are you going to react in your classes? The goal of the hashtag was for teachers to coordinate their efforts and pool possible resources as they prepared to return to classes (and teach their students) on Monday.

For other background information on #educattentats, I’m embedding the slides that I used to give a presentation on this subject at the 2016 AECT Conference. Of particular note in these slides is a summary of the scope of this hashtag. Over the first 28 days of its existence, about 3,600 unique Twitter accounts used (or interacted with) the hashtag, collectively posting about 1,200 original tweets and about 4,300 retweets. In the grand scheme of viral media on the Web, that may not be a lot, but in my mind, it is something to get excited about.

In fact, to speak informally, I personally find this phenomenon quite moving. In the face of a national tragedy, people throughout France and all over the world (one conclusion we can already draw from maps) took a social networking site frequently (and jokingly) described as a place to post pictures of what you’re eating for breakfast and leveraged it to support and celebrate teachers as they talked to their students about terrorist attacks that had already begun to leave a deep impression on the national consciousness.

The Theory

From an education researcher’s point of view, #educattentats is interesting because it represents a social learning space that we’ve only recently started to conceive of. Twenty years ago, Greeno, Collins, and Resnick (1996) argued that educational research was headed in a direction that was increasingly taking into consideration the social dimensions of learning. Although scholars of learning like John Dewey and Lev Vygotsky had emphasized the role of social interaction in learning in the early 20th century, their work had either (respectively) fallen out of vogue or never come to prominence, and educational psychology was therefore dominated by a more individual view of teaching and learning.

One of the most influential ways of conceiving of social learning has been Lave and Wenger’s (1991) community of practice. According to this perspective, learning happens in groups that share professional (or other) practices. Those on the periphery of these groups learn these practices from more experienced members and, in so doing, integrate themselves more closely into the community. The community of practice perspective makes some key contributions to our understanding of social learning: First, the group plays an essential role in the learning process. Second, learning is no longer restricted to formal settings (such as school); rather, it is something that is constantly happening as part of people’s everyday lives.

Although some scholars (e.g., Gao & Li, 2016) use the community of practice to describe learning that happens on Twitter, others (e.g., Carpenter & Krutka, 2014; 2015) prefer a newer perspective—Gee’s (2004) affinity space. Gee suggested that despite the utility of the community of practice, it had two important shortcomings. First, the idea of a community implies close connections and a sense of belongingness that are not always present in social learning spaces. This is true of physical spaces (for example, different students in the same classroom may not feel connected by the same goals or practices) but especially true of virtual spaces, where people may interact around the same subject despite having very little else in common. Second, Gee argued, few people used the community of practice perspective the way it was intended, which diluted its potency and contributions.

In contrast, Gee argued, it is often more instructive to talk about a space than a community. Talking about a space does not presuppose connections (which may not exist) among the people that collectively occupy that space but still allows us to discuss the learning that happens within. It is important to note that the affinity space is technically a subset of a social semiotic space—while the first term is catchier and has gained more traction, Gee argues that a space only qualifies as an affinity space when it meets several criteria that set it apart from other social semiotic spaces. It’s not without irony that researchers (myself included) have used Gee’s term without the proper nuance, given that that was part of the problem Gee was trying to solve.

The Method

To summarize up to this point, my intention is to use a map (the method) to display data related to the #educattentats hashtag (the phenomenon) in such a way that it emphasizes its nature as a social learning space that corresponds with Gee’s (2004) conception of the social semiotic space. In my mind, the most distinctive thing about Gee’s social semiotic space is the way that it breaks with earlier conceptions of learning. It breaks with a traditional classroom-centered, individual view by describing teaching and learning as activities that can occur in a wide range of physical and virtual spaces and through social interaction. It also breaks with the community of practice view in suggesting that people need not have much in common in order to learn from and teach each other within these spaces. This latter part is especially interesting in that it suggests that participants in an affinity space may be wildly different from each other in all ways except for their having engaged with the space. This allows for high levels of diversity and variety within the space, and that’s one thing I think a carefully-constructed map could get across.

So, what forms of diversity and variety could I display on a map? Broadly speaking, I think there are two kinds of picture that I’d like to paint here: a picture about the different participants that have engaged with this learning space, and a picture about the different ways that they have participated in the space.

Diversity of Participants

For all of the diversity that might exist within a classroom or a community of practice, we can still make basic assumptions about what all of those learners have in common. With a few exceptions, we can probably expect everyone in an upper-level teacher education class to be a teacher-in-training and everyone in a teacher professional development workshop to be an in-service teacher.

We may be tempted to apply this same logic to the #educattentats affinity space; that is, we might feel comfortable assuming that everyone participating in the space is a teacher working in France—after all, that seems to be the intended audience of @padagogie’s original tweet. However, even a quick survey of #educattentats participants demonstrates that this is far from the case. Even as we identify exceptions to our assumptions about classrooms and communities of practice, I would argue that the scale of those exceptions would be dwarfed by what we see here.

Geographic Diversity

Let us first consider the question of geography, since this kind of data is the best suited to being displayed on a map. Geographic data for the #educattentats space isn’t as readily available as I would like, but it’s possible to make some estimates and then plot them on a map. The results are striking, especially when you consider the assumptions that one might bring into an analysis of the #educattentats space. While activity is most heavily concentrated in France, there are participants from all over the world. This raises all sorts of questions around one main theme: Why are people outside of France interested in this? Obviously the Bataclan attacks got worldwide attention, but attention to this particular hashtag seems more surprising to me, and I think the answer to this question would be very interesting.

As a side note, mapping user profiles in this way has actually yielded methodological insights, not just the theoretically relevant ones I was hoping for. I’ve always known that my methods for estimating user locations were flawed, but this foray into Leaflet mapping has helped me associate points on the map with specific users in a way that I couldn’t with my previous maps. I’ve thus been doing a fair amount of spot checking to see whether points in particularly interesting areas actually belong there… and I’ve been a little disappointed with the results. I need to do some more work to figure out just how (in)accurate my estimation methods are; knowing this will help me decide whether I need to improve/replace my current methods.

I’ve also noticed that there are a few places where dots are superimposed. This usually isn’t a problem, but it does pose the danger of underrepresenting the participants in a very specific area or not finding the participant one expected when exploring a particular location.

Linguistic Diversity

Above, I asked why people outside of France would be interested in the #educattentats hashtag, given that the content of associated tweets is (mostly) directed towards teachers in France. If the content of the tweets is generally directed to teachers in France, we would also expect it to be mostly in French. We might then ask ourselves how many of the participants in this affinity space are native French speakers. Although Twitter metadata doesn’t tell us the birth language of its users, it does record what language its users have Twitter set to, and that metadata is included in the data that I’ve collected.

In the current version of my Leaflet map (which can be viewed above), I’ve represented this data by assigning each user language a different color and plotting the dots representing those users in their assigned colors. I think the results are instructive: The dominance of French in France, Spanish in Spain, Italian in Italy, and English in the UK is to be expected, but those clusters also contrast with the blue (i.e., Francophone) dots scattered throughout the world. This contrast shows us that both Francophones outside France (and often outside Francophone countries) and non-Francophones throughout the world are participating in #educattentats, and I think the presence of each of these populations is worth knowing about. There are also fun little glimpses into the diversity of the European linguistic landscape, like the cluster of Catalan speakers in, well, Catalonia, the man whose Twitter profile is written in English but whose interface is set to German, and the Spaniard living in Germany (who has taken an interest in a French hashtag).
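The color-assignment logic itself is simple enough to sketch in a few lines. This is an illustrative Python sketch, not the R/Leaflet code behind the actual map, and the field names and palette are placeholders:

```python
# Illustrative sketch: one color per distinct Twitter interface language.
# The palette and the "lang" field name are assumptions, not the real dataset.
PALETTE = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b"]

def color_by_language(users):
    """Assign each distinct language code a color, cycling the palette,
    so that all users sharing a language share a color on the map."""
    languages = sorted({u["lang"] for u in users})
    colors = {lang: PALETTE[i % len(PALETTE)] for i, lang in enumerate(languages)}
    return [dict(u, color=colors[u["lang"]]) for u in users]
```

The key property is that two users with the same interface language always get the same color, which is what lets clusters (Catalan in Catalonia, French dots worldwide) pop out at a glance.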

One thing I’d like to mention here before moving on is how effective using different colors is for portraying this kind of diversity: It’s very easy to get a feel for this just by scanning the map. There’s no inherent connection between colors and language, though, so as I discuss the remaining forms of diversity that I’d like to be able to represent, one thing at the back of my mind will be whether something else deserves to be represented in this way instead.

Identity Diversity

While the last two forms of diversity implicitly address the assumption that only people in or from France would be interested in this affinity space, this one addresses the assumption that only educators would find value in participating in it. In the published versions of this research, I’ve simplified this diversity by analyzing a sample of Twitter profiles and summarizing the different identities represented in them with a concise coding scheme. That approach doesn’t translate as well to an interactive map as it does to a paper (or presentation). It wouldn’t do to have this kind of data present for only a small sample of the points on the map, and I’d rather not code 1,600 profiles by hand. The compromise I’ve found for this version of the map is to make the Twitter profile for each participant accessible by clicking on the corresponding dot. On one hand, this makes things hard to assess at a glance; on the other, it allows the person exploring the map to investigate the participants and develop their own conclusions.

I really enjoy this addition to the map, since it makes the participants seem a little more real. However, directly quoting participants’ Twitter profiles takes us into the tricky world of Internet research ethics. In most cases, the profiles are anonymized, and all of the data is public, but traditional paradigms of research ethics don’t hold up very well in the realm of research on the Web (Markham & Buchanan, 2012). While I feel comfortable doing this for the time being, it’s worth further consideration as I move forward.

There are also several participants whose profiles have some encoding errors in them. I did some very basic data cleaning when setting up my map, but I wasn’t as thorough as I needed to be.

Diversity of Participation

The participants in the #educattentats learning space are diverse not only in terms of who they are, where they’re from, and what they speak but also with regard to how they participated in this space. I would also argue that, like the diversity of participants, the diversity of participation in an affinity space such as #educattentats is generally greater than the diversity we might see in a classroom or a community of practice.

Scale of Participation

Not all of the participants in the #educattentats learning space participated on the same scale. One Paris-based education researcher contributed more than 70 tweets and retweets to the space during the 28 days that I’m concerned with, but many of the participants (maybe even most—I’d have to recrunch the numbers) only posted (or reposted) a single tweet that contained the hashtag. I think it’s important to represent this kind of diversity, and the size of individual dots seems like the most intuitive way to show it.

However, making it a strict 1:1 relationship (i.e., increasing the radius of each dot by 1 for each additional tweet or retweet) creates some visibility problems: Our education researcher friend and similarly enthusiastic participants wind up with dots so large that they blot out surrounding dots, especially when the map is set at larger scales. For the time being, I’ve fixed the sizes so that they only range between 1 and 10, and that seems to be doing the trick, though it’s hard to make nuanced distinctions about the scale of participation. That is, it’s easy to tell those who participated a lot from those who only participated a little, but it’s hard to tell that Participant A only sent one or two more tweets than Participant B. Before I managed to fix the size to a certain range, I first explored the possibility of using color to represent the scale of participation, but the problem was even more pronounced there, so I’m happier with it the way it currently is.
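That bounded scaling can be sketched in a few lines. This is a Python illustration rather than the R behind the actual map, and both the linear interpolation and the 70-tweet ceiling are my assumptions here, not necessarily what the map uses:

```python
def dot_radius(count, min_r=1, max_r=10, max_count=70):
    """Clamp a participant's tweet+retweet count into a bounded dot radius,
    so heavy participants can't blot out their neighbors on the map.
    Linear interpolation and the max_count ceiling are illustrative choices."""
    clamped = min(max(count, 1), max_count)
    return min_r + (max_r - min_r) * (clamped - 1) / (max_count - 1)
```

This keeps every dot between radius 1 and 10, which also illustrates the trade-off described above: two participants near the ceiling become visually indistinguishable even if their counts differ slightly.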

The elephant in this particular room is trying to define what counts as participation for these purposes. As mentioned above, it’s currently defined only as the number of tweets and retweets someone has sent out that use the #educattentats hashtag. I also have some data available for how often people “liked” tweets with the #educattentats hashtag, but it doesn’t lend itself quite as well to mapping, in part because I can only (currently) map dots for people who have tweeted or retweeted using the hashtag. In other words, I could use the “likes” data to boost the participation figures for people already on the map, but I’m not in a good position to add people to the map who only liked tweets without composing or reposting any. That feels incomplete, so I’m leaving it be for now.

On the other hand, though, liking-without-tweeting is an important part of this diverse range of participation, so it needs to get in there eventually. Are there other forms of participation, too? What about “replies” to #educattentats tweets that don’t themselves include the hashtag? What about “quote-retweeting” #educattentats tweets? It’s hard not to see these as a way of engaging with the learning space, though I’m not 100% confident I can capture the first and very doubtful that I can capture the second. Finally, Carpenter (2015) has explained that there are at least some teachers who learn from these kinds of spaces without ever leaving any traces of their engagement. It’s downright impossible to capture data on them (at least comprehensive data using the collection methods that I prefer), so it may be that I can only represent part of this range of participation.

Forms of Participation

By asking what counts as participation for the purposes of measuring the scale, I’ve already summarized some of the different ways that people might engage with this learning space. These forms range from the highly-active (composing an original tweet including the hashtag) to the highly-passive (reading #educattentats tweets without “liking” or retweeting them), though for practical purposes, I can concentrate on three different forms: tweeting, retweeting, and liking. I know from my previous work on this dataset that there are some participants who engage in all three of these forms of participation and some who only engage in one. None of that is currently represented on the map, and I wonder if it should be. On one hand, it would certainly be handy to be able to distinguish those who only “liked” tweets from those who actively composed their own tweets, but I’m not sure what the best way to portray that is. Different shapes might do the trick, but if my math is right, I’d need seven different shapes if I wanted to distinguish between all different combinations of tweeting, retweeting, and liking. In the end, it may not be worth it.
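The arithmetic behind that “seven shapes” figure: three binary forms of participation give 2³ = 8 combinations, minus the empty one (someone who did none of the three wouldn’t be on the map at all). A quick sanity check:

```python
from itertools import combinations

forms = ["tweeted", "retweeted", "liked"]

# Every non-empty subset of the three forms of participation: (2**3) - 1 = 7
combos = [c for r in range(1, len(forms) + 1) for c in combinations(forms, r)]
```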

Another issue related to forms of participation that I haven’t yet represented is connections between different dots on the map. This is the sort of thing that would be better represented on a sociogram than on a map, so it could be set aside as a non-issue, but I feel like depicting network connections on this map would reinforce some of the attributes of Gee-style social learning spaces that I’d like to get across, namely the fact that a shared virtual space diminishes the importance of geographic distance between participants. However, my efforts to depict this have hit a couple of snags. It takes a while to calculate all of these connections, and trying to display them on an interactive map seems to slow the whole process down… not to mention serve as a visual distraction from the other features of the map. It’s worth exploring this further, but I feel like I’m at somewhat of a dead-end right now.

Other Considerations and Conclusions

There’s one more big issue that needs to be brought up when considering whether a map is the right way to go with these data, and that’s all of the users for whom I do not have geographic data. Does the payoff from seeing the geographic diversity of this learning space (since that’s the only thing that couldn’t be depicted with another visual representation) outweigh the costs of all of the participants that I can’t display?

There are several other questions worth adding to this one: Is a map really the most compelling way to depict the forms of diversity I’ve listed here? Does that answer change when I take into account the flaws I’ve seen in my methods for estimating geographic locations? Is insisting on a geographic element constraining my ability to represent other forms of diversity (e.g., social connections, scale of participation)?

These are tough questions, and I actually find myself rethinking a map more than ever before. Abandoning the map for some other form of representation (maybe a souped-up sociogram?) would probably let me more accurately and more fully depict some of the other forms of diversity that I value in that space. Who knows, I might even be able to find a way to include a geographic element—maybe something along the lines of different colors for different continents.

Despite all of this, I’d like to continue working with a map to see if I can’t overcome the challenges that I’ve identified for myself in this post. There’s something intuitive about a map that I think would help people get a sense for this space without needing too much explanation. Plus, for all our experience with the Internet, there remains a certain power in reminding people that they can use Twitter to learn from and communicate with people from entirely different continents. Finally, there’s something humanizing and personal about being able to see where these participants are from; to re-use a phrase from earlier, knowing where people are participating from makes it seem much more real.

So, onwards and upwards! Plenty of work to be done, plenty of questions to answer, and probably plenty more questions yet to be asked, but this feels like a step in the right direction.


Carpenter, J. (2015). Preservice teachers’ microblogging: Professional development via Twitter. Contemporary Issues in Technology and Teacher Education, 15, 209-234.

Carpenter, J. P., & Krutka, D. G. (2014). How and why educators use Twitter: A survey of the field. Journal of Research on Technology in Education, 46, 414-434. doi:10.1080/15391523.2014.925701

Carpenter, J. P., & Krutka, D. G. (2015). Engagement through microblogging: Educator professional development via Twitter. Professional Development in Education, 41, 707-728. doi:10.1080/19415257.2014.939294

Gao, F., & Li, L. (2016). Examining a one-hour synchronous chat in a microblogging-based professional development community. British Journal of Educational Technology. doi:10.1111/bjet.12384

Gee, J. P. (2004). Situated language and learning: A critique of traditional schooling. New York: Routledge.

Greeno, J., Collins, A., & Resnick, L. (1996). Cognition and learning. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 15-46). New York, NY: Macmillan.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press.

Markham, A., & Buchanan, E. (2012). Ethical decision-making and Internet research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). Chicago: Association of Internet Researchers.

Scattered thoughts on operationalization in digital research (and more fiddling with #ldsconf tweets).

A few days ago, I presented my final project for the digital humanities seminar I’ve been blogging about all semester. The project was built around a Leaflet map that I built in R (thanks to a fantastic package designed for that purpose) to display one of the Twitter datasets that I’ve been working with recently. I’d like to post the map once I’ve had a chance to improve it some, but what I’m writing about tonight is actually an experience I’ve had adapting that map for a different dataset. One of the happy difficulties I’ve had in this seminar is not knowing what to concentrate on for my final project, since I have access to a few different projects and corresponding datasets, so I thought it might be nice to write up a post presenting a similar map for my #ldsconf dataset (especially since I’ve just been revisiting it).

Well, I did come up with the map I was aiming for, but in doing so, I learned something about how important operationalization (i.e., “defining the measurement of a phenomenon”) is when one is doing digital research. Ever since I learned the term in one of my first-year classes, I’ve been fascinated with the idea of operationalization. Researchers train themselves to explain the work that they do in a concise and engaging way; for example, I might explain that I study teachers’ participation in professional development spaces on Twitter. However, translating that short statement into actual variables takes some thinking, defining, and assuming (in short, operationalizing). In fact, my research colleagues and I have spent quite a bit of time discussing how to operationalize “participation.” We can measure tweets and retweets, and we’re even making some progress with measuring “likes.” Do those all count as participation? Are we leaving out any kinds of participation (e.g., more passive reading) because of the methods we’ve picked? How we operationalize something can have a pretty big impact on the results we get and the claims we can make.
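To make that concrete, here’s a sketch of one such operationalization in Python, with made-up field names (my actual analysis scripts live elsewhere, in R):

```python
def participation_counts(events):
    """Tally participation per user under one particular operationalization:
    tweets and retweets count toward the headline figure, while likes are
    tracked separately (like-only users can't yet be placed on a map).
    The event structure here is an illustrative assumption."""
    counts = {}
    for event in events:
        user = counts.setdefault(event["user"], {"tweets": 0, "likes": 0})
        if event["kind"] in ("tweet", "retweet"):
            user["tweets"] += 1
        elif event["kind"] == "like":
            user["likes"] += 1
    return counts
```

Even this toy version bakes in decisions: grouping tweets and retweets together, keeping likes apart, and silently ignoring anything else (replies, quote-retweets, lurking). Change any of those choices and the “results” change with them.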

Let me take a step back and explain how this all came about tonight. The foundational idea supporting my recent work on Twitter is that Twitter hashtags can serve as social spaces—or affinity spaces—on the Web. In this sense, #ldsconf is no different than the educational hashtags that I’ve been examining; however, what sets #ldsconf apart for me isn’t so much the different content area (i.e., religion rather than teacher professional development) as it is the contested nature of that social space. Although the hashtag was presumably created for people to discuss the LDS Church in a positive light and in orthodox terms, a freely-accessible social space like a hashtag doesn’t filter for that, so there are many people who use the hashtag to criticize the Church, express their frustrations with Mormonism, or address contemporary or historical controversies. (As a side note, #ldsconf and its cousin #twitterstake were fascinating to watch in the week leading up to the November 2016 US Elections—many tweets during that time didn’t even address Mormonism so much as Mormons themselves, trying to convince them to vote for Hillary Clinton, Evan McMullin, or Donald Trump, depending on the account in question).

So, I thought the most interesting thing to do for this map might be to try to differentiate “pro-LDS Church” Twitter accounts from “anti-LDS Church” ones (for lack of better terms). In hindsight, a map wasn’t really the best way to represent this, but it did get me thinking about this idea of operationalization. How could I clearly and reliably identify these different kinds of accounts? More importantly for this “proof of concept” phase, how could I delegate this task to a few lines of computer code so that I didn’t need to read thousands of Twitter profiles on my own? In some cases, this was pretty easy, but in other cases, it was more difficult. Was having retweeted something from an explicitly “anti-LDS Church” account enough to qualify another account as “anti-LDS Church”? Is it safe to assume that attending Brigham Young University makes one “pro-LDS Church”? Suffice it to say that I did not make much progress on these questions tonight!
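As a proof of concept, a first pass at delegating that judgment to a few lines of code might look like the crude keyword heuristic below, applied to profile descriptions. (Again, a Python sketch rather than my actual R code, and the term lists are hypothetical and deliberately far too simple.)

```python
# Hypothetical, deliberately naive term lists -- a real operationalization
# would need far more care than this.
PRO_TERMS = {"byu", "missionary", "ministering"}
ANTI_TERMS = {"exmormon", "postmormon", "resigned"}

def classify_profile(description):
    """Label an account 'pro', 'anti', or 'unknown' from its profile text."""
    words = set(description.lower().split())
    if words & ANTI_TERMS:
        return "anti"
    if words & PRO_TERMS:
        return "pro"
    return "unknown"
```

The edge cases above (retweeting an “anti-LDS Church” account, attending Brigham Young University) show exactly where a heuristic like this breaks down, which is precisely the operationalization problem.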

However, thinking about this issue reminded me that I’d actually been meaning to write about operationalization earlier in the semester. Several weeks ago, we visited the Digital Humanities and Literary Cognition lab on campus and learned about some of their efforts to integrate neuroscientific research with questions germane to the humanities (see Phillips and Rachman, 2015). My notes from this class session are a flurry of short questions, most of which revolve around this question of operationalization. How do you interpret fMRI scans in the context of reading literature? I was (and still am) fascinated with this question, despite the fact that—being neither a neuroscientist nor a scholar of literature—I have no idea how to answer it. Another snippet from my notes: How do humanists operationalize irony? This one intrigued me even more, because I wasn’t even sure that scholars in the humanities use the term “operationalization” in their work (especially outside the realm of the computational and digital). And yet, isn’t all scholarship, regardless of discipline, focused on making compelling arguments based on compelling evidence? Isn’t all human knowledge dependent on some kind of operationalization?

One of my favorite feelings to experience is a sense of wonder at something I had previously taken for granted, so while I wish I had somewhere deeper or more specific to go with these thoughts, I’d like to close this post with this feeling: this sort of awe I’m experiencing at the task that is before us as researchers, whatever our arguments, methods, or disciplines. It’s easy for me to make casual claims about #ldsconf or any of the other Twitter spaces I study—I’ve done so several times in this blog post alone. For those claims to mean anything, though, is going to take a lot of work and a lot of thinking. This particular project is no great contribution to the sum total of human knowledge, but if I do it right, I’ll be participating in a long and important tradition of humanity’s discovery of the world around us, and that’s an important thing to remember… especially during a stressful end of semester!


Phillips, N., & Rachman, S. (2015). Literature, neuroscience, and digital humanities. In P. Svensson & D. T. Goldberg (Eds.), Between humanities and the digital (pp. 311-328). Cambridge, MA: MIT Press.

Examining Mormon #ldsconf tweets through topic modeling

I’ve written previously about my exploration of a dataset of tweets I collected during the April 2016 General Conference of the LDS (Mormon) Church. Over the past couple of months, I’ve been trying to see whether topic modeling—a form of automated text analysis that identifies topics (or themes) in a corpus of texts—could be a useful technique for discovering themes present in these tweets.

I’m tempted to compare my personal debate between automated analysis and hand coding to the digital humanities distinction between close reading and distant reading (see, for example, Ted Underwood’s comments here), but the more I think about that comparison, the less it holds up. Underwood (in the interview I just linked to) describes distant reading as “the new perspective literary historians get by considering thousands of volumes at a time,” and that’s arguably the perspective that researchers in my field gain through both hand coding and automated analysis. That is, the research I do—whether I use automated methods or not—is almost always dedicated to finding general trends across large numbers of data points rather than the more traditional humanist practice (if I understand it correctly) of deep dives into a single work or text.

That said, there is another of Underwood’s comments that did resonate with my own consideration of which methods to use to examine tweets. When asked to comment on the relationship between close and distant reading, Underwood explains that they supplement each other nicely—that close readings “help readers understand a long trend.” This seemed to echo my own experience with developing a topic model for #ldsconf tweets: Interpreting individual topics nearly always required that I take a look at individual tweets and often also required that I read over some of the talks being referenced in these tweets. It’s not a perfect comparison, but it’s an important reminder that human understanding is an important guide for computational analysis of data.


So, before getting to the fun stuff, it’s important that I acknowledge how I did things. I based most of my code on Jockers’s Text Analysis with R for Students of Literature, including using the mallet package for topic modeling. I used all of the #ldsconf tweets that I collected last time, though after eliminating retweets, I seem to have gotten 24,329 original tweets (as compared to the 24,958 I reported last time). That’s something I’ll have to figure out before going further with this. I also haven’t taken the time to figure out what to do about another problem I mentioned in my last post—in short, my Twitter tracker broke down during the General Women’s Session of the Conference, so that session is underrepresented in the current analysis (as are, consequently, female speakers and participants).
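One plausible source of a discrepancy like that: there is more than one way to operationalize “retweet,” and different definitions yield different counts. A sketch of two such definitions (Python rather than my actual R, with a hypothetical “is_retweet” field):

```python
def drop_retweets_by_flag(tweets):
    """Keep tweets the data source did not flag as retweets."""
    return [t for t in tweets if not t.get("is_retweet")]

def drop_retweets_by_prefix(tweets):
    """Keep tweets whose text doesn't follow the manual 'RT @' convention."""
    return [t for t in tweets if not t["text"].startswith("RT @")]
```

A manually typed “RT @user …” tweet is caught by the second filter but not the first, so two passes over the same data can plausibly produce two different totals.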

I’ve also done some basic data cleaning, removing all instances of #ldsconf, all links, and all punctuation other than apostrophes, hash signs, and at-signs from the tweets. This is another area where I need to up my game—for example, despite a couple of different tries, I haven’t successfully managed to remove #ldsconf from a tweet while keeping #ldsconference in its entirety. As a result, there are a few orphaned erence’s floating around in the text, and that’s a pain.
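For what it’s worth, one way to solve that particular problem (sketched here in Python rather than the R I’m actually working in) is a regular expression with a negative lookahead, which removes #ldsconf only when it isn’t the prefix of a longer hashtag:

```python
import re

def strip_hashtag(text, tag="#ldsconf"):
    """Remove `tag` only where it stands as a complete hashtag, leaving
    longer hashtags like #ldsconference untouched."""
    # (?!\w) means: don't match if the tag is followed by another word character.
    return re.sub(re.escape(tag) + r"(?!\w)", "", text)
```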

So, in short, I’ve gotten it working well enough to try out, but there’s a lot of work to do before this is really respectable. (I’m also leaving out a number of methodological considerations for brevity’s sake…)


For the time being, I’ve asked the code to come up with 50 topics. That might be overdoing it, but previous fiddling has shown that 10-15 (the original range I was anticipating) was definitely not enough. I’d like to highlight about 5 of these topics (again, limiting myself for brevity’s sake), suggest some interpretation, and comment on implications for this as a research method for my future Twitter work. I’ll introduce each topic with a wordcloud that I used to help with interpretation (Jockers’s idea, not mine).

Topic 1: Spanish


This topic largely consists of tweets in Spanish, and my knowledge of Spanish isn’t enough to know whether this could be several Spanish-language topics lumped together (i.e., that the code simply dumps all Spanish tweets in the same bucket rather than make distinctions between them).

Topic 2: ???


This is actually a tougher one to interpret. I’m pretty sure that “amp” refers to ampersands that are being represented as HTML code and are thus not being properly removed. This is another thing that I’ve tried—but so far failed—to fix. The rest of the keywords seem vaguely similar, but if I look at specific tweets matching this topic, it’s a little hard to fit them together. I’ll pass on this one for now.
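The fix here is probably just to decode HTML entities before tokenizing. My actual pipeline is in R, but as a sketch of the idea, Python’s standard library handles this in a single call:

```python
import html

def decode_entities(text):
    """Turn HTML entities like &amp; back into literal characters so that
    'amp' doesn't surface as a spurious token in the topic model."""
    return html.unescape(text)
```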

Topic 3: Womanhood


This topic appears to focus primarily on a talk on womanhood by Neill F. Marriott, with other parts of the General Women’s Session (where Marriott spoke) bleeding in as well.

Topic 4: Music


This topic is clearly related to music, including mentions of the Mormon Tabernacle Choir, other choirs that sang during the conference, and even specific songs.

Topic 5: Hashtag Soup


This is a really interesting topic in that it doesn’t pick up on a common subject so much as a common practice. I call this hashtag soup—the practice of appending as many hashtags as possible to a tweet in the hopes of expanding one’s audience. I’m guessing that a lot of these tweets broadcast their message via an attached picture and use the text space for the “soup.”
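A practice like this is easy enough to detect automatically. A quick sketch (in Python, with a cutoff I’ve made up purely for illustration):

```python
import re

def hashtag_count(text):
    """Count the hashtags in a tweet's text."""
    return len(re.findall(r"#\w+", text))

def is_hashtag_soup(text, threshold=5):
    # The threshold of five hashtags is an arbitrary, hypothetical cutoff.
    return hashtag_count(text) >= threshold
```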


These five topics represent pretty well how I feel at this point about topic modeling for tweets. On one hand, it’s identifying some really interesting stuff, including topics based on single talks (i.e., Marriott’s on womanhood), recurring themes (i.e., comments on the choir throughout the conference), and even distinct practices (i.e., using “hashtag soup” to amplify a message). These are compelling enough to me that I’m optimistic about the possibility of regularly using these methods for Twitter research.

That said, there is clearly a lot of work to be done. The second topic seems to be suffering from a lack of proper data cleaning (oops), and I’m sure there are other topics that won’t have a clear interpretation. So, I’m optimistic, but I also know there are questions to answer and problems to tackle before the real success stories come out.

The provocative role of the digital (in education and in the humanities)

This post is inspired by two other posts. The first is one written by my friend and colleague Leigh Graves Wolf, who wrote about a Twitter discussion we were both involved in regarding whether educational technology was (already) its own field. The second is one that I recently wrote for the digital humanities seminar that I’m currently enrolled in and in which I suggest that a particular DH project might be too much digital and not enough humanities.

Based on my limited experience, this appears to be a common refrain in internal discussions about the nature, direction, and destiny of the digital humanities. Forte (2015) suggests that virtual archaeology “was born without an adequate theoretical background” (p. 296), Phillips and Rachman (2015) assert the importance of closely “integrat[ing] humanist questions” (p. 312) with digital methods, and Risam (2015) writes that the “relationship between theory and praxis is integral to the digital humanities” (para. 4).

I find this concern compelling, not least because it echoes concerns I (and others) have about my own field. As in the humanities, research and practice in the field of education run a constant risk of falling prey to buzzwords and unfounded assumptions—in short, there’s a very real danger that an educator exploring new technologies will become seduced by the shiny rather than guided by sound pedagogy.

Despite these concerns, I also want to make sure that our fear of “getting lost in the woods” doesn’t keep us from exploring where there is not yet a path. For example, even if—in my previous post—I criticized the Kenya-Tweet project for not having a clear humanistic focus, I also described it as compelling enough to serve as an invitation to DH researchers to find a humanistic focus that those methods would serve well.

In this post, I’d like to flesh out that idea a little more and, in so doing, suggest that there is a provocative role to be played by those of us who incorporate technology into education, the humanities, or any other field. In so doing, I’ll draw from both the educational technology texts that have guided my graduate education thus far and the digital humanities texts that I’m just starting to read.

A rapidly changing world

The first point I’d like to make requires a visit back to Leigh’s post about whether educational technology is its own field. The discussion in that post more accurately revolves around whether ed tech has (or has not) already achieved this status, but I’d like to reframe the question a little bit differently: Should educational technology be its own field? I think that the answer is yes, and the best argument I’ve found for this is from Mishra and Koehler (2006), who suggest that teachers can no longer expect to use the same educational technologies at the end of their career that they learned to use at the beginning. The rapid “turnover” of today’s technologies requires that teachers be more conscious of their technology knowledge and, I argue, that researchers be dedicated to keeping up with this changing landscape.

Can the same logic be used to argue for digital humanities as a distinct field? Svensson and Goldberg (2015) describe DH as a “quickly evolving, contested and exciting field” (p. 1); while they acknowledge its roots as far back as the 1940s, surely it is the contemporary “quickly evolving” element of DH that (at least in part) makes it “contested and exciting.”

In short, things are changing so quickly in both education and the humanities that it’s worth having a subset of these disciplines that concentrates on the changes themselves.

A reciprocal relationship

So, perhaps educational technology and digital humanities are fields that are informed by their parent disciplines but distinguish themselves by their dedicated attention to how these parent disciplines are being changed by the contemporary technological landscape. Describing the relationship between these fields and their parent disciplines in this way reinforces the idea (described above) that educational technology should not stray from the bounds of education, just as digital humanities should not stray from the bounds of humanities.

That relationship isn’t just one way, though. Salomon and Almog (1998) argue that just as educational psychology ought to inform effective applications of educational technology, educational technology is actively challenging the field of educational psychology to come up with theoretical explanations that simply did not exist before. Are the learning theories of the 20th century capable of explaining and predicting learning that happens on smartphones and tablets? There’s no way to know until we start learning with smartphones and tablets and start studying what that looks like… and if the educational technologist finds information that strays from the marked path, it may serve to invite the educational psychologist to consider whether the path ought to be enlarged, extended, or even replaced.

I would guess that a similarly provocative approach would be of value in the digital humanities. Kenya-Tweet (or “virtual archaeology,” or any other DH project or trend) might very well stray from the theoretical-methodological balance that is currently valued in the humanities. Yet, in using methods that do not fit perfectly with a humanist question of interest, are DH researchers inviting their colleagues to come up with questions that do fit those same methods?

Of course, this doesn’t mean throwing caution to the wind. Those of us who accept this provocative role should do so in a way that’s intentional, reflective, based on principle, and respectful of the established theory and conventions that already exist… but if our goal is to help the broader field stretch itself, remold itself, and improve itself, then we may do it a disservice by never going beyond what it is currently capable of explaining and understanding.


Forte, M. (2015). Cyber archaeology: A post-virtual perspective. In P. Svensson & D. T. Goldberg (Eds.), Between humanities and the digital (pp. 295-309). Cambridge, MA: MIT Press.

Mishra, P., & Koehler, M. J. (2006). Technological Pedagogical Content Knowledge: A framework for teacher knowledge. Teachers College Record, 108, 1017-1054.

Phillips, N., & Rachman, S. (2015). Literature, neuroscience, and digital humanities. In P. Svensson & D. T. Goldberg (Eds.), Between humanities and the digital (pp. 311-328). Cambridge, MA: MIT Press.

Risam, R. (2015). Beyond the margins: Intersectionality and the digital humanities. Digital Humanities Quarterly, 9(2).

Salomon, G., & Almog, T. (1998). Educational psychology and technology: A matter of reciprocal relations. Teachers College Record, 100, 222-241.

Svensson, P., & Goldberg, D. T. (Eds.) (2015). Between humanities and the digital. Cambridge, MA: MIT Press.

Exploring Digital Humanities Projects: Kenya-Tweet

One of my latest adventures in the Digital Humanities seminar I’m taking this semester is to look at and reflect on a digital humanities project that is germane to my own work. Since my current work focuses mainly on Twitter, I was impressed by Kenya-Tweet, a project by graduate student Brian Geyer that came out of MSU’s Matrix lab and Cultural Heritage Informatics graduate fellowship.

Because Kenya-Tweet is not currently working, I’m drawing my knowledge of the project not just from the project’s official website (which has just enough documentation to be helpful) but also from an email exchange with Brian and from the YouTube video I’ve posted below, which is a presentation he made about Kenya-Tweet. I’ll let the (short) video introduce the project:

With that background in mind, it’s time to ask some questions about the project:

What are Kenya-Tweet’s strengths and weaknesses?

I think that the strengths and weaknesses of this project are not only very complementary to each other but also representative of one of the big struggles in DH (at least, as I understand it). On one hand, Kenya-Tweet has a really high “cool factor.” It’s an amazing combination of technologies that represents data in a really compelling way, and I think that the methods used to create Kenya-Tweet could easily be applied to many different sets of Twitter data of interest for DH researchers. However, this breadth is also one of the weaknesses of the project. If there’s one thing I’ve learned about the digital humanities, it’s that both of those words are important. While there’s a clear digital cool factor here, I’m not sure I see a compelling humanist question at its core. The ease with which this could be applied to another set of Twitter data without losing much of its impact seems to be an important weakness.

What assumptions does the Kenya-Tweet project make?

In my mind, the first assumption made by Kenya-Tweet is that displaying tweets geotagged as being in Kenya is an effective way of representing Twitter activity in Kenya. My experience with geotagged tweets suggests that this may be wildly underrepresenting Twitter activity in Kenya, and I think that’s important to communicate to people viewing the project.
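One quick way to quantify that underrepresentation would be to compute what share of a collected dataset carries coordinates at all. A sketch (Python, with a hypothetical “coordinates” field standing in for whatever the data actually provides):

```python
def geotag_rate(tweets):
    """Fraction of tweets carrying coordinates -- a rough gauge of how much
    a geotag-only display undercounts overall activity."""
    if not tweets:
        return 0.0
    return sum(1 for t in tweets if t.get("coordinates")) / len(tweets)
```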

There’s another big assumption in here, and that’s that public tweets are truly public (i.e., that those composing these tweets would have no objection to having them publicly displayed in a research project like this). As the Association of Internet Researchers (among others) has pointed out, the notions of public and private get mighty tricky once the Internet gets involved. Kenya-Tweet isn’t necessarily showing unethical behavior, but there are some assumptions at play here that other researchers might challenge.

What is Kenya-Tweet’s primary audience?

The primary audience for this project is surprisingly multi-layered. On one hand, it could be considered to be an audience of one, since Brian suggests that he’s doing this project in part for himself as a proof of concept. However, he also suggests that his original conception of Kenya-Tweet was as an “applied anthropology” project meant to help safari drivers in Maasai Mara National Reserve in Kenya. That original conception didn’t work out, though, and Brian suggests in his presentation that he was hoping to find a way to make this applicable for researchers (even if he didn’t know exactly how he was going to do so).

How easy is it to use Kenya-Tweet?

Very. Granted, this is partly based on guesswork, since the project isn’t currently functioning and I’m making some assumptions based on the video, but I still feel pretty confident about this. The website is bare-bones and minimalist, but it’s meant to be that way—it even looks like it employs responsive design so that it can be easily used from a phone or tablet without too many problems. However, Kenya-Tweet’s ease of use feels related to its unclear humanist focus. It’s easy to look at “live tweeting” from Kenya, and even easy to be impressed by it, but if there were something to do beyond that, I imagine that it would be harder to do.

How does Kenya-Tweet connect to other work, either in DH or in its disciplinary field?

I get the sense that Kenya-Tweet’s connections with other work are mostly on a methodological level. I know that other DH researchers at MSU (for example, Dr. Liza Potts and Kristen Mapes) have used TAGS (the enhanced Google Sheet that provides tweet information for Kenya-Tweet), and that Dr. Ethan Watrall has used some of the same mapping software to create an atlas in one of his archaeology classes.

What does Kenya-Tweet contribute to the larger body of knowledge in its disciplinary field? In the interdisciplinary field of digital humanities?

I think the biggest contribution that Kenya-Tweet makes to both digital archaeology and digital humanities is an invitation: How could archaeologists—and humanists more broadly—use these methods to answer important questions in our fields? Even if this project is (currently) more digital than humanities, it is a really, really compelling use of the digital, and I think it’s worth figuring out how to bring the humanities back into it.

Could I see using Kenya-Tweet as a model in my own work?

Absolutely. Kenya-Tweet is exactly the sort of DH project that I needed to shape my thinking about what I could produce on my own. On a very basic level, it’s helping me transition to thinking about projects rather than articles. While I’ve toyed with a couple of ideas of how I could archive or represent some of the Twitter data I’ve studied, I’ve never let any of those thoughts come anywhere near fruition. The first thing that Kenya-Tweet is helping me do is to consider what researchers and (especially?) practitioners in my field could gain from a DH-style representation of educational Twitter data.

That said, I know that to do so effectively, I’ll have to respond to the invitation I left for myself in answering the last question. What important question(s) would “live-mapping” the tweets I study answer? Or could I remove the “live” part and still gain something from an interactive map that captures a slice of teachers’ (or Mormons’) tweets over a certain period of time?

I think there are questions out there, but—like Brian—I don’t entirely know what they are. So, onward and upward! Time to start brainstorming what I could accomplish if I were to adapt Kenya-Tweet for my own work.