Games & Embodied Cognition

What Is It Like to Be a Cat-Person?

Jonne Arjoranta is a PhD candidate at the University of Jyväskylä, Finland, where he researches and teaches games. His main research interest is studying the structures of meaning found in games, and he is working on a dissertation on game hermeneutics. He has published on play, meaning in games, game definitions and the media perception of games and violence.


I’ve played The Elder Scrolls V: Skyrim (Bethesda 2011) for 81 hours now, with three different characters. I’ve been an elf, an orc and a Khajiit. In Skyrim, I can occupy the body of a magical creature from a fantastic race with a different gender than mine. Yet it seems that no matter the differences between the fantastic races, the basic experience of what it is like to be each of them is essentially similar. A Khajiit might see in the dark, an Argonian may breathe water, and a Breton might resist magic, but they are all still humans wrapped in a layer of the fantastic and endowed with supernatural power. The basic experience of being them is still the same.

However, this is a dubious representation of embodiment. A research tradition most often known as embodied cognition maps the ways our bodies affect how we think. Proponents of embodied cognition argue that our mental processing is affected by the fact that we are physical entities with bodies of a very particular kind. In this model of cognition, unlike in Skyrim, it matters what our bodies are like and how they can interact with the world. This embodied model of cognition stands in direct opposition to the Cartesian model, which clearly separates body from mind.

Seeing our body and mind in connection with each other is not a new idea. The phenomenological tradition from Husserl and Heidegger to Merleau-Ponty looks at cognition from an intentional standpoint. In clear opposition to Descartes, Merleau-Ponty (2005 [1945]) shows how perception is conditioned by the fact that we are beings with bodies. We perceive all other objects through our bodies: “I observe external objects with my body, I handle them, inspect them, and walk around them. But when it comes to my body, I never observe it itself” (104).

Another philosopher interested in the limits our physical nature places on our ability to form conceptions is Thomas Nagel (1974). In his seminal essay, “What Is It Like to Be a Bat?” he argues that as humans with certain kinds of bodies, we form certain kinds of concepts of the world. Perhaps even more importantly, he argues that we are unable to form other kinds of concepts, because we lack the experiential anchors that those concepts could be latched onto. For instance, perhaps a Khajiit philosopher with perfect night vision would not have used light as a central metaphor for reason, as Descartes did.

Researchers of embodied cognition argue that because cognition needs to work in time-sensitive, real-world situations with limited resources, it needs to conform to those limitations. To the extent that we share similar bodies and similar environments of experience (constant physical laws and similar physical needs), our embodied experience of the world is similar (Mandler 1992). Even when our bodies differ and individual experiences of embodiment vary widely, the sense of being embodied at all is largely universal.

These assumptions would lead to two conclusions with regard to Khajiit, Argonians and Bretons:

  1. Their cognition would have also formed in time-sensitive, real-world situations with limited resources, and would also be embodied.
  2. Their cognition would be subject to different limitations and possibilities, because they do not share identical physical needs and capabilities.

Khajiit see in the dark and Argonians can breathe under water. These two qualities would radically shape their cognition, just as our primary reliance on sight has shaped ours.

Proponents of embodied cognition often attach slightly different arguments to the idea. Wilson (2002) lists six common ones:

  1. cognition is situated;
  2. cognition is time-pressured;
  3. we off-load cognitive work onto the environment;
  4. the environment is part of the cognitive system;
  5. cognition is for action;
  6. off-line cognition is body based. 

Regardless of the differences between various approaches to embodied cognition, there is a unifying idea behind these approaches: that the form and structure of human cognition is, at least to some extent, based on our bodies.

One of the consequences of our cognition being built on our bodily experiences is that we are good at tasks where we can capitalise on that relation to our body. A very simple example of this is using our fingers to count. A game-related example is Kirsh and Maglio’s (1994) study of off-loading in Tetris. They found that experienced players off-loaded cognitive processing by rotating the falling pieces more often than inexperienced players did, using the screen to do work they would otherwise have to do in their heads. They call these rotations “epistemic actions”, because their primary purpose is to gain new information rather than to advance the game.
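As a loose illustration of this kind of off-loading (a sketch with hypothetical names, not Kirsh and Maglio’s actual model), the snippet below contrasts the two strategies: computing a rotation “in the head” versus asking the game to perform it and simply perceiving the result.

```python
# A loose sketch of off-loading in a Tetris-like game (hypothetical names,
# not Kirsh and Maglio's model). A piece is a small grid of 0s and 1s.

def rotate_clockwise(piece):
    """Rotate a piece 90 degrees clockwise."""
    return [list(row) for row in zip(*piece[::-1])]

L_PIECE = [
    [1, 0],
    [1, 0],
    [1, 1],
]

# Strategy 1: simulate the rotation internally before acting,
# analogous to mentally rotating the piece.
imagined = rotate_clockwise(L_PIECE)

# Strategy 2 (epistemic action): press rotate and look at the screen.
# The game performs the computation; the player only has to perceive it.
class Game:
    def __init__(self, piece):
        self.piece = piece

    def press_rotate(self):
        self.piece = rotate_clockwise(self.piece)
        return self.piece  # what the screen now shows

perceived = Game(L_PIECE).press_rotate()

# Both strategies yield the same information,
# but the second costs less mental work.
assert imagined == perceived
```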

Handedness

All of my characters in Skyrim had bodies that differed from mine, but all three shared a quality that I do not have: they were all right-handed. I can customise the characters in minute detail, but handedness cannot be changed. In Skyrim, the main weapon is wielded with the right hand, leaving the left hand free for secondary tasks, like carrying a shield.

I play Skyrim on my PC with a mouse and a keyboard. Because Skyrim uses the first-person perspective by default, it also borrows the default control scheme of many first-person shooters: typically, the left mouse button fires whatever gun the character is holding, and the right mouse button is used for aiming or secondary fire modes.

There is some room for customisation, but by default in Skyrim the left mouse button is used for attacking with a weapon in the right hand and the right mouse button is used for blocking. Because all the characters are right-handed, and the main weapon is wielded in the right hand, the player ends up controlling the right hand with the left mouse button and the left hand with the right mouse button.

This also applies to casting spells. Spells are equipped by hand, so each hand can control one spell. Characters ready to cast spells stand with their hands lifted up, clearly visible on the screen. Because the control mapping follows the same pattern here, the right hand is controlled with the left mouse button and vice versa.

Playing Skyrim with a gamepad avoids this problem: the left trigger controls the left hand and the right trigger controls the right hand. The same left-right correspondence would be easy to achieve with a mouse as well, but the convention of reserving the left mouse button for primary tasks works against the simplest solution of swapping the default bindings.

Theories of embodied cognition would predict that control methods which draw on our embodied experience of our own bodies are easier to learn and use. The simplest way of achieving this is to map the buttons on the controller so that they correspond to the player’s body.
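As a rough sketch of what that principle means for the Skyrim example (hypothetical names and a generic mapping table, not Skyrim’s actual configuration format), the snippet below contrasts the convention-driven mouse mapping with a body-congruent one, and shows how either could be mirrored for a left-handed player.

```python
# A rough sketch of button-to-hand mapping (hypothetical names,
# not Skyrim's actual configuration format).

# Default FPS convention: the left mouse button is the "primary" button,
# so it ends up controlling the avatar's RIGHT (weapon) hand.
CONVENTION_MAPPING = {
    "mouse_left": "avatar_right_hand",   # attack with the main weapon
    "mouse_right": "avatar_left_hand",   # block, shield, or off-hand spell
}

# Body-congruent mapping: each button controls the hand on the same side,
# as a gamepad's triggers do.
BODY_CONGRUENT_MAPPING = {
    "mouse_left": "avatar_left_hand",
    "mouse_right": "avatar_right_hand",
}

def mirror(mapping):
    """Swap the left and right buttons, e.g. for a left-handed player."""
    swap = {"mouse_left": "mouse_right", "mouse_right": "mouse_left"}
    return {swap[button]: hand for button, hand in mapping.items()}

if __name__ == "__main__":
    print(BODY_CONGRUENT_MAPPING)
    print(mirror(BODY_CONGRUENT_MAPPING))
```

The point of the sketch is only that the correspondence between the input device and the avatar’s body is a design decision, and one that can be flipped.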

Embodiment is also discussed in terms of immersion. Theories of immersion examine how games try to make players immerse themselves in the game world, in effect forgetting the mediation between the game world and the player. This metaphor is problematic in that it sees the relation as one-way (Calleja 2011). A better understanding of the relation would be to see the game environment as part of the player (in the player’s mind) and the player as part of the environment (through their avatar). This would also tie the concept to embodiment: players do not stop being embodied when they enter games. Actions in games are also affected by the players’ bodies.

Controllers and Embodiment

As the example with Skyrim shows, issues of embodiment can be taken into account even with traditional control methods. However, players today have a wealth of new controllers, from motion-sensing controllers like PlayStation Move to motion-sensing cameras like Kinect. Modern controllers can give haptic feedback and receive voice commands.

New control methods enable new ways of interacting with games, making possible interactions that are more closely related to our embodied experiences of the world. New players may have a gentler learning curve if they can learn to control a game by simulating digital actions with familiar physical actions.

An example of trying to bridge the gap between the physical and the digital is the PlayStation Move Shooting Attachment, a gun-like add-on that attaches to the Move controller. With the attachment, the player can hold the controller like a gun, bringing the experience closer to holding a real gun.

However, the player must still use control sticks both to orient themselves in the game world and to move around. Combining the gun-like controller with stick-based orientation and movement can be a disorienting experience.

Antle, Corness and Droumeva (2009) suggest some guidelines for designing embodied interfaces:

  1. Different interactions should be easily discovered by chance by a user trying out the system.
  2. There should be a structural isomorphism between the kinds of actions the user does and the kinds of actions that are simulated.
  3. There should be clear feedback from the actions. This makes it easier to discover the possible forms of interaction by chance and to create a mental map of the structure of possible actions.

They also note that interaction with an embodied system might lead to high levels of performative knowledge (i.e. ability to perform actions) without the corresponding explicit knowledge (i.e. ability to describe how to perform the actions). To me, this seems like a perfect example of how our cognition is embodied.

While the exact details and mechanisms of embodied cognition are still contested, it seems like a useful perspective for game studies. If our cognition is situated, time-pressured, body-based, action-oriented and done in conjunction with the environment, it is well suited to dealing with games. It could also explain part of why we enjoy games: we have evolved to solve these kinds of problems. In any case, game researchers and designers have to keep our bodies in mind. All games are played by people whose minds are shaped by their bodies.




Discussant’s Reply

 James Paul Gee is Mary Lou Fulton Presidential Professor of Literacy Studies and Regents’ Professor at Arizona State University. His recent publications, including What Video Games Have to Teach Us About Learning and Literacy (2003, Second Edition 2007), have focused on video games, language, and learning.

We humans learn from experience.  We store records of our experiences, in an edited fashion, in our brains.  Editing means that we stress some aspects of the experience and background others, based on how we have paid attention during the experience.

An experience is best for learning when we have clear goals in the experience for actions we truly care about.  We also often need to be helped to pay attention to the right things in the right ways (this is what teachers, mentors, social groups, and tools are for).

We use our stored experiences to plan, think, and make hypotheses when we have new experiences.  As we get more and more experience in a given area, we gradually generalize from these experiences and store more general patterns (associations among elements of experience) in our heads.  Cognition (and emotion) is embodied precisely because we humans need and use our bodies to have experiences in the world and our neural networks are largely formed by experience.

An avatar in a video game is a surrogate body.  Thus, an avatar can represent embodied cognition in a new and different way.  When players can manipulate avatars in quite minute ways, they often feel that their real body has been extended into the game world.  This is why young children often jump up when they make a game character jump.  This extension of the body is similar to the way blind people come to feel that their bodies extend to the end of their canes.

The player’s surrogate body can allow the player to experience the game world in ways that are good for learning.  Games can complement the real world for embodied learning.  Nonetheless, games represent a new and different form of embodied cognition, one that we need to study in its own right, a project to which Jonne Arjoranta’s intriguing paper contributes.

My own theory is that avatars that we players can tightly control allow us to take on “projective identities”.  Gamers have their own real-world identity (actually a number of different ones).  A game’s avatar comes with an identity set up by the game’s designers and also sometimes formed in part by the player’s choices.

The gamer has his or her own identity and connected goals when playing a game.  This is the gamer’s “I”.  The gamer also confronts goals that the game demands to be accomplished.  These are a project that the gamer must accept if he or she wants to play the game.  Often these goals are caught up in the identity of an avatar, for example, Garrett in the Thief games.

However, gamers can project their own goals and values onto the avatar and meld them in their own way with the project the game hands them.  They can take up the game’s goals as their own and seek to accomplish them in their own ways.  They can even establish goals all their own, goals not demanded by the game.

Gamers can, thus, project themselves onto the game’s project, meld the two, and make the game their own.  The avatar now belongs partly to the game’s designers and partly to the gamer.  Alongside the player (“I”) and the avatar (e.g., Garrett) arises a new melded identity, what I call a “projective identity”: “I/Garrett”.  A new being has entered the world (a real-virtual being).  The full implications this identity has for embodied cognition, problem-solving, and learning remain to be studied.

Note: The claims in this response and the references that back them up are discussed at much greater length in Gee (2004, 2007, 2013).

Discussant References

Gee, J. P. (2004). Situated Language and Learning: A Critique of Traditional Schooling. London: Routledge.

Gee, J. P. (2007). What Video Games Have to Teach Us About Learning and Literacy. Second Edition. New York: Palgrave Macmillan.

Gee, J. P. (2013). Good Video Games + Good Learning: Collected Essays. Second Edition. New York: Peter Lang.


[Beginning in January 2014, every essay and commentary we publish on FPS will receive a response from a member on our board of discussants. Articles are paired up with a discussant based on subject-matter expertise and availability. The idea is to propagate a critical, constructive conversation that enriches both the author’s and the readers’ engagement with the text.]

Works Cited

Antle, A. N., Corness, G., & Droumeva, M. (2009). What the body knows: Exploring the benefits of embodied metaphors in hybrid physical digital environments. Interacting with Computers, 21(1-2), 66–75. doi:10.1016/j.intcom.2008.10.005

Calleja, G. (2011). Incorporation: A Renewed Understanding of Presence and Immersion in Digital Games. DiGRA 2011: Think Design Play. Utrecht: Utrecht School of the Arts.

Kirsh, D., & Maglio, P. (1994). On Distinguishing Epistemic from Pragmatic Action. Cognitive Science, 18(4), 513–549. doi:10.1207/s15516709cog1804_1

Mandler, J. M. (1992). How to Build a Baby: II. Conceptual primitives. Psychological Review, 99(4), 587–604. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/1454900

Merleau-Ponty, M. (2005). Phenomenology of Perception. London: Routledge. Orig. Phénoménologie de la perception, 1945.

Nagel, T. (1974). What Is It Like to Be a Bat? Philosophical Review, 83(4), 435–450. Retrieved from http://www.jstor.org/stable/2183914

Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625–636. Retrieved from http://www.ncbi.nlm.nih.gov/pubmed/12613670