We recently talked to William McAuliffe about his paper, "Does cooperation in the laboratory reflect the operation of a broad trait?", which is scheduled for publication in an upcoming issue of the European Journal of Personality. William is a PhD student at the Evolution and Human Behavior lab at the University of Miami.
Read more about how cooperation measured in the lab compares to other measures of cooperation below!
Q: Hi William! Can you tell us a little about your study?
Sure! Generally, what the study was trying to get at was whether individual differences in cooperative behavior (i.e., how much people are willing to pay a cost themselves to benefit other people) in economic game paradigms are associated with individual differences in cooperative tendencies as measured by self-report and peer-report. This question was of interest to us because our lab, like many other labs, uses economic game paradigms quite a bit. They’re probably the most popular measures of cooperation these days, and basically they’re just laboratory measures that allow people to make decisions about whether to behave fairly or generously with other people without the usual kind of external incentives to do so. So in an everyday context there might normally be some social pressure to behave cooperatively or at least be polite, whereas in these games the idea is that you try to experimentally remove that by having participants make one-shot decisions about whether to share money with anonymous strangers. At least at first blush, then, the games allow researchers to get around the typical confounds in cooperation research about why people are behaving prosocially. The games have also become quite popular because other behavioral measures aren’t as easy to implement, so economic games have opened up behavioral measurement to researchers in areas, like personality psychology, that don’t typically use behavioral measures.
We were interested in seeing how prosocial behavior in these economic game paradigms compared to self- and peer-reports, because the latter are based on how people typically behave in general. It’s kind of a stark comparison between how you behave in this very specific laboratory situation, in which the incentives are different from what they normally are, and how you behave broadly in everyday life, when social incentives are usually there.
What we found is that, unlike in some other studies that have looked into this question, economic game behavior was related to self- and peer-reports, but only when people were playing the games for the first time. So when we brought people back to the lab, on average about a month later, to play the same games again, their behavior tended not to correlate with the self- and peer-reports. Doing that analysis was something that the action editor, Anna Baumert, suggested we try, pointing out that if the games reliably measure a stable trait then they should still correlate with self- and peer-reports even after people get experience with them. This analysis gave us some additional evidence for what we thought could be going on in these economic games: even though we are putting people into relatively anonymous contexts, they’re still behaving as if the regular rules of social life apply, because they’re just not used to making cooperation decisions that have no social consequences. That is, they’re still behaving as if someone might praise them for behaving generously or think they’re stingy for behaving selfishly. Or they may think, “if I share with this other person, then maybe they’ll share with me down the line.” So they just haven’t gotten used to this really bizarre situation in which you’re being asked to cooperate with other people but there is really none of the usual social pressure to do so, and that just takes some time. But after participants got paid after their first session, they realized that no one was grateful to them for being nice, and no one was mad at them for being selfish. So variation in people’s cooperation decisions the second time around wasn’t as related to what they report doing in everyday life, because what people are doing in everyday life is trying to maintain relationships with people who can get angry with them for behaving poorly or be happy with them for behaving generously.
So what our findings suggest is that when you have naïve participants, you are measuring the same constructs that the self- and peer-reports measure, even though the game is a very specific situation in a new context. The second time around, less so.
The biggest thing our findings brought up for me is that, because these economic game paradigms are fairly easy to program, they get used a lot on online platforms like MTurk, where a lot of the participants have taken hundreds if not thousands of studies. There has been evidence that when you introduce experimental manipulations that should change people’s behavior in these games, those manipulations don’t work if you use experienced participants. There are two ways to interpret that. One is that the manipulation doesn’t represent a real psychological effect, which could be true. But it could also be that the meaning of the games changes as people play them more. The implication for personality research is that if you want to measure cooperation behaviorally, you need to ask yourself what you are trying to generalize to. Am I trying to look at individual differences in how people behave in everyday life, or am I trying to figure out how people behave based on the incentives they are given? Our understanding of how other constructs are related to cooperative tendencies could be distorted if researchers don’t take this into account.
Q: What prompted this study?
The study was actually part of a grant being executed by the lab I am a part of, the Evolution and Human Behavior lab at the University of Miami. We were using a lot of economic game measures in all of the critical experiments. And so some of the grant reviewers, very astutely, asked, “Well, how valid are these games, really? They seem to be sort of taken for granted.” So we designed this study to speak to that, and honestly I think we came into it somewhat skeptical that the games would really map onto self- and peer-reports, because we had always thought these games might carry some demand characteristics (e.g., that people just want to please the experimenter and that is why they are behaving nicely). But then as we thought more deeply about it, I think we realized that there really is an important analogy between wanting to please an experimenter in the lab and just wanting to please people in your life in general. Once we thought about it from that perspective, I think in our own minds we became a little more optimistic about how well the economic game measures might map onto other cooperation measures.
Q: What is the next step in your research?
The topic of this paper is definitely something we are still interested in. One of the things I’ve become attuned to, from seeing how people’s behavior changes from when they are naïve to when they become experienced, is that perhaps some of the variance that is not relevant to people’s cooperative tendencies per se is attributable to people’s learning style. So how quickly people learn, and how they learn to adjust to the situations they are in: this is a topic that is just starting to emerge in the cooperation literature. One implication is that even if you explain the games to people in as plain-spoken terms as possible (e.g., “here’s what you should do if you want to be selfish, here’s what you should do if you want to be cooperative”), people just don’t get it right away in this novel context, perhaps because of how they have been socialized to behave in everyday life. So we have become interested in whether we can do some studies that look at how different styles of learning affect cooperation decisions in economic games. For instance, the difference between relying on model-free learning (which is basically like classical reinforcement, where you don’t necessarily understand the relationship between your behavior and the outcome that you like) versus relying more on model-based learning (which is explicitly representing the relationship between your behavior and what you want, like “Oh, I explicitly understand that cooperating in this circumstance does not promote my self-interest because of the anonymous nature of the game, but in this other game cooperation actually can be helpful because the other person can reciprocate later”). If you are engaging in more model-based learning, then you might habituate to the game’s incentive structure more quickly. That’s one possible future direction.
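The model-free versus model-based distinction described above can be sketched in a few lines of code. This is purely illustrative and not from the paper: the payoff values, learning rate, and function names are all hypothetical, chosen to mirror a one-shot anonymous game in which cooperation is a sure cost with no chance of reciprocation.

```python
def model_free_update(q, action, reward, alpha=0.1):
    """Model-free (reinforcement) learning: nudge the cached value of the
    chosen action toward the reward actually received, with no explicit
    model of why that reward occurred."""
    q[action] += alpha * (reward - q[action])
    return q

def model_based_value(action, payoff_model):
    """Model-based learning: compute an action's value directly from an
    explicit model of the game's incentive structure."""
    return sum(prob * reward for reward, prob in payoff_model[action])

# Hypothetical one-shot anonymous game: cooperating costs 5 with no
# possibility of reciprocity; defecting keeps the 5.
payoff_model = {
    "cooperate": [(-5, 1.0)],  # sure loss, no one can reciprocate
    "defect": [(5, 1.0)],      # keep the endowment
}

# A model-based learner "gets it" immediately from the structure alone.
print(model_based_value("defect", payoff_model))  # 5.0

# A model-free learner starts from habits formed in everyday life and
# only adjusts its cached values gradually, trial by trial.
q = {"cooperate": 0.0, "defect": 0.0}
for _ in range(3):
    q = model_free_update(q, "cooperate", reward=-5)
print(q["cooperate"])  # drifts slowly toward -5, still far from it
```

The point of the contrast is the one made in the interview: a model-based learner adjusts to the game's incentive structure as soon as it is represented, while a model-free learner needs repeated experience, which could look like the session-to-session behavior change the study observed.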
Q: Where do you see yourself more broadly in the near future?
That’s a great question, I wish I knew in greater detail! But generally speaking, I hope to keep doing academic research, mostly on what differentiates prosocial individuals from less prosocial ones. I also have a broader interest in moral development: how people figure out what the right thing to do is, and how that relates to who decides to become interested in civics and become a contributing, prosocial member of the community, who volunteers, who wants to become involved in government, and who is willing to stand up for causes they believe in, things like that. Basically, I hope to continue doing roughly what I am doing now, but situated in an even broader context, to try to get the big picture: not only how selfish you are, but also how you learn right and wrong.
Q: Do you have any advice for young scholars in the field?
I’m finishing my own PhD this year, so to the extent that I have advice, it’s definitely only for people who are just starting out. I guess what I found most useful, and it definitely applies to how I worked on this paper, is to read very broadly. Read a lot not only within your own subdiscipline, but also in other disciplines within psychology, and even in other disciplines in the social and biological sciences. When I started out, all I knew about was a really narrow literature within experimental social psychology that studied what people do if you put them in a certain laboratory situation where you make them feel bad for another person. So my perspective on psychology in general was informed by this very narrow way of doing research on very narrow phenomena. As I moved along, I started reading about other areas of psychology that I wasn’t as familiar with and realized that there is a whole world out there, and people are studying prosocial behavior in all sorts of different ways that are all really useful. Without that broadening experience, the way I thought about this project probably would have been a lot narrower and probably not as insightful. So if there’s a takeaway, it’s that because there is so much out there, so much to read, and we are so pressed for time in our programs, it’s tempting to just say, “I only have time to follow what’s in my little niche and everything else doesn’t really matter.” But I found that the exact opposite is true for me; many of the most informative, enriching experiences I’ve had came from reading things, learning about things, and talking to people that, at the time, I didn’t know would have any particular pay-off. By exposing myself to other people’s creative ideas, I was able to learn from their insights and apply them to my own work. So I guess my advice would be to try to make time to read broadly.