Overview & Funding
Our laboratory investigates the mental processes that allow humans to live collectively, with particular interest in how they facilitate or inhibit social change. We conduct experiments to better understand the social cognitive and neural processes that underlie conformity and dissent, intergroup biases, and moral decision-making. It is our hope that by illuminating these processes, it may be possible to reduce some of groups’ most harmful proclivities: mindless compliance, discrimination, and moralistic aggression.
Our
research is funded by the National Science Foundation, the John Templeton
Foundation, the Social Sciences and Humanities Research Council of Canada, and
a Collaborative Opportunity Research Grant from Lehigh University. In the past, we have received funding from Defence Research and Development Canada and held several Paul J. Franz Pre-Tenure Faculty Fellowships from Lehigh University.
For
the especially interested, what follows is a detailed description of our
research program.
The Psychology of Dissent
Much
of my research focuses on the very beginnings of social change. I am interested in how individual people decide
to speak up and challenge the status quo within their groups. Before social movements develop, small
minorities – sometimes of one – start by raising issues and criticizing their
groups. Although dissent is vital for
healthy civic and collective functioning, the psychological processes that
underlie dissent decisions have not, until recently, been well understood. I investigate when and why group members
decide to dissent. Specifically, I
employ a framework, the normative conflict model (NCM; Packer, 2008, 2011; Packer & Miners, in press), that draws on theories in social psychology
and economics to explore the cognitive and motivational factors that cause
members to challenge their groups. Although often glorified in the abstract and
sometimes admired after the fact, dissent is typically met in the moment with suspicion, denigration, and even material or physical punishment. These reactions are predictable from standard
theories of group functioning, which posit that conformity to group norms is
motivated by the desire to exemplify a group identity and to maintain
collective efficacy and cohesion. But
they raise a puzzle: given the costs often associated with dissent, why does
anyone do it?
The
normative conflict model proposes that a key to understanding dissent decisions
lies in collective identification – the extent to which individuals feel invested in, place value on, and self-categorize as members of a group. By
adopting a social category as an important component of their identity and
experiencing a sense of concomitant commitment, strongly identified group
members are more likely than other people to take collective interests into
account during decision-making and, therefore, to act in ways that are
beneficial to the group. On this basis, I hypothesize that although strongly
identified members are generally motivated to perceive and present their group
in the most positive possible light, they may be willing to challenge social
norms and to criticize their group if they believe it to be in the collective
interest. My empirical work shows that
this is the case. Packer and Chasteen (2010;
Packer 2009; Packer & Miners, 2012) found, for example, that strongly
identified group members are willing to express concern regarding group norms
when they perceive those norms to be harmful to the group. Other tests of the normative conflict model
conducted by outside laboratories have found that identified members are
willing to dissent when their group’s behavior fails to live up to their
collective ideals (e.g., Crane & Platow, 2010; Tauber &
Sassenberg, 2012). In ongoing research,
my lab continues to extend the NCM, examining how competing goals for stability
vs. change influence dissent decisions (Packer & Miners, in press;
Packer, in press; Packer, Fujita & Herman, in press; Packer, Fujita
& Chasteen, in press).
Intergroup Bias
A
second line of research in the lab investigates intergroup biases: the
pervasive tendency people have to preferentially attend to, evaluate, and reward
their own groups at the expense of outgroups.
Although the past 50 years have seen a substantial decline in overtly
prejudicial attitudes in the United States, massive disparities (e.g., in
education, health, employment and justice) persist between groups. Understanding how and why disparities are
maintained despite positive shifts in intergroup attitudes is a critical task
for social scientists. My lab is
currently developing and testing a novel cooperative contingencies model for
predicting and reducing intergroup bias among otherwise low-prejudiced people.
The
logic of the model is as follows: Life
in human societies hinges on cooperation – the ability of people to coordinate
their actions in mutually beneficial ways.
But cooperation is risky, often rendering people vulnerable to
exploitation. To mitigate risks and foster trust, humans have developed
mechanisms to enhance cooperative opportunities. Shared groups function as one such mechanism.
Specifically, humans show a marked
propensity to cooperate with fellow ingroup members, extending the reciprocity
and generosity that allow for collective enterprise. Because people can generally expect
greater cooperation from fellow group members, selectively affiliating and
coordinating with ingroup rather than outgroup members is often a pragmatic
decision, even for otherwise unprejudiced people.
Importantly,
however, we hypothesize that incentives to preferentially affiliate with
ingroup members are likely to be contingent on the full set of cooperative
affordances available in a particular context.
If shared group memberships afford the best available means of
facilitating cooperation between individuals, people are likely to
differentially rely on ingroup members.
In contrast, when groups are not likely to enhance cooperation or when
other effective mechanisms for fostering and sustaining cooperation are
present, people may be less likely to exhibit intergroup biases. There are a variety of mechanisms apart from
group bonds that incentivize cooperation, including third-party punishment or
reward and, at a societal level, policing and legal sanctions. We have been
testing the provocative hypothesis that people exhibit reduced preferences to
affiliate and coordinate with ingroup (vs. outgroup) members when these sorts
of alternate social structures effectively support cooperation between
individuals. My lab has conducted a
series of initial studies that collectively provide compelling evidence for
these predictions. We find, for example,
that societies in which people can trust cooperation-enhancing social
institutions (e.g., the police and legal system) exhibit less bias toward immigrants than do more corrupt societies.
Further, experimentally creating cooperation-enhancing social structures
in the lab reduces intergroup bias among low-prejudiced individuals (Packer
& Kugler, in prep).
Other
projects conducted in collaboration with Dr. Alison Chasteen (University of
Toronto) focus on how shifts in identity – as people anticipate leaving and joining groups – affect attitudes and stereotyping, as well as self-evaluations
(e.g., Packer & Chasteen, 2006; Remedios, Chasteen & Packer,
2010; Packer, Chasteen & Kang, 2011). Research conducted in
collaboration with Drs. Jay Van Bavel (New York University) and William
Cunningham (University of Toronto) employs methods from cognitive neuroscience
to investigate brain processes underlying the rapid formation of ingroup
preferences when people join new groups. Using functional magnetic resonance imaging (fMRI), these studies suggest that even the most minimal association with a novel group leads to heightened activity in evaluative
brain regions, including the orbitofrontal cortex and amygdala, when viewing
ingroup (vs. outgroup) faces (Van Bavel, Packer & Cunningham, 2008; Van
Bavel, Packer & Cunningham, 2011). We have also found that a novel
group membership modulates activity in an area of the fusiform gyrus critical to
face processing (commonly known as the ‘fusiform face area’), such that this
region responds preferentially to ingroup (vs. outgroup) faces. These studies shed light on the neural
mechanisms that underlie group-based biases and, ultimately, discrimination (Packer
& Van Bavel, in press).
Moral Evaluation and Decision-Making
Several
recent projects in our lab are also investigating the processes by which people
make moral evaluations, and the influence of those evaluations on decisions and
behavior. For decades, theorists
asserted that moral decisions were determined by conscious reasoning. More
recently, however, psychologists have argued that many decisions stem from
moral intuitions that are automatically triggered and enacted prior to
reasoning. A line of research conducted in collaboration with Drs. Michael Gill
(Lehigh University) and Jay Van Bavel (New York University) examines a
potential reconciliation of these two perspectives. We propose that: (1) conscious moral beliefs
can affect decision-making and action; (2) these beliefs are not necessarily
automatically activated, but are influential when people operate in a moral
mindset; and (3) these beliefs influence automatic attentional and construal
processes, which can increase their influence on behavior (Gill, Packer
& Van Bavel, 2012; see also Van Bavel, Packer & Ray, under
review). Our research thus takes a person-by-situation approach to moral
psychology, and proposes that moral mindsets – which can be adopted intentionally but can also be triggered by environmental cues, including prosocial behavior by other people – increase the strength with which moral
beliefs predict decisions (Gill et al., 2012; Packer, Gill, Chu & Van
Bavel, in prep.). In alternative (e.g., pragmatic) mindsets, moral beliefs may
go unexpressed (see also Van Bavel, Packer, Hass & Cunningham, 2012).
This model can account for instances when people fail to act on their values
because they do not consider the moral implications of their actions, and suggests
that ‘moral mindlessness’ may be overcome when moral mindsets give purpose to
consciously endorsed beliefs.
A
second project conducted in collaboration with my senior Ph.D. student, Justin
Aoki, is examining how morally relevant decisions are influenced by non-moral
(e.g., economic) considerations.
Currently dominant models in moral cognition posit that morality
represents a distinct evaluative dimension, such that moral concerns are
experienced as incommensurate with non-moral value (e.g., moral principles
cannot be converted into financial value).
Research on ‘taboo trade-offs’ appears to support this. People often report that they are unwilling
to commit or allow certain moral transgressions (e.g., to kill another person
or rig an election) for any amount of money.
However, whereas almost all extant research has examined these sorts of
trade-offs in terms of people’s willingness to forgo benefits, almost no
research has examined willingness to engage in actions to avoid costs. Well-documented asymmetries between costs and
benefits in many other psychological domains (i.e., losses are almost always
more psychologically powerful than gains) suggest that moral/non-moral
trade-offs may be experienced as less taboo when it comes to avoiding losses. In a series of studies, we have found that
people are indeed significantly more willing to commit moral transgressions to
avoid financial losses than to garner gains (Aoki & Packer, in
prep). Importantly, this pattern is
predictable from established models of economic decision-making (e.g., prospect
theory), suggesting that morality does not necessarily represent an entirely
distinct evaluative dimension, completely incommensurate with non-moral
value. Moving forward, we are attempting
to situate moral vs. non-moral forms of valuation within a broader model of
decision-making.
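For readers unfamiliar with prospect theory, a minimal sketch of the loss-gain asymmetry it predicts is given by its standard value function, shown here with the original parameter estimates reported by Tversky and Kahneman (1992) rather than values drawn from our own studies:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0,\\
-\lambda\,(-x)^{\beta} & \text{if } x < 0,
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25.
\]

Because \(\lambda > 1\), a loss of a given size looms roughly twice as large as an equivalent gain, which is the kind of asymmetry that would lead people to tolerate a moral transgression more readily when it averts a loss than when it secures a gain.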
– July 2013