The Ingroup is interested in exploring the effects of different conversational environments on subconscious forms of intergroup bias. Specifically, we would like to study whether markers of bias are more pronounced when members of different groups engage in face-to-face conversation or when they interact in a computer-mediated setting.

Our prediction is that participants in face-to-face conversation will show less evidence of bias, for several reasons. First, face-to-face conversation requires both parties to attend to more facets of maintaining their own “face” and that of their partner, and this work should attune them to perceive the other person as an individual rather than merely as an example of a group. Second, face-to-face interaction provides far more contextualizing information (facial expressions, body language, and so forth) that can help create a personal connection transcending simple social categorization. However, we acknowledge that the amount of contextual information available in computer-mediated forms of communication varies widely.

To study this question, we intend to identify polarizing issues of importance to Cornell students and then recruit students who feel strongly about those topics to participate in an experiment. Participants will be given a limited period of time to interact with a person who belongs to an opposing group, and they will be asked to identify a creative way the two of them could compromise on the issue at hand. Afterward, participants will fill out a survey rating their own communication and that of the other party.

We intend to measure two types of markers of intergroup bias, both of which are established in the social-scientific literature: infra-humanization and linguistic bias. Infra-humanization is a subtle form of bias in which ingroup members subconsciously perceive the outgroup as less than fully human. To measure it, we will ask each participant to write down the emotions they felt and expressed in the conversation, as well as the emotions they believe the other person felt and expressed. If ingroup members list more complex secondary emotions, such as guilt or indignation, for themselves and more animalistic primary emotions, such as anger, for the outgroup, that would indicate active infra-humanization. Another option would be to ask participants how typical they felt their conversation partner was of the outgroup as a whole.
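To make this scoring procedure concrete, here is a minimal sketch of how the emotion lists might be tallied. The word lists and the index formula are illustrative assumptions for this sketch, not validated scales from the proposal:

```python
# Illustrative sketch: compare how many secondary (uniquely human) vs.
# primary (animalistic) emotions a participant attributes to themselves
# vs. to their outgroup partner. The word lists are hypothetical examples.

SECONDARY = {"guilt", "indignation", "nostalgia", "shame", "admiration"}
PRIMARY = {"anger", "fear", "joy", "surprise", "disgust", "sadness"}

def secondary_ratio(emotions):
    """Fraction of a participant's listed emotions that are secondary."""
    emotions = [e.lower() for e in emotions]
    sec = sum(e in SECONDARY for e in emotions)
    pri = sum(e in PRIMARY for e in emotions)
    total = sec + pri
    return sec / total if total else 0.0

def infra_humanization_index(self_emotions, partner_emotions):
    """Positive values mean secondary emotions are reserved for the self."""
    return secondary_ratio(self_emotions) - secondary_ratio(partner_emotions)

# Example: secondary emotions listed for self, only primary for partner.
idx = infra_humanization_index(
    ["guilt", "indignation", "anger"],
    ["anger", "disgust"],
)
```

In a real analysis the word lists would come from the infra-humanization literature rather than being hard-coded, and codings would be checked by multiple raters.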

Second, we will record the conversations of our participants (audio recordings for the face-to-face condition, chat logs for CMC) in order to search for evidence of linguistic bias based on Semin and Fiedler’s Linguistic Category Model. Under this model, we would expect references to the outgroup to follow a specific pattern: abstract terms used to describe negative qualities (e.g., “pro-choicers are evil”) and concrete terms used to describe positive qualities (e.g., “that pro-choicer picked up the woman who fell down”). The opposite pattern would hold for the ingroup (e.g., “pro-lifers are righteous”; “that pro-lifer kicked the woman who fell down”).
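As a sketch of how coded transcripts could be analyzed, the snippet below compares mean abstraction for positive versus negative outgroup references. The four category weights (descriptive action verb = 1 up to adjective = 4) follow Semin and Fiedler’s model; the example codings themselves are invented for illustration:

```python
# Illustrative sketch: coders tag each outgroup reference with an LCM
# category and a valence; higher mean abstraction for negative outgroup
# statements is the predicted bias pattern. Example codings are invented.

# LCM abstraction weights: descriptive action verbs (DAV), interpretive
# action verbs (IAV), state verbs (SV), adjectives (ADJ).
ABSTRACTION = {"DAV": 1, "IAV": 2, "SV": 3, "ADJ": 4}

def mean_abstraction(codings, valence):
    """Mean LCM abstraction score of codings with the given valence."""
    scores = [ABSTRACTION[cat] for cat, val in codings if val == valence]
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical coded outgroup references from one transcript, e.g.
# ("ADJ", "neg") ~ "pro-choicers are evil",
# ("DAV", "pos") ~ "that pro-choicer picked up the woman who fell down".
outgroup = [("ADJ", "neg"), ("SV", "neg"), ("DAV", "pos"), ("IAV", "pos")]

neg = mean_abstraction(outgroup, "neg")
pos = mean_abstraction(outgroup, "pos")
bias = neg - pos  # positive value matches the predicted outgroup pattern
```

For the ingroup, the same computation would be expected to yield the reversed sign.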

There are numerous questions we will need to settle before going forward with this study. What type of computer-mediated communication is best suited to this kind of study? What ingroup-outgroup divisions will be most salient to the people in our likely participant pool? What resources will we need to carry out the study? Should we use more than one type of CMC? How can we be sure that any difference between CMC and face-to-face conversation isn’t merely the result of an anonymity effect? If it is, is that a problem for our theory or a component of it? Finally, from a communication point of view, what theory will we ground our predictions in?
